WO2022126427A1 - Point cloud processing method, point cloud processing apparatus, movable platform, and computer storage medium - Google Patents


Info

Publication number
WO2022126427A1
WO2022126427A1 (PCT/CN2020/136819; CN 2020136819 W)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
data
processing method
point
density distribution
Prior art date
Application number
PCT/CN2020/136819
Other languages
English (en)
Chinese (zh)
Inventor
夏清
李延召
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/136819 priority Critical patent/WO2022126427A1/fr
Priority to CN202080070978.9A priority patent/CN114556427A/zh
Publication of WO2022126427A1 publication Critical patent/WO2022126427A1/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/77 - Determining position or orientation of objects or cameras using statistical methods (under G06T 7/70, G06T 7/00 Image analysis)
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tessellation (under G06T 17/00 Three-dimensional [3D] modelling)
    • G06T 7/514 - Depth or shape recovery from specularities (under G06T 7/50 Depth or shape recovery)
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume (under G06T 7/60 Analysis of geometric attributes)
    • G06T 7/90 - Determination of colour characteristics (under G06T 7/00 Image analysis)
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds (under G06T 2207/10 Image acquisition modality)

Definitions

  • the present invention generally relates to the technical field of ranging devices, and more particularly, to a point cloud processing method, a point cloud processing device, a movable platform and a computer storage medium.
  • due to the limitation of the lidar optical scanning mechanism and its line-scanning characteristics, the distribution of the laser point cloud is inherently spatially non-uniform: in the generated 3D point cloud data, points lying on the same plane are unevenly distributed. Because of these objective factors, in the same scene, the number and density of points in the 3D point cloud obtained by scanning the same object on the same plane, or the same object at different distances, are different. This difference strongly affects subsequent detection, segmentation, tracking and other algorithms.
  • one aspect of the present invention provides a point cloud processing method, comprising: acquiring point cloud data collected by a ranging device and point cloud density distribution data of the ranging device, where the point cloud density distribution data is related to the scanning mode of the ranging device, and the point cloud density distribution data includes multiple saliency coefficients used to characterize the distribution of the mapped points of the point cloud data on a reference plane; determining, according to the saliency coefficients corresponding to the mapped points of the point cloud data on the reference plane, a predetermined processing method of the point cloud data, wherein the predetermined processing method includes at least one of a first processing method and a second processing method, the first processing method being used to increase the point cloud density in at least a partial area of the point cloud data, and the second processing method being used to reduce the point cloud density in at least a partial area of the point cloud data; and processing the point cloud data according to the predetermined processing method.
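The choice between the two processing methods can be sketched as a simple per-region rule. This is an illustrative sketch only; the threshold values `low` and `high` are assumptions, not values given in this disclosure:

```python
import numpy as np

def choose_processing(saliency, low=0.2, high=0.8):
    """Pick a processing mode per region from its saliency coefficient.

    Regions whose coefficient falls below `low` are sparsely sampled and
    are marked for the first processing method (increase density); regions
    above `high` are densely sampled and marked for the second processing
    method (reduce density); the remainder are left unchanged.
    """
    saliency = np.asarray(saliency, dtype=float)
    mode = np.full(saliency.shape, "keep", dtype=object)
    mode[saliency < low] = "upsample"     # first processing method
    mode[saliency > high] = "downsample"  # second processing method
    return mode

modes = choose_processing([0.05, 0.5, 0.95])
```

In practice the thresholds would depend on the scanning system and the downstream algorithm's density requirements.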
  • the point cloud processing apparatus includes: a memory for storing executable instructions; and a processor for executing the instructions stored in the memory, causing the processor to perform the following steps: acquiring point cloud data collected by the ranging device and point cloud density distribution data of the ranging device, where the point cloud density distribution data is related to the scanning mode of the ranging device and includes a plurality of saliency coefficients used to characterize the distribution of the mapped points of the point cloud data on the reference plane; determining, according to the saliency coefficients corresponding to the mapped points, a predetermined processing method of the point cloud data, wherein the predetermined processing method includes at least one of a first processing method and a second processing method, the first processing method being used to increase the point cloud density in at least part of the point cloud data and the second processing method being used to reduce the point cloud density in at least part of the point cloud data; and processing the point cloud data according to the predetermined processing method.
  • the movable platform includes: a movable platform body, at least one ranging device and the aforementioned point cloud processing device; at least one ranging device is disposed on the movable platform body , which is used to collect point cloud data of the target scene.
  • Another aspect of the present invention provides a computer storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the aforementioned point cloud processing method.
  • with the point cloud processing method of the embodiment of the present invention, on the premise of keeping the hardware cost unchanged, the point cloud density distribution data of the ranging device, which includes a plurality of saliency coefficients, is acquired, and the predetermined processing method of the point cloud data is determined according to the saliency coefficients corresponding to the mapped points of the point cloud data on the reference plane. This effectively combines the scanning characteristics of the scanning system of the ranging device with the spatial distribution of the point cloud, so that the point cloud data is processed in a suitable manner, thereby effectively improving the distribution uniformity of the point cloud data collected by the ranging device and reducing the influence of the scanning system on the point cloud density. The point cloud data processed by the method can serve as the basis for subsequent algorithms, making them more accurate and reducing processing errors caused by uneven point cloud distribution, and the method of the present application does not increase hardware costs.
  • FIG. 1 shows a schematic structural diagram of a ranging apparatus in an embodiment of the present invention
  • FIG. 2 shows a schematic diagram of a distance measuring device in an embodiment of the present invention
  • FIG. 3 shows a schematic diagram of a scanning pattern of a ranging device in an embodiment of the present invention
  • FIG. 4 shows a schematic diagram of a scanning pattern of a distance measuring device in another embodiment of the present invention.
  • FIG. 5 shows a schematic flowchart of a point cloud processing method in an embodiment of the present invention
  • FIG. 6 shows a schematic diagram of a point cloud distribution saliency map of a ranging device with a first scanning mode in an embodiment of the present invention
  • FIG. 7 shows a schematic diagram of a point cloud distribution saliency map of a ranging device with a second scanning mode according to an embodiment of the present invention
  • FIG. 8 shows a schematic block diagram of a point cloud processing apparatus in an embodiment of the present invention.
  • FIG. 9 shows a schematic block diagram of a movable platform in an embodiment of the present invention.
  • the ranging device includes a laser radar.
  • the ranging device here is only used as an example; other suitable ranging devices are also applicable to this application.
  • the ranging device can be an electronic device such as a laser radar or a laser ranging device.
  • the ranging device is used to sense external environmental information, for example, distance information, orientation information, reflection intensity information, speed information and the like of environmental objects.
  • the ranging device can detect the distance from the detected object to the ranging device by measuring the time of light propagation between the ranging device and the detected object, that is, Time-of-Flight (TOF).
  • the ranging device can also detect the distance from the detected object to the ranging device through other technologies, such as a ranging method based on phase shift measurement, or a ranging method based on frequency shift measurement. This does not limit.
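The TOF principle mentioned above reduces to d = c·Δt/2 for a round trip. A minimal sketch for illustration (not the device's actual signal chain):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(t_emit, t_receive):
    """One-way distance from round-trip time of flight: d = c * dt / 2."""
    return C * (t_receive - t_emit) / 2.0

# A 1-microsecond round trip corresponds to roughly 150 m.
d = tof_distance(0.0, 1e-6)
```

Phase-shift and frequency-shift methods recover the same distance from the modulation of the returned light rather than from a direct pulse timestamp.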
  • the ranging apparatus 100 includes a transmitting module, a scanning module and a detection module. The transmitting module is used for transmitting a sequence of optical pulses to detect a target scene; the scanning module is used for sequentially changing the propagation path of the optical pulse sequence transmitted by the transmitting module; and the detection module is used for receiving the optical pulse sequence reflected back by an object and determining the distance and/or orientation of the object relative to the ranging device according to the reflected optical pulse sequence, so as to generate the point cloud points.
  • the scanning module is also called the scanning system.
  • the transmitting module includes a transmitting circuit 110 ; the detecting module includes a receiving circuit 120 , a sampling circuit 130 and an arithmetic circuit 140 .
  • the transmit circuit 110 may emit a sequence of light pulses (eg, a sequence of laser pulses).
  • the receiving circuit 120 can receive the optical pulse sequence reflected by the object to be detected, that is, obtain the pulse waveform of the echo signal, perform photoelectric conversion on the optical pulse sequence to obtain an electrical signal, process that electrical signal, and output it to the sampling circuit 130.
  • the sampling circuit 130 may sample the electrical signal to obtain a sampling result.
  • the arithmetic circuit 140 may determine the distance, that is, the depth, between the distance measuring device 100 and the detected object based on the sampling result of the sampling circuit 130 .
  • the distance measuring device 100 may further include a control circuit 150, which can control other circuits, for example, can control the working time of each circuit and/or set parameters for each circuit.
  • the distance measuring device shown in FIG. 1 includes one transmitting circuit, one receiving circuit, one sampling circuit and one arithmetic circuit for emitting a single beam of light for detection; however, the embodiment of the present application is not limited to this. The number of any one of the transmitting circuits, receiving circuits, sampling circuits, and arithmetic circuits may also be at least two, for emitting at least two light beams in the same direction or in different directions, and the at least two light beams may be emitted simultaneously or at different times.
  • the light-emitting chips in the at least two emission circuits are packaged in the same module.
  • each emitting circuit includes one laser emitting chip, and the dies in the laser emitting chips in the at least two emitting circuits are packaged together and accommodated in the same packaging space.
  • the distance measuring device 100 may further include a scanning module for changing the propagation direction of at least one optical pulse sequence (eg, a laser pulse sequence) output from the transmitting circuit, so as to scan the field of view.
  • the scanning area of the scanning module within the field of view of the ranging device increases over time.
  • the module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130 and the arithmetic circuit 140, or the module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, the arithmetic circuit 140 and the control circuit 150, may be referred to as a ranging module, which can be independent of other modules such as the scanning module.
  • a coaxial optical path may be used in the ranging device, that is, the light beam emitted by the ranging device and the reflected light beam share at least part of the optical path in the ranging device.
  • the laser pulse sequence reflected by the detection object passes through the scanning module and then enters the receiving circuit.
  • the distance-measuring device may also adopt an off-axis optical path, that is, the light beam emitted by the distance-measuring device and the reflected light beam are respectively transmitted along different optical paths in the distance-measuring device.
  • FIG. 2 shows a schematic diagram of an embodiment in which the distance measuring device of the present invention adopts a coaxial optical path.
  • the ranging apparatus 200 includes a ranging module 210, and the ranging module 210 includes a transmitter 203 (which may include the above-mentioned transmitting circuit), a collimating element 204, a detector 205 (which may include the above-mentioned receiving circuit, sampling circuit and arithmetic circuit) and an optical path changing element 206.
  • the ranging module 210 is used for emitting a light beam, receiving the returning light, and converting the returning light into an electrical signal.
  • the transmitter 203 can be used to transmit a sequence of optical pulses.
  • the transmitter 203 may emit a sequence of laser pulses.
  • the laser beam emitted by the transmitter 203 is a narrow bandwidth beam with a wavelength outside the visible light range.
  • the collimating element 204 is disposed on the outgoing light path of the transmitter, and is used to collimate the light beam emitted from the transmitter 203 into parallel light and output it to the scanning module.
  • the collimating element also serves to converge at least a portion of the return light reflected by the detected object.
  • the collimating element 204 may be a collimating lens or other elements capable of collimating light beams.
  • the transmitting optical path and the receiving optical path in the ranging device are combined by the optical path changing element 206 before the collimating element 204, so that the transmitting optical path and the receiving optical path can share the same collimating element, making the optical path more compact.
  • the emitter 203 and the detector 205 may use respective collimating elements, and the optical path changing element 206 may be arranged on the optical path behind the collimating element.
  • the optical path changing element can use a small-area reflective mirror to combine the transmit light path and the receive light path.
  • the optical path changing element may also use a reflector with a through hole, wherein the through hole is used to transmit the outgoing light of the emitter 203 , and the reflector is used to reflect the return light to the detector 205 . In this way, in the case of using a small reflector, the occlusion of the return light by the support of the small reflector can be reduced.
  • the optical path altering element is offset from the optical axis of the collimating element 204 .
  • the optical path altering element may also be located on the optical axis of the collimating element 204 .
  • the ranging device 200 further includes a scanning module 202 .
  • the scanning module 202 is placed on the outgoing optical path of the ranging module 210 .
  • the scanning module 202 is used to change the transmission direction of the collimated beam 219 emitted by the collimating element 204 and project it to the external environment, and project the return light to the collimating element 204 .
  • the returned light is focused on the detector 205 through the collimating element 204 .
  • the scanning module 202 may include at least one optical element for changing the propagation path of the light beam, wherein the optical element may change the beam propagation path by reflecting, refracting, or diffracting the light beam; for example, the optical element includes at least one light-refracting element having non-parallel exit and entrance surfaces.
  • the scanning module 202 includes lenses, mirrors, prisms, galvanometers, gratings, liquid crystals, optical phased arrays (Optical Phased Array) or any combination of the above optical elements.
  • At least part of the optical elements are moving, for example, the at least part of the optical elements are driven to move by a driving module, and the moving optical elements can reflect, refract or diffract the light beam to different directions at different times.
  • the multiple optical elements of the scanning module 202 may be rotated or oscillated about a common axis 209, each rotating or oscillating optical element being used to continuously change the propagation direction of the incident beam.
  • the plurality of optical elements of the scanning module 202 may rotate at different rotational speeds, or vibrate at different speeds.
  • at least some of the optical elements of scan module 202 may rotate at substantially the same rotational speed.
  • the plurality of optical elements of the scanning module may also be rotated about different axes. In some embodiments, the plurality of optical elements of the scanning module may also rotate in the same direction, or rotate in different directions; or vibrate in the same direction, or vibrate in different directions, which are not limited herein.
  • the scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214, and the driver 216 is used to drive the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated beam 219.
  • the first optical element 214 projects the collimated beam 219 in different directions.
  • the angle between the direction of the collimated light beam 219 changed by the first optical element and the rotation axis 209 changes with the rotation of the first optical element 214 .
  • the first optical element 214 includes a pair of opposing non-parallel surfaces through which the collimated beam 219 passes.
  • the first optical element 214 includes a prism whose thickness varies along at least one radial direction.
  • the first optical element 214 includes a wedge prism that refracts the collimated light beam 219 .
  • the scanning module 202 further includes a second optical element 215 , the second optical element 215 rotates around the rotation axis 209 , and the rotation speed of the second optical element 215 is different from the rotation speed of the first optical element 214 .
  • the second optical element 215 is used to change the direction of the light beam projected by the first optical element 214 .
  • the second optical element 215 is connected to another driver 217, and the driver 217 drives the second optical element 215 to rotate.
  • the first optical element 214 and the second optical element 215 can be driven by the same or different drivers, so that the rotational speed and/or direction of rotation of the two elements differ, thereby projecting the collimated beam 219 in different directions in external space and scanning a larger spatial range.
  • the controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215, respectively.
  • the rotational speeds of the first optical element 214 and the second optical element 215 may be determined according to the area and pattern expected to be scanned in practical applications.
  • Drivers 216 and 217 may include motors or other drive mechanisms.
  • the second optical element 215 includes a pair of opposing non-parallel surfaces through which the light beam passes.
  • the second optical element 215 comprises a prism whose thickness varies along at least one radial direction.
  • the second optical element 215 comprises a wedge prism.
  • the scanning module 202 further includes a third optical element (not shown) and a driver for driving the movement of the third optical element.
  • the third optical element includes a pair of opposing non-parallel surfaces through which the light beam passes.
  • the third optical element comprises a prism of varying thickness along at least one radial direction.
  • the third optical element comprises a wedge prism. At least two of the first, second and third optical elements rotate at different rotational speeds and/or in different directions.
  • the scanning module includes two or three of the light refraction elements sequentially arranged on the outgoing light path of the light pulse sequence.
  • at least two of the light refraction elements in the scanning module are rotated during the scanning process to change the direction of the light pulse sequence.
  • the scanning paths of the scanning module are different at least at some different times.
  • the rotation of each optical element in the scanning module 202 can project light in different directions, such as direction 211 and direction 213, thereby scanning the space around the ranging device 200.
  • when the light 211 projected by the scanning module 202 hits the detected object 201, part of the light is reflected by the detected object 201 back to the ranging device 200 in the direction opposite to the projected light 211.
  • the returning light 212 reflected by the detected object 201 passes through the scanning module 202 and then enters the collimating element 204.
  • a detector 205 is placed on the same side of the collimating element 204 as the emitter 203, and the detector 205 is used to convert at least part of the return light passing through the collimating element 204 into an electrical signal.
  • each optical element is coated with an anti-reflection coating.
  • the thickness of the anti-reflection film is equal to or close to the wavelength of the light beam emitted by the emitter 203, which can increase the intensity of the transmitted light beam.
  • a filter layer is coated on the surface of an element located on the beam propagation path in the distance measuring device, or a filter is provided on the beam propagation path, for transmitting at least the wavelength band of the light beam emitted by the transmitter and reflecting other bands, so as to reduce the noise caused by ambient light at the receiver.
  • the transmitter 203 may comprise a laser diode through which laser pulses are emitted on the nanosecond scale.
  • the laser pulse receiving time can be determined, for example, by detecting the rising edge time and/or the falling edge time of the electrical signal pulse.
  • the ranging apparatus 200 can calculate the TOF using the pulse receiving time information and the pulse sending time information, so as to determine the distance from the detected object 201 to the ranging apparatus 200.
  • the distance and orientation detected by the ranging device 200 can be used for remote sensing, obstacle avoidance, mapping, modeling, navigation, and the like.
  • the point cloud scanning pattern is generated by the design of a scanning system (also referred to as a scanning module in this paper) of a ranging device such as a lidar, and the influencing factors include the scanning frequency, frame rate, motor speed, and speed ratio of the scanning system.
  • Different scanning systems will show different sampling patterns due to motor speed settings. For example, one scanning system obtains a specific petal-shaped scanning pattern, as shown in Figure 3, while another scanning system obtains an eye-shaped scanning pattern that is dense in the middle and sparse at the sides, as shown in Figure 4.
  • the scanning pattern herein may refer to a pattern formed by the accumulation of scanning trajectories of a light beam within a scanning field of view over a period of time.
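Petal-shaped patterns of this kind arise naturally from two wedge prisms rotating at different speeds, as described for the scanning module above. Under the common first-order approximation (an assumed simplified model, not the exact optics of this device), each prism contributes a rotating deflection vector and the trace on the reference plane is their vector sum:

```python
import numpy as np

def risley_trace(delta1, delta2, w1, w2, t):
    """First-order two-prism scan trace: each prism deflects the beam by a
    fixed angle (delta1, delta2) rotating at angular speed (w1, w2); the
    resulting pattern is the vector sum, which draws a rose/petal figure
    when the two speeds differ."""
    x = delta1 * np.cos(w1 * t) + delta2 * np.cos(w2 * t)
    y = delta1 * np.sin(w1 * t) + delta2 * np.sin(w2 * t)
    return x, y

t = np.linspace(0.0, 2.0 * np.pi, 10_000)
x, y = risley_trace(1.0, 1.0, 5.0, -4.0, t)  # counter-rotating prisms
r = np.hypot(x, y)  # radius sweeps between ~0 and delta1 + delta2
```

Changing the speed ratio w1:w2 changes the number of petals, which is one reason the motor speed settings mentioned above determine the sampling pattern.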
  • in designing a scanning system, the balance between hardware cost and scanning method must be considered; modifying the scanning system at the hardware level therefore increases hardware cost and development time, significantly raising product research and development costs. In practical applications, the point cloud in three-dimensional space can instead be interpolated at the software level, but the accuracy and adaptability of such interpolation are poor, which easily introduces errors.
  • the present application provides a point cloud processing method
  • the point cloud processing method includes: acquiring point cloud data collected by a ranging device and point cloud density distribution data of the ranging device, the point cloud density distribution data being related to the scanning mode of the ranging device, wherein the point cloud density distribution data includes a plurality of saliency coefficients used to characterize the distribution of the mapped points of the point cloud data on a reference plane; determining, according to the saliency coefficients corresponding to the mapped points of the point cloud data on the reference plane, a predetermined processing method of the point cloud data, wherein the predetermined processing method includes at least one of a first processing method and a second processing method, the first processing method being used to increase the point cloud density in at least a partial area of the point cloud data and the second processing method being used to reduce the point cloud density in at least a partial area of the point cloud data; and processing the point cloud data according to the predetermined processing manner.
  • with the point cloud processing method of the embodiment of the present invention, on the premise of keeping the hardware cost unchanged, the point cloud density distribution data of the ranging device, which includes a plurality of saliency coefficients, is acquired, and the predetermined processing method of the point cloud data is determined according to the saliency coefficients corresponding to the mapped points of the point cloud data on the reference plane. This effectively combines the scanning characteristics of the scanning system of the ranging device with the spatial distribution of the point cloud, so that the processed point cloud data is distributed more uniformly and reasonably, the influence of the scanning system on the point cloud density is reduced, and the scanned scene is better described. The point cloud data processed by the method of the present application can serve as the basis for subsequent algorithms, making them more accurate and reducing processing errors caused by uneven point cloud distribution, and the method does not increase hardware cost.
  • FIG. 5 shows a schematic flowchart of the point cloud processing method in an embodiment of the present application.
  • the point cloud processing method of the embodiment of the present application includes the following steps S501 to S503:
  • in step S501, the point cloud data collected by the ranging device and the point cloud density distribution data of the ranging device are acquired.
  • the point cloud density distribution data is related to the scanning mode of the ranging device, wherein the point cloud density distribution data includes multiple saliency coefficients, which are used to characterize the distribution of the mapped points of the point cloud data on the reference plane.
  • the reference plane can be any suitable plane, for example, the reference plane is a plane perpendicular to the central axis of the light pulse sequence emitted by the ranging device.
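One way to obtain the mapped points is a central projection of each return onto a plane perpendicular to the device's central axis (taken as the z axis below; the plane distance `plane_z` is an arbitrary illustrative choice, and this is a sketch rather than the embodiment's exact mapping):

```python
import numpy as np

def map_to_reference_plane(points, plane_z=1.0):
    """Project 3D point-cloud points onto the reference plane z = plane_z,
    perpendicular to the central axis, by scaling each return ray from the
    sensor origin until it meets the plane."""
    points = np.asarray(points, dtype=float)
    scale = plane_z / points[:, 2]          # per-point ray scaling factor
    return points[:, :2] * scale[:, None]   # (x, y) coordinates on the plane

pts = np.array([[1.0, 0.0, 2.0],   # two example returns
                [0.0, 3.0, 3.0]])
mapped = map_to_reference_plane(pts)
```

Projecting along the rays makes the mapped pattern depend only on scan direction, not on target distance, which matches the idea that the pattern reflects the scanning mode itself.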
  • Different scanning patterns will generate corresponding three-dimensional point cloud distribution in three-dimensional space, and the distribution characteristics are mainly reflected in the density distribution of the point cloud and the shape distribution of the point cloud.
  • the spatial distribution of point clouds is dynamic and the density is not uniform.
  • acquiring the point cloud data collected by the ranging device and the point cloud density distribution data of the ranging device includes: acquiring at least one frame of the scanning pattern corresponding to the scanning mode of the ranging device, and determining the saliency coefficient of the point cloud corresponding to the mapped points in each statistical region according to the number of mapped points on the scanning pattern in the different statistical regions.
  • one frame of point cloud data can be selected, wherein the scanning pattern can be used to characterize the mapped points of the three-dimensional point cloud data of the ranging device on the reference plane.
  • the scanning pattern can be composed of the mapped points of the point cloud data of the ranging device on the reference plane.
  • the greater the number of mapped points within a statistical area, the greater the point density in that statistical area.
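Counting mapped points per statistical region is essentially a 2D histogram over the reference plane. A sketch follows; the grid size and plane extent are illustrative assumptions:

```python
import numpy as np

def region_counts(mapped_xy, bins=4, extent=1.0):
    """Divide the reference plane into a bins x bins grid of statistical
    regions over [-extent, extent]^2 and count the mapped points in each."""
    counts, _, _ = np.histogram2d(
        mapped_xy[:, 0], mapped_xy[:, 1],
        bins=bins, range=[[-extent, extent], [-extent, extent]],
    )
    return counts

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(1000, 2))  # stand-in for mapped points
counts = region_counts(xy)
```

For a real petal- or eye-shaped pattern the counts would be far from uniform, and it is exactly that non-uniformity the saliency coefficients capture.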
  • for a ranging device with the first scanning mode, the corresponding scanning pattern is shown in Figure 3, while for a ranging device with the second scanning mode, the corresponding scanning pattern is shown in Figure 4. From the scanning pattern, the distribution characteristics of the 3D point cloud on the reference plane can be obtained.
  • the scanning pattern is obtained by mapping the point cloud data (such as a three-dimensional point cloud) output by the ranging device onto the reference surface, for example, by mapping the distribution of the three-dimensional point cloud output when the ranging device scans a target scene in three-dimensional space onto the reference surface; alternatively, the scanning pattern is obtained by fitting a function according to the scanning mode of the ranging device.
  • point cloud data of, for example, 10 Hz, or point cloud data of any other suitable frame rate, may be selected for generating the scanning pattern.
  • pattern features, such as density features, can be extracted from point-cloud-based scanning patterns; these reflect, for example, the non-repetitive scanning characteristics of lidar scanning systems.
  • the distribution of point clouds in three-dimensional space is uneven and has obvious density distribution characteristics.
  • the point cloud distribution of the scanning pattern shown in Figure 3 is generally circular, dense in the middle and sparse at the edges, while the point cloud distribution of the VT scanning pattern shown in Figure 4 is approximately rectangular: the central area of the density distribution is dense, and the two sides are sparse. Therefore, the characteristics of the scanning pattern can be well described by the spatial distribution of density.
  • determining the significance coefficient corresponding to each statistical region according to the number of mapping points on the scanning pattern in different statistical regions includes: normalizing the numbers of mapping points in the plurality of statistical regions to obtain the significance coefficients of the point clouds corresponding to the mapped points in each statistical area, for example, uniformly mapping the counts to the [0, 1] interval, where any normalization method well known to those skilled in the art can be used.
  • the normalization facilitates comparison of counts of different magnitudes.
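As a hedged sketch of this step, min-max scaling is one common choice of the "well-known" normalization the text allows (the function name and the handling of the degenerate uniform case are illustrative assumptions):

```python
import numpy as np

def significance_coefficients(counts):
    """Map per-region mapping-point counts to the [0, 1] interval by
    min-max normalization, yielding one significance coefficient per
    statistical region."""
    counts = np.asarray(counts, dtype=float)
    lo, hi = counts.min(), counts.max()
    if hi == lo:
        # degenerate case: uniform density, no contrast to express
        return np.zeros_like(counts)
    return (counts - lo) / (hi - lo)
```

Any other monotone mapping to [0, 1] (e.g. dividing by the total count) would serve the same comparison purpose.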
  • statistical areas of different sizes yield different point cloud density distribution data.
  • the size of a statistical area can be determined by any suitable rule. For example, the sizes of one or more statistical areas can be determined based on the application scenario of the ranging device and the size of the detection target in that scenario; different statistical area sizes then yield different point cloud density distribution data.
  • the size of the statistical area is determined based on the size of the target object that the ranging device is intended to detect. For example, where the target objects include a first target object and a second target object and the first target object is larger than the second target object, the statistical area of the point cloud density distribution data corresponding to the first target object has a first size, the statistical area of the point cloud density distribution data corresponding to the second target object has a second size, and the first size is larger than the second size.
  • when the application scenario of a ranging device such as a lidar is indoors, in a park, or the like, the statistical area sizes can be chosen accordingly, and different statistical area sizes can each correspond to the size of one target object.
  • when the ranging device is used for vehicle identification, obstacle identification, and the like, several statistical area sizes can be determined and used to compute significance coefficients separately, so as to obtain point cloud density distribution data suited to identifying each kind of object.
  • the point cloud density distribution data can be presented in the form of a heat map, wherein different pixel values in the heat map characterize different significance coefficients. For example, the larger the pixel value, the larger the significance coefficient it represents and the higher the point density at the corresponding position: if the heat map includes a first pixel and a second pixel and the pixel value of the first pixel is greater than that of the second pixel, then the significance coefficient corresponding to the first pixel is greater than that corresponding to the second pixel.
  • alternatively, the opposite convention can be used, in which the larger the pixel value, the smaller the significance coefficient and the lower the point density at the corresponding position: if the pixel value of the first pixel is greater than that of the second pixel, the significance coefficient corresponding to the first pixel is smaller than that corresponding to the second pixel.
  • the pixel value includes a gray value, a color value, a brightness value, or another image-related value.
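One possible rendering of significance coefficients as 8-bit gray values, covering both pixel-value conventions described above, could look like the sketch below (the embodiment does not fix a particular pixel encoding; names are illustrative):

```python
import numpy as np

def saliency_to_heatmap(sig, invert=False):
    """Render significance coefficients in [0, 1] as an 8-bit
    gray-scale heat map. By default a larger pixel value encodes a
    larger coefficient (denser region); invert=True gives the
    opposite convention, which the text equally permits."""
    sig = np.clip(np.asarray(sig, dtype=float), 0.0, 1.0)
    if invert:
        sig = 1.0 - sig
    return np.round(sig * 255).astype(np.uint8)
```

A gray-scale map produced this way could then be colorized with any colormap to obtain the colored saliency maps mentioned later in the text.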
  • the reference surface may have a plurality of statistical regions, and the point cloud density distribution data may include at least two types of point cloud density distribution data, wherein different types have different statistical region sizes.
  • in the point cloud density distribution data corresponding to various statistical area sizes shown in Figure 6, the point cloud density distribution data are saliency maps (that is, heat maps). The maps from the first row to the third row represent the saliency maps obtained by counting the point cloud distribution density in statistical areas of 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, and 300 unit sizes respectively (the unit can be, for example, square millimeters, square centimeters, or square meters, set reasonably according to actual needs); the greater the brightness, the greater the point cloud density at that location.
  • the gray-scale saliency map can also be colored to obtain a colored saliency map.
  • in the point cloud density distribution data corresponding to various statistical area sizes shown in Figure 7, the point cloud density distribution data are saliency maps (that is, heat maps). The six maps from the first row to the third row represent the saliency maps obtained from the point cloud distribution density in statistical areas of 10, 40, 80, 100, 200, and 300 unit sizes respectively (the unit can be, for example, square millimeters, square centimeters, or square meters, set reasonably according to actual needs); the greater the brightness, the greater the point cloud density at that location.
  • the gray-scale saliency map can also be colored to obtain a colored saliency map.
  • it can be seen from Fig. 6 and Fig. 7 that different scanning methods generate different saliency maps, and different sizes of the selected statistical regions yield different saliency maps. The application can therefore freely choose an appropriate saliency map according to the user's actual scene.
  • the saliency coefficient is calculated according to the size of the statistical area, so that the characteristics of real objects in the 3D scene can be reflected more accurately, which is more conducive to subsequent detection, segmentation, and other algorithms.
  • the reference surface has a plurality of statistical regions, and the point cloud density distribution data includes at least two types of point cloud density distribution data, wherein different types have different statistical region sizes; that is, each type of point cloud density distribution data can have saliency maps for many different statistical region sizes.
  • when the ranging device is used to scan the predetermined scene, acquiring the point cloud density distribution data corresponding to the point cloud data collected by the ranging device according to the scanning mode of the ranging device further comprises: determining, according to the scanning mode and the relationship between the size of the target object predetermined to be identified from the point cloud data and the size of the statistical area, the point cloud density distribution data (such as a heat map) to be used for preprocessing the point cloud data. For example, the point cloud density distribution data to be used for preprocessing is that whose statistical area size is less than or equal to the size of the target object; preferably, it is the point cloud density distribution data whose statistical area size is substantially equal to the size of the target object.
  • when smaller objects are sensed, the heat map whose statistical areas are divided at a smaller size is selected; when larger objects are sensed, the heat map whose statistical areas are divided at a larger size is selected. Therefore, when different targets are perceived through point cloud data, point cloud density distribution data such as heat maps with different statistical area sizes can be used accordingly.
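The selection rule just described could be sketched as follows, assuming heat maps precomputed at several statistical-area sizes and the "less than or equal, preferably close to the target size" criterion from the text (the dictionary keying and names are assumptions for illustration):

```python
def select_density_map(maps_by_cell_size, target_size):
    """Pick, from heat maps precomputed at several statistical-area
    sizes (dict: cell size -> heat map), the map whose cell size is
    largest while still not exceeding the target-object size, i.e.
    the candidate closest to 'substantially equal' from below."""
    candidates = [s for s in maps_by_cell_size if s <= target_size]
    if not candidates:
        raise ValueError("no heat map fine enough for this target size")
    return maps_by_cell_size[max(candidates)]
```

Smaller targets thus select finer heat maps and larger targets coarser ones, as the text prescribes.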
  • a predetermined processing method of the point cloud data is determined according to the saliency coefficients corresponding to the mapped points of the point cloud data on the reference surface, wherein the predetermined processing method includes at least one of a first processing method and a second processing method, the first processing method being used to increase the point cloud density in at least a partial area of the point cloud data, and the second processing method being used to decrease the point cloud density in at least a partial area of the point cloud data.
  • from the point cloud density distribution data to be used, such as a heat map, the significance coefficient corresponding to each point cloud point in the point cloud data can be obtained, that is, the significance coefficient of the corresponding point in the heat map (the corresponding mapping point on the reference surface).
  • exemplarily, determining the predetermined processing mode of the point cloud data according to the significance coefficients corresponding to the mapping points of the point cloud data on the reference surface includes: determining the spatial distribution attribute of the point cloud in the point cloud data according to the depth information represented by the point cloud in the point cloud data and the corresponding significance coefficient; and determining the predetermined processing method of the point cloud data according to the spatial distribution attribute.
  • the attributes of the point cloud in the 3D space can be effectively expressed.
  • the depth information includes, for example, the horizontal distance of the point cloud in the three-dimensional point cloud data from the position of the scanning system.
  • the saliency coefficients of points that do not belong to the same object can be distinguished. Specifically, assuming that the spatial coordinates of the i-th point in three-dimensional space are (x_i, y_i, z_i), the saliency attribute (that is, the spatial distribution attribute) Attr_i of the point can be calculated by the following formula:

      Attr_i = S(y_i, z_i) · x_i        (1)

  • where S(y_i, z_i) represents the saliency coefficient of the 3D point cloud at (y_i, z_i) on the 2D projected YOZ plane (that is, the reference plane), and x_i represents the horizontal distance of the 3D point from the position of the scanning system.
  • Formula (1) can effectively combine the saliency map calculated above with the actual position of the three-dimensional point cloud in space, and can effectively express the attributes of the point cloud in three-dimensional space.
  • the above formula can also be reasonably adjusted as required; for example, the saliency coefficient and the horizontal distance between the three-dimensional point and the scanning system can be combined by division instead, so as to obtain the spatial distribution attribute.
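Formula (1) and the division variant mentioned above could be sketched as follows. The product form is one reading of formula (1); the array layout, names, and zero-distance handling in the division branch are assumptions for illustration:

```python
import numpy as np

def spatial_distribution_attribute(points_xyz, saliency, multiply=True):
    """Combine the 2-D saliency coefficient S(y_i, z_i) with the
    horizontal distance x_i of each 3-D point.

    points_xyz : (N, 3) array; column 0 holds x_i, the horizontal
        distance from the scanning system.
    saliency   : (N,) array of per-point coefficients S(y_i, z_i)
        looked up from the heat map.
    """
    x = points_xyz[:, 0]
    if multiply:
        return saliency * x                       # Attr_i = S * x_i
    # division variant; points at zero distance are mapped to 0
    return np.divide(saliency, x, out=np.zeros_like(saliency),
                     where=x != 0)
```

Either combination ties the 2-D saliency map back to the actual position of the 3-D point, which is the stated purpose of formula (1).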
  • existing methods usually use only the spatial coordinates and reflectivity information of point clouds. Such information is less informative for detection or segmentation algorithms, easily causes false detections, and requires huge amounts of data.
  • the spatial distribution attributes of the 3D point cloud can be calculated by the aforementioned method. For subsequent segmentation and detection algorithms, it is equivalent to adding a one-dimensional feature output, which can more accurately represent the deep characteristics of the original data.
  • the saliency attribute calculated by the present invention can not only be used as the one-dimensional feature input of the algorithm, but also can be used as a reference for algorithms such as detection and segmentation.
  • the subsequent algorithm can use this feature to perform selective up-sampling and down-sampling operations on the 3D point cloud in the space, so that the distribution of the point cloud in the space is more regular.
  • when the significance coefficient corresponding to a first part of the point cloud in the point cloud data is within a first threshold range, it can be determined that the predetermined processing method of the first part of the point cloud is the first processing method, the first processing method including one of the following: interpolation, upsampling, and time accumulation.
  • through the first processing method, the density of the point cloud can be increased.
  • when the significance coefficient corresponding to a second part of the point cloud in the point cloud data is within a second threshold range, it can be determined that the predetermined processing method of the second part of the point cloud is the second processing method.
  • the second processing method includes downsampling, or no processing is performed.
  • through the second processing method, the density of the point cloud can be reduced or left unchanged.
  • the first threshold range and the second threshold range may be set reasonably according to actual needs and are not specifically limited herein.
  • determining the predetermined processing method of the point cloud data according to the spatial distribution attribute includes: when the spatial distribution attribute corresponding to the first part of the point cloud in the point cloud data is within a first threshold range, determining that the predetermined processing method of the first part of the point cloud is the first processing method, the first processing method including one of interpolation, upsampling, and time accumulation, through which the point cloud density can be increased; and when the spatial distribution attribute corresponding to the second part of the point cloud in the point cloud data is within a second threshold range, determining that the predetermined processing method of the second part of the point cloud is the second processing method, the second processing method including downsampling or no processing, through which the point cloud density can be reduced or left unchanged.
  • the first threshold range and the second threshold range may be reasonably set according to actual needs, and are not specifically limited herein.
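The threshold-based assignment of processing modes described above can be sketched as follows. The range values and the "keep" fallback for attributes in neither range are illustrative assumptions, since the text leaves both threshold ranges to be set according to actual needs:

```python
def choose_processing(attributes, first_range, second_range):
    """Assign each point a predetermined processing mode from its
    spatial-distribution attribute (or significance coefficient):
    the first processing method (interpolation / upsampling / time
    accumulation) inside the first threshold range, the second
    (downsampling or no processing) inside the second.
    Ranges are inclusive (lo, hi) tuples."""
    lo1, hi1 = first_range
    lo2, hi2 = second_range
    modes = []
    for a in attributes:
        if lo1 <= a <= hi1:
            modes.append("upsample")    # first processing method
        elif lo2 <= a <= hi2:
            modes.append("downsample")  # second processing method
        else:
            modes.append("keep")        # outside both ranges
    return modes
```

The resulting per-point labels would then drive the actual interpolation, upsampling, time accumulation, or downsampling step.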
  • the point cloud processing method of the embodiments of the present invention, while keeping the hardware cost unchanged, acquires point cloud density distribution data of the ranging device that includes a plurality of significance coefficients and determines the predetermined processing method of the point cloud data according to the saliency coefficients corresponding to the mapping points of the point cloud data on the reference plane. This effectively combines the scanning characteristics of the scanning system of the ranging device with the spatial distribution of the point cloud, so that point cloud data in different regions of space are processed in a suitable way, making the processed point cloud distribution more uniform and reasonable. This can effectively reduce the influence of the scanning system on the point cloud density distribution and better describe the object information in the scanned scene; point cloud data processed by the method of the present application can serve as the basis of subsequent algorithms, making them more accurate and reducing processing errors caused by uneven point cloud distribution, without increasing hardware cost.
  • the method of the present application also combines the saliency coefficient with the actual spatial position in the three-dimensional scene, which can more reasonably and effectively represent the spatial attributes of the three-dimensional point cloud, more accurately reflect the characteristics of real objects in the three-dimensional scene, and is beneficial to subsequent detection, segmentation, and other algorithms.
  • FIG. 8 shows a schematic block diagram of the point cloud processing apparatus in an embodiment of the present invention.
  • the point cloud processing apparatus 800 further includes one or more processors 802 and one or more memories 801 , and the one or more processors 802 work together or individually.
  • the point cloud processing device may further include at least one of an input device (not shown), an output device (not shown), and an image sensor (not shown), and these components are interconnected through a bus system and/or another form of connection mechanism (not shown).
  • the memory 801 is used for storing program instructions executable by the processor, for example, for storing corresponding steps and program instructions for implementing the point cloud processing method according to the embodiment of the present application.
  • the memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory, or the like.
  • the non-volatile memory may include, for example, read only memory (ROM), hard disk, flash memory, and the like.
  • the input device may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
  • the output device can output various information (such as images or sound) to the outside (such as to a user) and can include one or more of a display, a speaker, and the like; it can be used to output the processed point cloud as an image or video, and can also output the obtained saliency map as an image.
  • a communication interface (not shown) is used for communication between the point cloud processing apparatus and other devices, including wired or wireless communication.
  • the point cloud processing device can access wireless networks based on communication standards, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof.
  • the communication interface receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication interface further includes a Near Field Communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the processor 802 may be a central processing unit (CPU), graphics processing unit (GPU), application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, and can control other components in the point cloud processing device to perform desired functions.
  • the processor 802 can execute the instructions stored in the memory 801 to execute the point cloud processing method of the embodiments of the present application described herein.
  • a processor can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware finite state machines (FSMs), digital signal processors (DSPs), or combinations thereof.
  • the processor includes a Field Programmable Gate Array (FPGA), wherein the arithmetic circuit of the point cloud processing apparatus may be a part of the Field Programmable Gate Array (FPGA).
  • the point cloud processing device includes one or more processors, working together or individually, a memory for storing program instructions; the processor for executing the program instructions stored in the memory, when the program instructions are executed , the processor is configured to implement the point cloud processing method according to the embodiment of the present application, including: acquiring point cloud data collected by a ranging device and point cloud density distribution data of the ranging device, the point cloud density distribution data It is related to the scanning mode of the distance measuring device, wherein the point cloud density distribution data includes a plurality of significance coefficients, and the plurality of significance coefficients are used to characterize the distribution characteristics of the point cloud data on the reference plane.
  • determining a predetermined processing method of the point cloud data according to the significance coefficients corresponding to the mapping points of the point cloud data on the reference surface, wherein the predetermined processing mode includes at least one of a first processing mode and a second processing mode, the first processing method being used to increase the point cloud density in at least part of the point cloud data, and the second processing method being used to reduce the point cloud density in at least part of the point cloud data; and processing the point cloud data according to the predetermined processing method.
  • the reference plane is a plane perpendicular to the central axis of the light pulse sequence emitted by the ranging device.
  • the point cloud density distribution data is presented in the form of a heat map (also referred to herein as a saliency map), wherein different pixel values in the heat map are used to characterize different saliency coefficients.
  • for example, the heat map includes a first pixel and a second pixel; if the pixel value of the first pixel is greater than that of the second pixel, then the significance coefficient corresponding to the first pixel is greater than that corresponding to the second pixel. Or, under the opposite convention, if the pixel value of the first pixel is greater than that of the second pixel, the significance coefficient corresponding to the first pixel is smaller than that corresponding to the second pixel.
  • the pixel value includes a gray value, a color value or a brightness value, or other values that can characterize the size of the saliency coefficient.
  • acquiring point cloud density distribution data corresponding to the point cloud data collected by the ranging device according to the scanning mode of the ranging device includes: acquiring data corresponding to the scanning mode of the ranging device At least one frame of scanning pattern, wherein the scanning pattern is composed of the mapping points of the point cloud data of the ranging device on the reference plane; according to the number of mapping points on the scanning pattern in different statistical regions, Determine the significance coefficient of the point cloud corresponding to the mapped points in each statistical area.
  • the scanning pattern is obtained by mapping the point cloud data output by the ranging device onto the reference surface, or the scanning pattern is obtained by fitting a function according to the scanning mode of the ranging device.
  • determining the significance coefficient corresponding to each statistical region according to the number of the mapping points on the scanning pattern in different statistical regions includes: normalizing the number of mapping points in the plurality of statistical regions To obtain the significance coefficient of the point cloud corresponding to the mapped points in each statistical region.
  • the size of the statistical area is determined based on the size of the target object that the ranging device is intended to detect. For example, where the target objects include a first target object and a second target object and the first target object is larger than the second target object, the statistical area of the point cloud density distribution data corresponding to the first target object has a first size, the statistical area of the point cloud density distribution data corresponding to the second target object has a second size, and the first size is larger than the second size.
  • the reference surface has a plurality of statistical regions, and the point cloud density distribution data includes at least two types of point cloud density distribution data, wherein different types of point cloud density distribution data have different statistical region sizes.
  • acquiring the point cloud density distribution data corresponding to the point cloud data collected by the ranging device according to the scanning mode of the ranging device further comprises: determining the point cloud density distribution data according to the scanning mode and the relationship between the size of the object predetermined to be identified from the point cloud data and the size of the statistical area.
  • the point cloud density distribution data is point cloud density distribution data in which the size of the statistical area is smaller than or equal to the size of the target object.
  • determining a predetermined processing manner of the point cloud data according to the saliency coefficients corresponding to the mapped points of the point cloud data on the reference surface includes: determining the spatial distribution attribute of the point cloud in the point cloud data according to the depth information represented by the point cloud in the point cloud data and the corresponding significance coefficient; and determining the predetermined processing method of the point cloud data according to the spatial distribution attribute.
  • determining the predetermined processing manner of the point cloud data according to the spatial distribution attribute includes: when the spatial distribution attribute corresponding to the first part of the point cloud in the point cloud data is within a first threshold range , determine that the predetermined processing mode of the first part of the point cloud is the first processing mode; when the spatial distribution attribute corresponding to the second part of the point cloud in the point cloud data is within the second threshold range, determine that the The predetermined processing manner of the second partial point cloud is the second processing manner.
  • the first processing manner includes one of the following processing manners: interpolation, upsampling, and time accumulation; and the second processing manner includes downsampling.
  • a movable platform 900 is also provided in this embodiment of the present application.
  • the movable platform 900 may include a movable platform body 901 and at least one ranging device 902 .
  • the at least one ranging device 902 is arranged on the movable platform body 901 and is used to collect point cloud data of the target scene.
  • for the distance measuring device 902, reference may be made to the distance measuring device 100 and the distance measuring device 200 described above; the description is not repeated here.
  • the distance measuring device 902 can be installed on the movable platform body 901 of the movable platform 900 .
  • the movable platform 900 with the distance measuring device can measure the external environment, for example, measure the distance between the movable platform 900 and obstacles for obstacle avoidance and other purposes, and perform two-dimensional or three-dimensional mapping of the external environment.
  • the movable platform 900 includes at least one of an unmanned aerial vehicle, a vehicle, a remote-controlled vehicle, a robot, and a boat.
  • the ranging device is applied to the unmanned aerial vehicle
  • the movable platform body 901 is the body of the unmanned aerial vehicle.
  • the movable platform body 901 is the body of the automobile.
  • the vehicle may be an autonomous driving vehicle or a semi-autonomous driving vehicle, which is not limited herein.
  • the movable platform body 901 is the body of the remote control car.
  • when the ranging device is applied to a robot, the movable platform body 901 is the body of the robot.
  • the movable platform 900 further includes the above-mentioned point cloud processing apparatus 800; for the description of the point cloud processing apparatus 800, reference may be made to the above.
  • both the point cloud processing apparatus 800 and the movable platform 900 have the same advantages as the aforementioned point cloud processing method.
  • an embodiment of the present application further provides a computer storage medium, on which a computer program is stored.
  • one or more computer program instructions may be stored on the computer-readable storage medium, and the processor may execute the stored program instructions to implement the functions in the embodiments of the present application described herein and/or other desired functions, for example to perform the corresponding steps of the point cloud processing method according to the embodiments of the present application. Various application programs and various data, such as data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
  • the computer storage medium may include, for example, a memory card for a smartphone, a storage unit for a tablet computer, a hard disk for a personal computer, read only memory (ROM), erasable programmable read only memory (EPROM), portable compact disk Read only memory (CD-ROM), USB memory, or any combination of the above storage media.
  • the computer-readable storage medium can be any combination of one or more computer-readable storage media.
  • a computer-readable storage medium contains computer-readable program codes for converting point cloud data into two-dimensional images, and/or computer-readable program codes for three-dimensional reconstruction of point cloud data, and the like.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • Various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof.
  • A microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present application.
  • The present application may also be implemented as programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein.
  • Such a program implementing the present application may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from Internet sites, provided on carrier signals, or supplied in any other form.
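The bullets above mention program code for converting point cloud data into two-dimensional images. As a rough illustration only — the function name, grid size, and "keep the nearest return per cell" rule are assumptions of this sketch, not details taken from the publication — such a conversion can bin points into a depth image:

```python
import numpy as np

def point_cloud_to_depth_image(points, grid_shape=(64, 64)):
    """Bin each point's XY coordinates into a fixed grid and keep the
    smallest Z value per cell, producing a simple 2D depth image."""
    h, w = grid_shape
    img = np.full(grid_shape, np.inf)
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)  # avoid division by zero
    # Normalize XY coordinates into integer grid indices.
    idx = ((xy - mins) / span * (np.array([h, w]) - 1)).astype(int)
    for (r, c), z in zip(idx, points[:, 2]):
        img[r, c] = min(img[r, c], z)  # nearest return wins within a cell
    img[np.isinf(img)] = 0.0  # cells with no points fall back to background
    return img
```

A real implementation would also account for sensor intrinsics and occlusion, but the core idea — projecting 3D points onto a reference plane and rasterizing — is the same.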

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A point cloud processing method, a point cloud processing apparatus, a movable platform, and a computer storage medium are provided. The point cloud processing method comprises the steps of: obtaining point cloud data acquired by a ranging device and point cloud density distribution data of the ranging device (S501), the point cloud density distribution data being associated with the scanning mode of the ranging device and comprising a plurality of significance coefficients, the plurality of significance coefficients being used to characterize distribution characteristics of mapping points of the point cloud data on a reference surface; determining a predetermined processing mode for the point cloud data according to the significance coefficients corresponding to the mapping points of the point cloud data on the reference surface (S502), the predetermined processing mode comprising a first processing mode and/or a second processing mode, the first processing mode being used to increase the point cloud density in at least part of a region of the point cloud data, and the second processing mode being used to reduce the point cloud density in at least part of a region of the point cloud data; and processing the point cloud data according to the predetermined processing mode (S503).
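The three steps of the abstract (S501-S503) can be sketched as follows. Everything concrete here is an assumption of the sketch rather than the claimed method: the reference surface is taken to be the XY plane, `significance` is a caller-supplied lookup, the thresholds `low`/`high` are arbitrary, and jitter-duplication / random dropping stand in for whatever densification and sparsification strategies the publication actually uses.

```python
import numpy as np

def process_point_cloud(points, significance, low=0.3, high=0.7, seed=0):
    """Choose a processing mode per point from its significance coefficient
    on the reference surface, then densify or sparsify accordingly."""
    rng = np.random.default_rng(seed)
    # S501/S502: map each point onto the reference surface (assumed here
    # to be the XY plane) and look up its significance coefficient.
    coeffs = significance(points[:, :2])

    out = []
    for p, c in zip(points, coeffs):
        if c < low:
            # First processing mode: increase density, here by duplicating
            # the point with a small jitter (a stand-in for interpolation).
            out.append(p)
            out.append(p + rng.normal(scale=1e-3, size=3))
        elif c > high:
            # Second processing mode: reduce density, here by randomly
            # keeping roughly half of the points.
            if rng.random() < 0.5:
                out.append(p)
        else:
            # S503: points in ordinary regions pass through unchanged.
            out.append(p)
    return np.asarray(out)
```

Whether a high coefficient should trigger up- or down-sampling depends on how the significance coefficients encode the scan pattern's density; the thresholds above are purely illustrative.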
PCT/CN2020/136819 2020-12-16 2020-12-16 Point cloud processing method, point cloud processing apparatus, movable platform, and computer storage medium WO2022126427A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/136819 WO2022126427A1 (fr) 2020-12-16 2020-12-16 Point cloud processing method, point cloud processing apparatus, movable platform, and computer storage medium
CN202080070978.9A CN114556427A (zh) 2020-12-16 2020-12-16 Point cloud processing method, point cloud processing apparatus, movable platform, and computer storage medium


Publications (1)

Publication Number Publication Date
WO2022126427A1 (fr)

Family

ID=81668362

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/136819 WO2022126427A1 (fr) 2020-12-16 2020-12-16 Point cloud processing method, point cloud processing apparatus, movable platform, and computer storage medium

Country Status (2)

Country Link
CN (1) CN114556427A (fr)
WO (1) WO2022126427A1 (fr)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160364917A1 (en) * 2015-06-11 2016-12-15 Nokia Technologies Oy Structure Preserved Point Cloud Simplification
CN109948661A (zh) * 2019-02-27 2019-06-28 江苏大学 A 3D vehicle detection method based on multi-sensor fusion
CN110956700A (zh) * 2019-12-03 2020-04-03 西南科技大学 A density regulation method for point clouds generated by structure-from-motion
CN111462073A (zh) * 2020-03-30 2020-07-28 国家基础地理信息中心 Quality inspection method and apparatus for airborne lidar point cloud density


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114937081B (zh) * 2022-07-20 2022-11-18 之江实验室 Connected vehicle position estimation method and apparatus based on independent non-uniform incremental sampling
WO2024016524A1 (fr) * 2022-07-20 2024-01-25 之江实验室 Connected vehicle position estimation method and apparatus based on independent non-uniform incremental sampling
CN114937081A (zh) * 2022-07-20 2022-08-23 之江实验室 Connected vehicle position estimation method and apparatus based on independent non-uniform incremental sampling
CN115457496A (zh) * 2022-09-09 2022-12-09 北京百度网讯科技有限公司 Retaining wall detection method and apparatus for autonomous driving, and vehicle
CN115457496B (zh) * 2022-09-09 2023-12-08 北京百度网讯科技有限公司 Retaining wall detection method and apparatus for autonomous driving, and vehicle
CN115984147B (zh) * 2023-03-17 2023-09-15 汉斯夫(杭州)医学科技有限公司 Adaptive point cloud processing method, device, and medium based on a dental scanner
CN115984147A (zh) * 2023-03-17 2023-04-18 汉斯夫(杭州)医学科技有限公司 Adaptive point cloud processing method, device, and medium based on a dental scanner
CN116150298A (zh) * 2023-04-19 2023-05-23 山东盛途互联网科技有限公司 Internet of Things-based data acquisition method and system, and readable storage medium
CN116184358B (zh) * 2023-04-27 2023-08-04 深圳市速腾聚创科技有限公司 Laser ranging method, apparatus, and lidar
CN116184358A (zh) * 2023-04-27 2023-05-30 深圳市速腾聚创科技有限公司 Laser ranging method, apparatus, and lidar
CN116740197B (zh) * 2023-08-11 2023-11-21 之江实验室 Extrinsic parameter calibration method, apparatus, storage medium, and electronic device
CN116740197A (zh) * 2023-08-11 2023-09-12 之江实验室 Extrinsic parameter calibration method, apparatus, storage medium, and electronic device
CN117496134A (zh) * 2024-01-03 2024-02-02 思创数码科技股份有限公司 Ship target detection method, system, readable storage medium, and computer
CN117496134B (zh) * 2024-01-03 2024-03-22 思创数码科技股份有限公司 Ship target detection method, system, readable storage medium, and computer

Also Published As

Publication number Publication date
CN114556427A (zh) 2022-05-27

Similar Documents

Publication Publication Date Title
WO2022126427A1 (fr) Point cloud processing method, point cloud processing apparatus, movable platform, and computer storage medium
WO2020243962A1 (fr) Object detection method, electronic device, and movable platform
WO2021051281A1 (fr) Point cloud noise filtering method, ranging device, system, storage medium, and movable platform
WO2022198637A1 (fr) Point cloud noise filtering method and system, and movable platform
CN114080625A (zh) Absolute pose determination method, electronic device, and movable platform
WO2021239054A1 (fr) Space measurement apparatus, method, and device, and computer-readable storage medium
WO2021062581A1 (fr) Road marking recognition method and apparatus
WO2020124318A1 (fr) Method for adjusting the movement speed of a scanning element, ranging device, and movable platform
WO2020215252A1 (fr) Point cloud denoising method for a ranging device, ranging device, and movable platform
WO2021232227A1 (fr) Point cloud frame construction method, target detection method, ranging apparatus, movable platform, and storage medium
US20210333401A1 (en) Distance measuring device, point cloud data application method, sensing system, and movable platform
US20210255289A1 (en) Light detection method, light detection device, and mobile platform
WO2020237663A1 (fr) Multi-channel lidar point cloud interpolation method and ranging apparatus
WO2020177076A1 (fr) Method and apparatus for calibrating the initial state of a detection apparatus
WO2020155159A1 (fr) Method for increasing point cloud sampling density, point cloud scanning system, and readable storage medium
WO2020133384A1 (fr) Laser ranging device and movable platform
WO2022217520A1 (fr) Detection method and apparatus, movable platform, and storage medium
WO2020155142A1 (fr) Point cloud resampling method, device, and system
US20220082665A1 (en) Ranging apparatus and method for controlling scanning field of view thereof
WO2021232247A1 (fr) Point cloud coloring method, point cloud coloring system, and computer storage medium
WO2020107379A1 (fr) Reflectivity correction method for use in a ranging apparatus, and ranging apparatus
US20210341588A1 (en) Ranging device and mobile platform
WO2022170535A1 (fr) Distance measurement method, distance measurement device, system, and computer-readable storage medium
CN114080545A (zh) Data processing method and apparatus, lidar, and storage medium
WO2021026766A1 (fr) Method and device for controlling motor rotation speed of a scanning module, and ranging device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20965445

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 20965445

Country of ref document: EP

Kind code of ref document: A1