WO2020243874A1 - Methods and systems for encoding and decoding position coordinates of point cloud data, and storage medium - Google Patents


Info

Publication number
WO2020243874A1
Authority
WO
WIPO (PCT)
Prior art keywords
side length
point cloud
cloud data
divided block
block
Prior art date
Application number
PCT/CN2019/089787
Other languages
English (en)
Chinese (zh)
Inventor
李璞
郑萧桢
张富
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980008580.XA priority Critical patent/CN111602176A/zh
Priority to PCT/CN2019/089787 priority patent/WO2020243874A1/fr
Publication of WO2020243874A1 publication Critical patent/WO2020243874A1/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 - Image coding
    • G06T 9/40 - Tree coding, e.g. quadtree, octree

Definitions

  • the present invention generally relates to the technical field of point cloud coding, and more particularly relates to a coding and decoding method, system and storage medium for the position coordinates of point cloud data.
  • a point cloud is a form of expression of a three-dimensional object or scene. It is composed of a set of discrete points that are randomly distributed in space and express the spatial structure and surface properties of the three-dimensional object or scene. In order to accurately reflect the information in the space, the number of discrete points required is huge. In order to reduce the bandwidth occupied by point cloud data storage and transmission, the point cloud data needs to be encoded and compressed.
  • the point cloud data coding and compression process includes the coding of position coordinates and the coding of attributes.
  • the location coordinates are usually distributed relatively discretely, and each point cloud data point corresponds to location coordinates in three directions.
  • such a set of point cloud data usually has a large amount of data, so it is necessary to effectively encode and compress the point cloud data, and in particular the position coordinates in the data.
  • the existing encoding and compression of the position coordinates of point cloud data is usually based on octree partition coding.
  • in octree partition coding, the side length is only halved each time an octree partition is performed. This method may cause many divisions to yield sub-blocks that contain no point cloud data, which increases the redundant calculation in the encoding process to a certain extent, lowers the coding efficiency, and also limits the compression performance to a certain extent.
  • the present invention is proposed to solve the above-mentioned problems.
  • the present invention provides a coding and decoding scheme for the position coordinates of point cloud data, which uses the value range given by the field of view (FOV) of the point cloud data collection device during collection to limit the value ranges used for division during the encoding and decoding of the position coordinates. This can quickly exclude areas where point cloud data points cannot exist, thereby improving the encoding and decoding efficiency and reducing the time overhead of the encoding and decoding process.
  • a method for encoding the position coordinates of point cloud data includes: determining the initial position coordinates of an initial block according to the position coordinates of the input point cloud data; performing space division coding on the initial block to obtain an intermediate coding result; and performing arithmetic coding on the intermediate coding result to obtain a final coding result; wherein, in the process of performing space division coding on the initial block, it is determined, based on the field of view of the collection device that collects the point cloud data, whether to adjust the coordinate range of the divided block.
  • a method for decoding the position coordinates of point cloud data includes: performing arithmetic decoding on the position coordinate encoding result of the point cloud data to obtain an arithmetic decoding result; performing space division decoding on the arithmetic decoding result to obtain an intermediate decoding result; and performing inverse preprocessing on the intermediate decoding result to obtain the position coordinates of the point cloud data; wherein, in the process of space division decoding, the coordinate range of the initial block used for space division is determined based on the header information of the encoding result, and whether to adjust the coordinate range of the divided block is determined based on the field of view of the collection device that collects the point cloud data.
  • an encoding system for the position coordinates of point cloud data.
  • the encoding system includes a storage device and a processor, and the storage device stores a computer program run by the processor.
  • the method for encoding the position coordinates of the point cloud data described in any of the above items is executed.
  • a system for decoding position coordinates of point cloud data includes a storage device and a processor, and the storage device stores a computer program run by the processor.
  • the method for decoding the position coordinates of the point cloud data described in any one of the above items is executed.
  • a storage medium with a computer program stored on the storage medium, and the computer program executes the method for encoding the position coordinates of the point cloud data described in any one of the above items during operation.
  • a storage medium with a computer program stored on the storage medium, and the computer program executes the method for decoding the position coordinates of the point cloud data described in any one of the above items during operation.
  • the method, system and storage medium for encoding and decoding the position coordinates of point cloud data use the value range given by the field of view of the point cloud data acquisition device during acquisition to limit the value ranges used for division during the encoding and decoding of the position coordinates. This can quickly exclude areas where point cloud data points cannot exist, thereby reducing the time overhead of the encoding and decoding process and improving the encoding and decoding efficiency.
  • Fig. 1 shows a schematic flowchart of a method for encoding position coordinates of point cloud data according to an embodiment of the present invention
  • Figure 2 shows a schematic diagram of point cloud data collection according to an embodiment of the present invention
  • FIG. 3 shows a schematic diagram of solving the coordinates of the center position of the viewpoint of a mapping surface where a point is located according to an embodiment of the present invention
  • FIG. 4 shows a schematic diagram of a point cloud distribution range defined according to a field of view of a collection device according to an embodiment of the present invention
  • FIG. 5 shows a schematic diagram of point cloud data collection according to another embodiment of the present invention.
  • Fig. 6 shows a schematic diagram of a point cloud distribution range defined according to a field of view of a collection device according to another embodiment of the present invention
  • Fig. 7 shows a schematic diagram of a tree division coding process according to an embodiment of the present invention.
  • Fig. 8 shows a schematic block diagram of a system for encoding position coordinates of point cloud data according to an embodiment of the present invention
  • FIG. 9 shows a schematic flowchart of a method for decoding position coordinates of point cloud data according to an embodiment of the present invention.
  • Fig. 10 shows a schematic block diagram of a decoding system for position coordinates of point cloud data according to an embodiment of the present invention
  • FIG. 11 shows a schematic block diagram of a distance measuring device capable of collecting point cloud data according to an embodiment of the present invention
  • Fig. 12 shows a schematic block diagram of a distance measuring device capable of collecting point cloud data according to another embodiment of the present invention.
  • FIG. 13 shows a schematic diagram of a scanning pattern of the distance measuring device shown in FIG. 12.
  • the present invention provides a coding and decoding scheme for the position coordinates of point cloud data.
  • the coding and decoding scheme for the position coordinates of point cloud data according to embodiments of the present invention will be described below with reference to the accompanying drawings.
  • Fig. 1 shows a schematic flowchart of a method 100 for encoding position coordinates of point cloud data according to an embodiment of the present invention.
  • the method 100 for encoding position coordinates of point cloud data may include the following steps:
  • step S110 the initial position coordinates of the initial block are determined according to the position coordinates of the input point cloud data.
  • distance measuring devices such as laser scanners and lidars may be used to collect point cloud data for a certain object or a certain scene.
  • the collected point cloud data includes position coordinates in three dimensions.
  • the position coordinates of each point cloud data can be expressed in a Cartesian coordinate system, for example, expressed as (x, y, z).
  • the position coordinates of each point cloud data can also be represented by other coordinate systems, such as a spherical coordinate system, a cylindrical coordinate system, and so on.
  • the Cartesian coordinate system is used as an example to describe various coordinates.
  • the position coordinates of the input point cloud data may be preprocessed first, and the preprocessing may include quantizing the position coordinates of the input point cloud data.
  • the difference between the maximum value and the minimum value of the position coordinates of the input point cloud data in each of the three directions (taking the Cartesian coordinate system as an example, the three directions are the x-axis, y-axis and z-axis directions), together with an input quantization precision parameter, can be used to quantize the position coordinates of each input point cloud data, so as to simplify the coding operation on the position coordinates of the point cloud data.
  • the quantization accuracy can be a preset fixed value.
  • the quantization accuracy of the position coordinates of each input point cloud data in all three-dimensional directions can be kept consistent.
  • the position coordinates of the input point cloud data can be converted into integer coordinates greater than or equal to zero by quantization.
  • the preprocessing of the position coordinates of the input point cloud data may also include any other suitable operations, which is not limited in the present invention.
  • the initial position coordinates of the block (which may be referred to as the initial block in this text) that is spatially divided for the first time can be determined.
  • the position coordinates of the input point cloud data can also be directly used as a basis to determine the initial position coordinates of the initial block used for space division coding.
  • the initial position coordinates of the initial block may be determined based on the maximum value among the maximum values of the position coordinates of the preprocessed point cloud data in the three directions.
  • space division may include octree division, quadtree division, and binary tree division, etc. The specific division may depend on the geometry of the divided blocks in the division process.
  • octree division may be adopted first, and then octree division, quadtree division, or binary tree division may be adopted after adjusting the coordinate range of the division block.
  • step S120 the initial block is spatially divided and encoded to obtain an intermediate encoding result, wherein, in the process of spatially dividing and encoding the initial block, it is determined whether to adjust based on the field of view of the collecting device that collects the point cloud data The coordinate range of the divided block.
  • the position coordinates of the preprocessed point cloud data are encoded based on space division coding, where the space division coding may include octree division coding, quadtree division coding, and binary tree division coding; the specific choice may depend on the geometry of the divided blocks during the division process, as described later.
  • in conventional octree partition coding, the side length (or coordinate range) of the block that is divided first (which can be called the initial block) is usually determined based on the position coordinates of the preprocessed point cloud data, and the value of the side length is generally based on the maximum value of the position coordinates of the preprocessed point cloud data in the three directions.
  • the side length of a block obtained by octree division (which can be called a divided block) is usually half of the side length of the block in the previous layer (for example, if the side length of the initial block is 16, the side length of a first-layer divided block is 8, the side length of a second-layer divided block is 4, and so on).
  • division proceeds in this way until blocks with the smallest side length (usually 1) are obtained, at which point the division ends.
  • this change in side length also means that the coordinate range of each block follows a similar law.
  • in the embodiments of the present invention, the coordinate range (including the side length) of the initial block used for space division is likewise determined based on the preprocessed position coordinates, but the coordinate range of each divided block used for space division is not simply halved; instead, whether to adjust (mainly, whether to reduce) the coordinate range of each divided block is also determined based on the field of view of the collection device that collects the point cloud data.
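The conventional halving rule described above can be sketched as follows (illustrative only, not the patent's implementation):

```python
def octree_side_lengths(initial_side):
    """Side lengths of successive octree layers, halving until reaching
    the smallest side length (usually 1), as in the conventional scheme."""
    sides = [initial_side]
    while sides[-1] > 1:
        sides.append(sides[-1] // 2)
    return sides
```

For an initial block of side 16, this yields the sequence 16, 8, 4, 2, 1 mentioned in the text; the embodiments of the invention deviate from this fixed schedule by optionally shrinking a block's range based on the field of view before dividing it.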
  • the space-division-based coding of the position coordinates of point cloud data proposed in some embodiments of the present invention is based on the following idea: the blocks traditionally used for octree division are usually cubes, but the equipment that actually collects point cloud data (including lidar, laser scanners, etc.) basically emits laser light from a center according to a specific rule and obtains return values to collect the point cloud data. Such collection equipment usually has a certain field of view, and under this limitation the value range of the position coordinates is not distributed over a cube. Therefore, dividing and coding directly over the cube increases the redundant calculation in the coding process to a certain extent and also limits the compression performance to a certain extent.
  • the space-division-based coding of position coordinates proposed by the present invention determines, before each space division and based on the field of view of the collection device that collects the point cloud data, whether the coordinate range (side length) of the block to be divided is too large, that is, whether there can be no point cloud data in a certain part of the block. If so, the coordinate range (side length) of the block is adjusted based on the field of view of the collection device. This can effectively reduce the side length of part of the blocks and thereby reduce the total number of divisions, which in turn reduces the bit stream bits used to describe the divisions.
  • for the initialized cube (i.e., the initial block), its value can be the smallest cube that can be selected within the field of view of the collection device.
  • the initial block is a cuboid, or a cube, or a part of a sphere, or a part of a cylinder.
  • the initial block is a cube as an example.
  • the side length of the initial block can be the smallest integer power of 2 that is greater than or equal to the selected value. It should be understood that the selected value can be written into the header information of the code stream file for use by the decoder. Once the side length of the initial block is determined, the octree partition coding can start.
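The rounding rule for the initial side length can be sketched as below (an illustrative helper, not code from the patent):

```python
def initial_block_side(selected_value):
    """Smallest integer power of 2 that is greater than or equal to the
    selected value, used as the side length of the initial block."""
    side = 1
    while side < selected_value:
        side *= 2
    return side
```

For example, a selected value of 13 gives an initial side length of 16, while a value that is already a power of two (such as 16) is kept unchanged.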
  • the division of the octree uses the coordinates of the center point of the current block to divide the current block into eight small sub-blocks through the center point.
  • assume the coordinates of the center point of the current cube block are (x_mid, y_mid, z_mid), the minimum values of the current block in the three directions are x_min, y_min, z_min, and the maximum values of the current block in the three directions are x_max, y_max, z_max.
  • the coordinate ranges of the eight small sub-blocks in the octree division cover all combinations of the two half-ranges per direction: the coordinate range of the first block is x_min ≤ x < x_mid, y_min ≤ y < y_mid, z_min ≤ z < z_mid; the coordinate range of the second block is x_min ≤ x < x_mid, y_min ≤ y < y_mid, z_mid ≤ z < z_max; the coordinate range of the third block is x_min ≤ x < x_mid, and so on for the remaining combinations.
  • in the octree encoding process, it is determined in turn which of the eight sub-blocks each point cloud data point contained in the current block belongs to. After this determination is finished for all points in the block, 8 bits can be used to encode the sub-block division of the current block: if a sub-block contains point cloud data points, the corresponding bit is set to 1, otherwise it is set to 0. For example, when the third sub-block and the sixth sub-block contain point cloud data points and the other sub-blocks do not, the encoded 8-bit binary code stream is 0010 0100. The sub-blocks containing data points are then divided further. In the embodiment of the present invention, before dividing any divided block, it can be determined, based on the field of view of the collection device that collects the point cloud data, whether the coordinate range of the divided block needs to be adjusted.
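The occupancy coding of this paragraph can be sketched as follows. The bit ordering (x as the most significant bit of the sub-block index, sub-block 1 as the leftmost bit of the printed byte) is an assumption chosen to reproduce the 0010 0100 example; the patent does not fix it in this text:

```python
def octant_index(point, center):
    """Index 0..7 of the sub-block containing `point`, relative to the
    block's center point (x bit most significant - assumed ordering)."""
    x, y, z = point
    cx, cy, cz = center
    return ((x >= cx) << 2) | ((y >= cy) << 1) | (z >= cz)

def occupancy_byte(points, center):
    """8-bit occupancy pattern: bit i is 1 iff sub-block i contains at
    least one point; rendered with sub-block 1 as the leftmost character."""
    occupied = 0
    for p in points:
        occupied |= 1 << octant_index(p, center)
    return ''.join('1' if (occupied >> i) & 1 else '0' for i in range(8))
```

With one point in the third sub-block and one in the sixth, `occupancy_byte` returns the string `'00100100'`, matching the example above.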
  • determining whether to adjust the coordinate range of the divided block based on the field of view of the collection device that collects the point cloud data may include the following steps: determining the coordinate range within which point cloud data can exist for the divided block under the limitation of the field of view of the collection device (this range can be referred to as the field-of-view limited coordinate range); for at least one of the three dimensional directions, comparing the value range of the field-of-view limited coordinate range in that direction with the value range of the original coordinate range of the divided block in that direction; and determining, based on the comparison result, whether to adjust the coordinate range of the divided block.
  • each divided block may correspond to a field-of-view limited coordinate range, and its determination may include the following steps: calculating the position coordinates of each vertex of the divided block; calculating the position coordinates of the viewpoint center of the mapping surface corresponding to each vertex; and determining, based on the distance from each such viewpoint center to the device position coordinates of the collection device, the coordinate range within which point cloud data corresponding to the divided block can exist under the limitation of the field of view of the collection device.
  • the viewpoint center of the mapping surface corresponding to any vertex refers to the projection point of that vertex onto a preset vector, where the starting point of the preset vector is the device position coordinate obtained by preprocessing the position coordinate of the collection device, and the end point of the preset vector is the reference position coordinate obtained by preprocessing the position coordinate of a preset reference point within the field of view of the collection device.
  • preprocessing can include quantization operations.
  • the device location coordinates of the collection device may be the quantized coordinates of the actual location coordinates of the collection device, and the reference location coordinates may be the quantized coordinates of the actual position coordinates of the preset reference point within the field of view of the collection device.
  • exemplarily, the quantization method for the actual position coordinates of the collection device and for the actual position coordinates of the preset reference point may be the same as the quantization method for the position coordinates of the input point cloud data.
  • the quantization precision adopted by the quantization method may be a preset fixed value.
  • the quantization accuracy of the quantization method in all directions in the three-dimensional direction can be kept consistent.
  • the limited coordinate range of the field of view corresponding to any divided block is related to the collection device that collects the point cloud data and its field of view.
  • the following illustrates the method for determining the limited coordinate range of the visual field corresponding to any divided block in conjunction with FIGS. 2 to 6.
  • the following is an example in which the field of view of the point cloud data acquisition device is a cone. It is understandable that the field of view of the point cloud data collection device can have other shapes and is not limited.
  • FIG. 2 shows a schematic diagram of point cloud data collection according to an embodiment of the present invention
  • FIG. 3 shows a schematic diagram of solving the coordinates of the center position of a viewpoint on a mapping surface where a certain point is located according to an embodiment of the present invention
  • point O is the location of the collection device.
  • the rays used to collect point cloud data are all emitted from here. In an unquantized coordinate system, this location is usually the origin of the coordinate system.
  • the coordinates of point O in this coordinate system are (x_o, y_o, z_o). Suppose the viewpoint center of a reflecting surface is point A as shown in Figure 2, with corresponding coordinates (x_A, y_A, z_A).
  • supposing the linear distance from point O to the reflecting surface is dist, the size of the reflecting surface (represented by the length H in the horizontal direction and the length V in the vertical direction) can be obtained from the field of view of the collection device in the horizontal and vertical directions (FOV_H and FOV_V), as shown in Equation 1 and Equation 2.
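Equations 1 and 2 are not reproduced in this text. A plausible sketch, assuming the usual relation between distance and angular field of view (H = 2·dist·tan(FOV_H/2), and likewise for V), is:

```python
import math

def reflecting_surface_size(dist, fov_h_deg, fov_v_deg):
    """Horizontal length H and vertical length V of the reflecting surface
    at linear distance `dist` from point O. The tangent relation below is
    an assumption standing in for Equations 1 and 2, which are not
    reproduced in this text."""
    h = 2.0 * dist * math.tan(math.radians(fov_h_deg) / 2.0)
    v = 2.0 * dist * math.tan(math.radians(fov_v_deg) / 2.0)
    return h, v
```

For a 90-degree field of view in both directions at unit distance, this gives a reflecting surface of size 2 by 2.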
  • x_min, y_min, and z_min in Equation 3, Equation 4, and Equation 5 are the minimum values, in the three directions, of the position coordinates of all point cloud data points, and scale is the accuracy used during quantization.
  • x, y, z are the coordinate values of the position coordinates to be quantized in the three directions, and the corresponding quantized coordinate values in the three directions are obtained from them.
  • the quantized counterpart of point O can be referred to as the device location coordinates of the collection device; it is not the actual location coordinate of the collection device but the quantized coordinate of the actual location coordinate.
  • the position coordinates of the viewpoint center of the mapping surface corresponding to each vertex are calculated based on the position coordinates of each vertex of the divided block, where the viewpoint center of the mapping surface corresponding to any vertex refers to the projection point of that vertex onto a preset vector; the starting point of the preset vector is the device position coordinate obtained by preprocessing the position coordinates of the collection device, and the end point of the preset vector is the reference position coordinate obtained by preprocessing the position coordinates of a preset reference point within the field of view of the collection device.
  • in this example, the preset reference point is point A, and the position coordinates of the preset reference point after preprocessing are the reference position coordinates. Denote the quantized counterpart of point O as O' and the quantized counterpart of point A as A'; the starting point of the preset vector is point O', the end point of the preset vector is point A', and the preset vector is the vector O'A'.
  • let point B' be a vertex of a block in a certain space division step in the quantized coordinate system. The calculation of the position coordinates of the viewpoint center of the mapping surface corresponding to this vertex can be as shown in FIG. 3.
  • Figure 3 shows a schematic diagram of solving the coordinates of the viewpoint center of the mapping surface where a certain point is located according to an embodiment of the present invention. As shown in Figure 3, point C' is the viewpoint center of the mapping surface where point B' is located. With the vector from point O' to point B' denoted O'B', the cosine of the angle between O'A' and O'B' can be obtained (as shown in Equation 6), and then the modulus of the vector O'C' can be obtained (as shown in Equation 7). Based on this modulus and the direction of the vector O'A', the vector O'C' can be obtained (as shown in Equation 8), and finally the position coordinates of point C' can be found (as shown in Equation 9).
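The projection walked through by Equations 6 through 9 amounts to projecting a vertex onto the preset vector. A minimal sketch (variable names are illustrative, not the patent's notation):

```python
def viewpoint_center(o, a, b):
    """Project vertex `b` onto the preset vector from device position `o`
    to reference position `a`; the result is the viewpoint center of the
    mapping surface where `b` lies. Combines the cosine, modulus, and
    vector steps of Equations 6-9 into one projection formula."""
    oa = tuple(ai - oi for ai, oi in zip(a, o))  # preset vector O->A
    ob = tuple(bi - oi for bi, oi in zip(b, o))  # vector O->B
    dot = sum(x * y for x, y in zip(oa, ob))
    oa_norm_sq = sum(x * x for x in oa)
    t = dot / oa_norm_sq                          # |OC| / |OA|
    return tuple(oi + t * x for oi, x in zip(o, oa))
```

For instance, with the device at the origin, the reference point on the z-axis, and a vertex at (3, 4, 5), the viewpoint center is (0, 0, 5): the vertex's mapping surface lies at distance 5 along the preset vector.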
  • in this way, the position coordinates of the viewpoint center of the mapping surface corresponding to any vertex of any divided block can be obtained, together with the distance from that viewpoint center to the device location coordinates of the collection device.
  • based on the distances from the viewpoint centers of the mapping surfaces corresponding to the vertices of the divided block to the device position coordinates of the collection device, the coordinate range within which point cloud data can exist for the divided block under the field of view (FOV_H and FOV_V) of the collection device can be determined (that is, the aforementioned field-of-view limited coordinate range corresponding to the divided block), as shown in FIG. 4.
  • Fig. 4 shows a schematic diagram of a point cloud distribution range defined according to a field of view of a collection device according to an embodiment of the present invention.
  • among these distances, the minimum distance and the maximum distance each correspond to the viewpoint center of a mapping surface.
  • the point cloud data points contained in the current divided block must fall within the quadrangular frustum defined by the minimum distance, the maximum distance, and the field of view of the collection device in FIG. 4.
  • the coordinate range of this quadrangular frustum can be compared with the original coordinate range of the divided block currently used for space division to determine whether the coordinate range of the current divided block needs to be adjusted.
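The per-direction comparison between the block's range and the field-of-view limited range reduces to an interval intersection. A minimal sketch (the exact adjustment policy in the patent may differ):

```python
def adjusted_range(block_range, fov_range):
    """Intersect a block's coordinate range with the field-of-view limited
    range in one direction. Returns the (possibly reduced) range as a
    (lo, hi) pair, or None if the two ranges do not intersect, i.e. no
    point cloud data can exist in the block in that direction."""
    lo = max(block_range[0], fov_range[0])
    hi = min(block_range[1], fov_range[1])
    if lo >= hi:
        return None
    return (lo, hi)
```

For example, a block spanning 0 to 16 whose field-of-view limited range starts at 5 shrinks to the range 5 to 16, while a block wholly outside the limited range is discarded.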
  • Fig. 5 shows a schematic diagram of point cloud data collection according to another embodiment of the present invention
  • Fig. 6 shows a schematic diagram of a point cloud distribution range defined according to the field of view of a collection device according to another embodiment of the present invention.
  • point O is the location of the acquisition device, and the rays used to collect point cloud data are all emitted from here.
  • in an unquantized coordinate system, this location is usually the origin of the coordinate system.
  • the coordinates of point O in this coordinate system are (x_o, y_o, z_o). Suppose the viewpoint center of a reflecting surface is point A as shown in Figure 5, with corresponding coordinates (x_A, y_A, z_A). Supposing the linear distance from point O to the reflecting surface is dist, the size of the reflecting surface (represented by the diameter D of the reflecting surface) can be obtained from the field of view (FOV) of the collection device, as shown in Equation 11.
  • the position coordinates of point O and point A are quantized in the same way as the position coordinates of the point cloud data; the specific process can be as shown in Equation 3, Equation 4, and Equation 5.
  • the corresponding points in the quantized coordinate system, and their location coordinates, can thus be determined. The quantized counterpart of point O can be referred to as the device location coordinates of the collection device; it is not the actual location coordinate of the collection device but the quantized coordinate of the actual location coordinate.
  • the position coordinates of the viewpoint center of the mapping surface corresponding to each vertex are calculated based on the position coordinates of each vertex of the divided block, where the viewpoint center of the mapping surface corresponding to any vertex refers to the projection point of that vertex onto a preset vector; the starting point of the preset vector is the device position coordinate obtained by preprocessing the position coordinates of the collection device, and the end point of the preset vector is the reference position coordinate obtained by preprocessing the position coordinates of a preset reference point within the field of view of the collection device.
  • in this example, the preset reference point is point A, and its preprocessed position coordinates are the reference position coordinates; the starting point of the preset vector is the quantized counterpart of point O, the end point of the preset vector is the quantized counterpart of point A, and the preset vector is the vector between them. For a vertex of a block in a certain space division step in the quantized coordinate system, the calculation of the position coordinates of the viewpoint center of the corresponding mapping surface can be as shown in FIG. 3.
  • in this way, the position coordinates of the viewpoint center of the mapping surface corresponding to any vertex of any divided block can be calculated, together with the distance from that viewpoint center to the device location coordinates of the collection device.
  • based on these distances, the coordinate range within which point cloud data can exist for the divided block under the field of view (FOV) of the collection device can be determined.
  • among these distances, the minimum distance and the maximum distance each correspond to the viewpoint center of a mapping surface.
  • the point cloud data points contained in the current divided block must fall within the truncated cone defined by the minimum distance, the maximum distance, and the field of view of the collection device in FIG. 6.
  • the coordinate range of this truncated cone can be compared with the original coordinate range of the divided block currently used for space division to determine whether the coordinate range of the current divided block needs to be adjusted.
• the above exemplarily shows a method for determining the field-of-view limited coordinate range corresponding to any divided block. It should be understood that this is only exemplary; depending on the field of view of the collection device, the field-of-view limited coordinate range corresponding to any divided block may also be determined in other ways.
• the following examples illustrate how to determine whether to adjust the coordinate range of the divided block, and how to adjust it, based on the comparison between the field-of-view limited coordinate range corresponding to the divided block and the original coordinate range of the divided block.
• both the field-of-view limited coordinate range corresponding to the divided block and the original coordinate range of the divided block have a value range in each of the three dimensions. Therefore, the comparison can be performed for at least one of the three directions, and, if necessary, the coordinate value range in the compared direction can be adjusted. The following takes one direction as an example; it should be understood that the operations in the other two directions are the same.
• determining whether to adjust the coordinate range of the divided block based on the result of the comparison may include: for at least one of the three directions, when the value range of the field-of-view limited coordinate range in that direction and the value range of the original coordinate range in that direction have an intersection, and the first new side length determined based on the intersection is less than the original side length of the divided block in that direction, the side length of the divided block is adjusted to the first new side length, and the coordinate range of the divided block in that direction is adjusted based on the first new side length and the intersection.
• the field-of-view limited coordinate range corresponding to the divided block and the original coordinate range of the divided block having an intersection in a certain direction covers the following two cases: (1) the minimum value of the field-of-view limited coordinate range in that direction is less than or equal to the minimum value of the original coordinate range in that direction, and the maximum value of the field-of-view limited coordinate range in that direction lies between the minimum and maximum values of the original coordinate range in that direction; (2) the minimum value of the field-of-view limited coordinate range in that direction lies between the minimum and maximum values of the original coordinate range in that direction, and the maximum value of the field-of-view limited coordinate range in that direction is greater than or equal to the maximum value of the original coordinate range in that direction.
• let the minimum value of the original coordinate range in this direction be A1 and its maximum value be B1, and let the minimum value of the field-of-view limited coordinate range in this direction be A2 and its maximum value be B2; then the first case above is A2 ≤ A1 and A1 ≤ B2 ≤ B1, and the second case above is A1 ≤ A2 ≤ B1 and B2 ≥ B1.
• in the first case, the intersection is the interval [A1, B2], and the first new side length L1 can be determined based on the difference between the maximum and minimum values of the interval (i.e., B2 − A1). For example, the first new side length L1 may be the integer power of 2 that is greater than or equal to and closest to this difference.
• then the first new side length L1 can be compared with the original side length L0 (i.e., B1 − A1) of the divided block in this direction; if L1 < L0, the side length of the divided block can be adjusted from L0 to L1, and the coordinate range of the divided block in this direction can be adjusted based on L1 and the interval [A1, B2].
• after adjustment, the side length of the divided block in this direction is L1, and its coordinate range in this direction is [A1, B1'], where B1' = A1 + L1.
• in the second case, the intersection is the interval [A2, B1], and the first new side length L1' can be determined based on the difference between the maximum and minimum values of the interval (i.e., B1 − A2). For example, L1' may be the integer power of 2 that is greater than or equal to and closest to this difference. Then L1' can be compared with the original side length L0 (i.e., B1 − A1) of the divided block in this direction; if L1' < L0, the side length of the divided block can be adjusted from L0 to L1', and the coordinate range of the divided block in this direction can be adjusted based on L1' and the interval [A2, B1].
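The side-length adjustment described above can be sketched in Python. The function names and the exact interval handling below are illustrative assumptions rather than part of the specification; the power-of-two rule and the two intersection cases follow the text.

```python
def next_pow2(d):
    # Smallest integer power of 2 that is >= d (the "new side length" rule).
    p = 1
    while p < d:
        p *= 2
    return p

def adjust_intersection(a1, b1, a2, b2):
    # Hypothetical sketch: adjust one direction of a divided block [a1, b1]
    # given a field-of-view limited range [a2, b2] that partially overlaps it.
    l0 = b1 - a1                        # original side length L0
    if a2 <= a1 and a1 <= b2 <= b1:     # case (1): intersection is [a1, b2]
        l1 = next_pow2(b2 - a1)
        if l1 < l0:
            return a1, a1 + l1          # keep the minimum, shrink side to L1
    elif a1 <= a2 <= b1 and b2 >= b1:   # case (2): intersection is [a2, b1]
        l1 = next_pow2(b1 - a2)
        if l1 < l0:
            return b1 - l1, b1          # keep the maximum, shrink side to L1'
    return a1, b1                       # otherwise no adjustment

# e.g. block [0, 8] with FOV range [-2, 3]: overlap span 3, L1 = 4 < 8
print(adjust_intersection(0, 8, -2, 3))   # (0, 4)
```

In case (2) the new minimum is obtained by subtracting the new side length from the retained maximum, mirroring the treatment of B1' in case (1).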
• determining whether to adjust the coordinate range of the divided block based on the result of the comparison may also include: for at least one of the three directions, when the value range of the field-of-view limited coordinate range in that direction falls within the value range of the original coordinate range in that direction, and the second new side length determined based on the field-of-view limited coordinate range is less than the original side length of the divided block in that direction, the side length of the divided block is adjusted to the second new side length, and the coordinate range of the divided block in that direction is adjusted based on the second new side length and the field-of-view limited coordinate range.
• the value range of the field-of-view limited coordinate range corresponding to the divided block in a certain direction falling within the value range of the original coordinate range of the divided block in that direction means that the minimum value of the field-of-view limited coordinate range in that direction is greater than the minimum value of the original coordinate range in that direction, and its maximum value in that direction is smaller than the maximum value of the original coordinate range in that direction; using the symbols above (minimum A2, maximum B2), the situation in this embodiment is A2 > A1 and B2 < B1.
• the second new side length L2 can be determined based on the difference between the maximum and minimum values of the field-of-view limited coordinate range in that direction (i.e., B2 − A2). For example, L2 may be the integer power of 2 that is greater than or equal to and closest to this difference. Then L2 can be compared with the original side length L0 (i.e., B1 − A1) of the divided block in this direction; if L2 < L0, the side length of the divided block can be adjusted from L0 to L2, and the coordinate range of the divided block in this direction can be adjusted based on L2 and the value range [A2, B2] of the field-of-view limited coordinate range in this direction.
• for example, the minimum value A2 of the field-of-view limited coordinate range in that direction can be added to the second new side length L2 to obtain the new maximum value B1' of the divided block in that direction; alternatively, the maximum value B2 of the field-of-view limited coordinate range in that direction can be reduced by the second new side length L2 to obtain the new minimum value A1' of the divided block in that direction.
• when the value range of the divided block in a direction falls within the value range of the field-of-view limited coordinate range in that direction, or the two value ranges completely overlap (that is, the minimum value of the field-of-view limited coordinate range in that direction is less than or equal to the minimum value of the divided block in that direction, and its maximum value in that direction is greater than or equal to the maximum value of the divided block in that direction; using the symbols above, A2 ≤ A1 and B2 ≥ B1), the divided block falls completely within the field of view of the acquisition device in that direction, and there is no need to adjust the coordinate range of the divided block in that direction.
• when the value range of the divided block in a direction and the value range of the field-of-view limited coordinate range in that direction have no intersection at all (that is, the maximum value of the field-of-view limited coordinate range in that direction is smaller than the minimum value of the divided block in that direction, or its minimum value in that direction is greater than the maximum value of the divided block in that direction; using the symbols above, B2 < A1 or A2 > B1), the divided block falls completely outside the field of view of the collection device in that direction, which means there are no point cloud data points in the divided block. Since segmentation coding is continued only for blocks that contain point cloud data points, this situation will not occur in practice.
• after adjusting the block size in the block division coding process according to the field of view of the acquisition device, the space division coding continues: sub-block division is performed layer by layer, and the division of each block is coded one by one. For example, each layer can be divided according to the median method.
• exemplarily, octree partition coding is adopted: each layer of the octree division uses the coordinates of the center point of the current block to divide the current block into eight small sub-blocks through the center point.
• assuming the center point of the current block is (x_mid, y_mid, z_mid), the coordinate value ranges of the eight small sub-blocks in the octree division process are as follows: the first block: x ≤ x_mid, y ≤ y_mid, z ≤ z_mid; the second block: x ≤ x_mid, y ≤ y_mid, z > z_mid; the third block: x ≤ x_mid, y > y_mid, z ≤ z_mid; the fourth block: x ≤ x_mid, y > y_mid, z > z_mid; the fifth block: x > x_mid, y ≤ y_mid, z ≤ z_mid; the sixth block: x > x_mid, y ≤ y_mid, z > z_mid; the seventh block: x > x_mid, y > y_mid, z ≤ z_mid; the eighth block: x > x_mid, y > y_mid, z > z_mid.
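The eight sub-block ranges above follow a fixed enumeration order (x varies slowest, z fastest). A minimal Python sketch, where the function name is an illustrative assumption and boundaries are returned as plain (low, high) pairs; whether the midpoint belongs to the lower or upper child follows the conventions stated in the text:

```python
def octree_children(xmin, ymin, zmin, xmax, ymax, zmax):
    # Split a block at its center point into eight sub-blocks, enumerated
    # in the same order as the text: block 1 is (x low, y low, z low),
    # block 8 is (x high, y high, z high).
    xmid = (xmin + xmax) // 2
    ymid = (ymin + ymax) // 2
    zmid = (zmin + zmax) // 2
    children = []
    for xr in ((xmin, xmid), (xmid, xmax)):
        for yr in ((ymin, ymid), (ymid, ymax)):
            for zr in ((zmin, zmid), (zmid, zmax)):
                children.append((xr, yr, zr))
    return children

blocks = octree_children(0, 0, 0, 8, 8, 8)
```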
• in the octree encoding process, it is determined in turn which of the eight sub-blocks each point cloud data point contained in the current block belongs to. After this determination is finished, 8 bits are used to encode the sub-block division of the current block: if a sub-block contains point cloud data points, the corresponding bit is set to 1, otherwise it is set to 0. For example, when the third sub-block and the sixth sub-block contain point cloud data points and the other sub-blocks do not, the encoded 8-bit binary code stream is 0010 0100.
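The 8-bit occupancy coding can be illustrated with a short sketch (the function name is hypothetical; the bit order, with sub-block 1 in the most significant position, follows the example in the text):

```python
def occupancy_code(has_points):
    # has_points: list of 8 booleans, one per sub-block in the order
    # enumerated above. Returns the 8-bit occupancy string, with
    # sub-block 1 mapped to the leftmost (most significant) bit.
    assert len(has_points) == 8
    return "".join("1" if h else "0" for h in has_points)

# Example from the text: sub-blocks 3 and 6 occupied -> "00100100"
code = occupancy_code([False, False, True, False, False, True, False, False])
print(code)  # 00100100
```

The text writes the same byte with a grouping space ("0010 0100") for readability only.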
• when the side length of the divided block in one or two of the three directions reaches the preset minimum side length (generally 1), the directions that have reached the preset minimum side length are no longer divided, but encoding still continues.
• exemplarily, if the side length in one direction reaches the preset minimum side length first, then in the next division process no division is performed in that direction; eight bits may still be used to encode the division result of the remaining two directions that have not reached the preset minimum side length, wherein the values of the highest four bits of the eight bits depend on whether the sub-blocks obtained by dividing the divided block in the remaining two directions contain point cloud data, and the values of the remaining four bits of the eight bits are 0.
• the following is an example in which the x-axis reaches the preset minimum side length first: in the next division process, no division is performed in the x direction, and the coordinate values in that direction are treated as belonging to the half less than or equal to the middle value, so only the sub-blocks corresponding to the highest four bits can contain points.
• exemplarily, when the side lengths of the divided block in two of the three directions reach the preset minimum side length, those two directions are not divided in the next division process, and the coordinate values in those two directions are treated as belonging to the halves less than or equal to the middle values; eight bits are still used to encode the division result of the remaining one direction that has not reached the preset minimum side length, wherein the values of the highest two bits of the eight bits depend on whether the sub-blocks obtained by dividing the divided block in the remaining one direction contain point cloud data, and the remaining six bits are 0.
• the corresponding 8-bit numbers describing the octree division are then of the form xx00 0000, where each x is set to 0 or 1 according to whether the corresponding one of the first two sub-blocks contains point cloud data points. If the side lengths in all three directions reach the preset minimum side length, the tree division structure coding process ends.
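The fixed-width variant above (occupancy bits of the remaining directions in the most significant positions, padded with zeros to 8 bits) can be sketched as follows; the function name is an assumption for illustration:

```python
def padded_code(sub_bits, width=8):
    # sub_bits: occupancy bits for the sub-blocks that can still be
    # occupied (4 bits when one direction is at minimum side length,
    # 2 bits when two directions are). Pads with 0s to a fixed 8 bits,
    # e.g. "xx" -> "xx000000".
    return sub_bits.ljust(width, "0")

# Two remaining sub-blocks, only the first occupied:
print(padded_code("10"))  # 10000000
```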
• alternatively, when the side length of the divided block in one of the three directions reaches the preset minimum side length, the tree division becomes a quadtree division, and four bits may be used to encode the quadtree division result of the remaining two directions that have not reached the preset minimum side length, wherein the values of the four bits depend on whether the sub-blocks obtained by dividing the divided block in the remaining two directions contain point cloud data. The following is an example in which the x-axis reaches the minimum side length first.
• at this time, the division center is (y_mid, z_mid), and the division process has the following four possibilities: possibility one: y ≤ y_mid, z ≤ z_mid; possibility two: y ≤ y_mid, z > z_mid; possibility three: y > y_mid, z ≤ z_mid; possibility four: y > y_mid, z > z_mid. Therefore, only 4 bits are needed to describe the current division. For example, when the first block and the fourth block contain point cloud data points and the other sub-blocks contain none, the encoded 4-bit binary code stream is 1001.
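The 4-bit quadtree code follows the same occupancy pattern; a minimal sketch (function name assumed for illustration) reproducing the example in the text:

```python
def quadtree_code(occupied):
    # occupied: four booleans mapping the four (y, z) possibilities,
    # in the order listed above, to "contains point cloud data or not".
    assert len(occupied) == 4
    return "".join("1" if o else "0" for o in occupied)

# First and fourth sub-blocks occupied, as in the text's example:
print(quadtree_code([True, False, False, True]))  # 1001
```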
• when the side lengths of the divided block in two of the three directions reach the preset minimum side length, the tree division becomes a binary tree division; two bits are used to encode the binary tree division result of the remaining one direction that has not reached the preset minimum side length, wherein the values of the two bits depend on whether the sub-blocks obtained by dividing the divided block in the remaining one direction contain point cloud data.
• at this time, the division center is (z_mid), and the division process has the following two possibilities: possibility one: z ≤ z_mid; possibility two: z > z_mid. Therefore, only 2 bits are needed to describe the current division. For example, when the first block contains point cloud data points and the second block does not, the encoded 2-bit binary code stream is 10. If the side lengths in all three directions reach the preset minimum side length, the tree division structure coding process ends.
• the above exemplarily describes the tree division coding process after adjusting the coordinate range of the divided block based on the field of view of the collection device that collects the point cloud data. Those skilled in the art should understand that the above description is only exemplary, and any other suitable manner may also be used to implement the tree division coding process on the divided blocks with adjusted coordinate ranges.
  • Fig. 7 shows a schematic diagram of a tree division coding process according to an embodiment of the present invention.
• after the tree division, the number of point cloud data points contained in each sub-block is encoded. In one implementation, when the side lengths of a block in all three directions reach the preset minimum side length, the number of point cloud points contained in the block is encoded immediately: when the block contains exactly one point cloud data point, a bit 0 is encoded directly; when the block contains more than one point cloud data point, suppose the block contains n point cloud data points, then a bit 1 is encoded first, followed by the value (n − 1). According to the above procedure, the coding of the number of point cloud points contained in the block is realized.
• in another implementation, after the entire tree division ends, the number of point cloud data points contained in each sub-block is encoded in sequence. When the side lengths of a block in the three directions have all reached the preset minimum side length, it is judged whether the current division process has entirely ended; if there are still blocks that need to be divided, the current block is passed directly to the next layer until the entire division process has ended. When the side lengths of all blocks in the three directions have reached the preset minimum side length, the number of point cloud points contained in each block is encoded in sequence: for each block, when the block contains exactly one point cloud data point, a bit 0 is encoded directly.
• when the block contains more than one point cloud data point, suppose the block contains n point cloud data points (where n is a natural number greater than 1); a bit 1 is encoded first, and then the value (n − 1) is encoded. In this way, the coding of the number of point cloud points contained in each block is realized.
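The leaf point-count code described above can be sketched in a few lines. The function name is an assumption, and since the text does not specify how the value (n − 1) is binarized, it is returned here as an integer field rather than a bit string:

```python
def encode_point_count(n):
    # A block with exactly one point is coded as bit 0 with no value.
    # A block with n > 1 points is coded as bit 1 followed by (n - 1).
    assert n >= 1
    if n == 1:
        return (0, None)
    return (1, n - 1)

print(encode_point_count(1))  # (0, None)
print(encode_point_count(5))  # (1, 4)
```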
• the above exemplarily describes the process of step S120 of the method 100 for encoding position coordinates of point cloud data according to an embodiment of the present invention. The subsequent steps of the encoding method 100 are described below with reference to FIG. 1.
• in step S130, arithmetic coding is performed on the intermediate coding result to obtain a final coding result.
• the intermediate encoding result is the encoding result obtained by performing the octree division encoding described in step S120 on the preprocessed position coordinates; it is called the intermediate encoding result to distinguish it from the final encoding result of the position coordinates of the point cloud data.
  • the intermediate coding result (binary code stream) can be sent to the arithmetic coding engine for arithmetic coding, and the final coding result of the position coordinates of the point cloud data can be obtained.
• the method for encoding the position coordinates of point cloud data according to embodiments of the present invention uses the value range of the field of view of the point cloud data collection device during collection to limit the value range of the divided blocks in the encoding process of the position coordinates. This can quickly eliminate areas in which point cloud data points cannot exist, thereby reducing the time overhead of the encoding process and improving encoding efficiency.
  • the above exemplarily describes the encoding method of the position coordinates of the point cloud data according to the embodiments of the present invention.
  • the method for encoding position coordinates of point cloud data according to an embodiment of the present invention may be implemented in a device, device, or system having a memory and a processor.
• in addition, the method for encoding position coordinates of point cloud data according to an embodiment of the present invention may further include writing parameters related to the field of view of the collection device into the code stream (for example, into the header information of the code stream) for use on the decoder side. In other embodiments, this step may be omitted; instead, the parameters related to the field of view of the collection device are set in advance on the decoding end, so as to execute the method for decoding the position coordinates of point cloud data according to embodiments of the present invention that will be described later.
  • FIG. 8 shows a schematic block diagram of a system 800 for encoding position coordinates of point cloud data according to an embodiment of the present invention.
  • the system 800 for encoding the position coordinates of point cloud data includes a storage device 810 and a processor 820.
  • the storage device 810 stores a program for implementing corresponding steps in the method for encoding position coordinates of point cloud data according to an embodiment of the present invention.
• the processor 820 is configured to run the program stored in the storage device 810 to execute the corresponding steps of the method for encoding position coordinates of point cloud data according to the embodiments of the present invention described above. For brevity, details are not repeated here.
• in addition, there is also provided a storage medium on which program instructions are stored; when the program instructions are run by a computer or a processor, they are used to execute the corresponding steps of the method for encoding the position coordinates of point cloud data in the embodiments of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
• a method for decoding the position coordinates of point cloud data is also provided. Since the method for decoding the position coordinates of point cloud data according to the embodiment of the present invention corresponds to the method for encoding the position coordinates of point cloud data according to the embodiment of the present invention, for simplicity, the parts of the decoding process that are similar or identical to the encoding process are not repeated in detail.
  • FIG. 9 shows a schematic flowchart of a method 900 for decoding position coordinates of point cloud data according to an embodiment of the present invention.
  • the method 900 for decoding position coordinates of point cloud data may include the following steps:
• in step S910, arithmetic decoding is performed on the point cloud data position coordinate encoding result to obtain an arithmetic decoding result. Step S910 corresponds to step S130 of the method 100 for encoding the position coordinates of point cloud data according to the embodiment: the inverse process of arithmetic encoding, namely arithmetic decoding, is performed on the point cloud data position coordinate encoding result to obtain the arithmetic decoding result.
• in step S920, space division decoding is performed on the arithmetic decoding result to obtain an intermediate decoding result, wherein in the process of space division decoding, the coordinate range of the initial block used for space division is determined, and based on the field of view of the collection device that collects the point cloud data, it is determined whether to adjust the coordinate range of the divided blocks.
• exemplarily, the parameters related to the field of view of the collection device can be decoded from the code stream to determine the coordinate range of the initial block used for space division. Further, the parameters related to the field of view of the collection device may be obtained by decoding the header information of the code stream.
• step S920 corresponds to step S120 of the method 100 for encoding the position coordinates of point cloud data according to the embodiment: space division decoding is performed on the arithmetic decoding result obtained in step S910.
• specifically, the header information of the encoding result can be decoded to obtain the maximum of the position coordinates of the point cloud data in each of the three directions, and then the coordinate range (side length) of the initial block used for space division is determined based on this value. Once the side length of the initial block is determined, the space division decoding can begin.
• since the side length of the initialized cube (i.e., the initial block) is already the smallest cube that can be selected to cover the field of view of the collection device, there is no need to use the field of view of the collection device to perform a pruning operation during the first division. Therefore, it is only for the subsequently divided blocks that it is determined whether the coordinate range needs to be adjusted based on the field of view of the collection device.
• suppose the coordinates of the center point of the current cuboid block are (x_mid, y_mid, z_mid), the minimum values of the current block in the three directions are x_min, y_min, z_min, and the maximum values of the current block in the three directions are x_max, y_max, z_max; the coordinate value ranges of the eight small sub-blocks in the octree division process are as follows.
• the coordinate range of the first block is x_min ≤ x ≤ x_mid, y_min ≤ y ≤ y_mid, z_min ≤ z ≤ z_mid;
• the coordinate range of the second block is x_min ≤ x ≤ x_mid, y_min ≤ y ≤ y_mid, z_mid < z ≤ z_max;
• the coordinate range of the third block is x_min ≤ x ≤ x_mid, y_mid < y ≤ y_max, z_min ≤ z ≤ z_mid;
• the coordinate range of the fourth block is x_min ≤ x ≤ x_mid, y_mid < y ≤ y_max, z_mid < z ≤ z_max;
• the coordinate range of the fifth block is x_mid < x ≤ x_max, y_min ≤ y ≤ y_mid, z_min ≤ z ≤ z_mid;
• the coordinate range of the sixth block is x_mid < x ≤ x_max, y_min ≤ y ≤ y_mid, z_mid < z ≤ z_max;
• the coordinate range of the seventh block is x_mid < x ≤ x_max, y_mid < y ≤ y_max, z_min ≤ z ≤ z_mid;
• the coordinate range of the eighth block is x_mid < x ≤ x_max, y_mid < y ≤ y_max, z_mid < z ≤ z_max.
• similarly to the encoding side, for each divided block to be space-division decoded, it is determined whether the coordinate range needs to be adjusted based on the field of view of the collection device; whether and how to adjust are the same as in the encoding method 100 described with reference to FIGS. 1 to 6, and for brevity the details are not repeated here.
• after adjusting the block size in the block division decoding process according to the field of view of the collection device, the space division decoding continues: the division and reconstruction are performed layer by layer, and the division of each block is decoded one by one. Exemplarily, two decoding methods are provided.
• in the first method, 8 bits are decoded each time to determine whether the 8 sub-blocks of the divided block contain point cloud data points, and the sub-blocks containing point cloud data points are divided and decoded in the next layer; when the side lengths in all three directions reach the preset minimum side length, the division decoding process ends.
• in the first method, when the side length of the divided block in one of the three directions reaches the preset minimum side length, eight bits may be obtained from the arithmetic decoding result; the values of the lowest four bits of the eight bits are 0, and according to the highest four bits it is determined whether the sub-blocks obtained by dividing the divided block in the remaining two directions that have not reached the preset minimum side length contain point cloud data.
• in the second method, when the side length of the divided block in one of the three directions reaches the preset minimum side length, four bits may be obtained from the arithmetic decoding result, and according to the four bits it is determined whether the sub-blocks obtained by dividing the divided block in the remaining two directions that have not reached the preset minimum side length contain point cloud data.
• when the side lengths of the divided block in two of the three directions reach the preset minimum side length, two bits are obtained from the arithmetic decoding result, and according to the two bits it is determined whether the sub-blocks obtained by dividing the divided block in the remaining one direction that has not reached the preset minimum side length contain point cloud data.
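On the decoder side, the occupancy bits read from the code stream select which sub-blocks must be divided and decoded further. A minimal sketch (function name assumed for illustration):

```python
def decode_occupancy(code):
    # Given the occupancy bit string for one divided block (8, 4, or 2
    # bits, most significant bit = first sub-block), return the 1-based
    # indices of sub-blocks that contain point cloud data and therefore
    # need to be divided/decoded at the next layer.
    return [i + 1 for i, b in enumerate(code) if b == "1"]

# "00100100" -> sub-blocks 3 and 6 carry points, matching the
# encoder-side example earlier in the text.
print(decode_occupancy("00100100"))  # [3, 6]
```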
• corresponding to the encoding process, the number of point cloud data points contained in each sub-block is decoded. In one method, when the side lengths of a block in the three directions have all reached the preset minimum side length, the number of point cloud points contained in the block is decoded immediately: when a 0 is decoded, the block contains only one point cloud data point; when a 1 is decoded, the block contains more than one point cloud data point, and the value (n − 1) is then decoded, which means the block contains n point cloud data points (where n is a natural number greater than 1). The division decoding then continues until the entire tree division structure is reconstructed and the number of point cloud points contained in all blocks has been decoded.
• in another method, after the entire tree division decoding ends, the number of point cloud data points contained in each sub-block is decoded in sequence. When the side lengths of a block in the three directions have all reached the preset minimum side length, it is judged whether the current division process has entirely ended; if there are still blocks to be divided, the current block is passed directly to the next layer until the whole division process has ended. When the side lengths of all blocks in the three directions have reached the preset minimum side length, the number of point cloud points contained in each block is decoded in sequence: for any block containing point cloud points, when a 0 is decoded, the block contains only one point cloud data point; when a 1 is decoded, the block contains more than one point cloud data point, and the value (n − 1) is then decoded, indicating that the block contains n point cloud data points.
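The leaf point-count decode mirrors the encoder-side rule; a minimal sketch with an assumed function name:

```python
def decode_point_count(first_bit, value=None):
    # A 0 bit means the block holds exactly one point cloud data point.
    # A 1 bit is followed by the decoded value (n - 1), so the count
    # of points in the block is value + 1.
    if first_bit == 0:
        return 1
    return value + 1

print(decode_point_count(0))     # 1
print(decode_point_count(1, 4))  # 5
```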
  • the number of point cloud points contained in each block is sequentially decoded, and the intermediate decoding result of the point cloud data position coordinate coding result is obtained.
• the intermediate decoding result is the decoding result obtained by performing the space division decoding described in step S920 on the arithmetic decoding result; it is called the intermediate decoding result to distinguish it from the final decoding result of the point cloud data position coordinate encoding result.
  • step S930 inverse preprocessing is performed on the intermediate decoding result to obtain the position coordinates of the point cloud data.
• step S930 corresponds to step S110 of the method 100 for encoding the position coordinates of point cloud data according to the embodiment: the intermediate decoding result obtained in step S920 is inversely preprocessed (for example, the inverse preprocessing may be inverse quantization, i.e., the inverse process of the quantization described above), so as to obtain the final decoding result, that is, the position coordinates of the point cloud data.
• the method for decoding the position coordinates of point cloud data according to embodiments of the present invention uses the value range of the field of view of the point cloud data collection device during collection to limit the value range of the divided blocks in the decoding process of the position coordinates. This can quickly eliminate areas in which point cloud data points cannot exist, thereby reducing the time overhead of the decoding process and improving decoding efficiency.
  • the above exemplarily describes the method for decoding the position coordinates of the point cloud data according to the embodiments of the present invention.
  • the method for decoding the position coordinates of point cloud data according to an embodiment of the present invention may be implemented in a device, device, or system having a memory and a processor.
  • FIG. 10 shows a schematic block diagram of a system 1000 for decoding position coordinates of point cloud data according to an embodiment of the present invention.
  • the system 1000 for decoding the position coordinates of point cloud data includes a storage device 1010 and a processor 1020.
  • the storage device 1010 stores a program for implementing corresponding steps in the method for decoding position coordinates of point cloud data according to an embodiment of the present invention.
  • the processor 1020 is configured to run a program stored in the storage device 1010 to execute the corresponding steps of the method for decoding the position coordinates of the point cloud data according to the embodiment of the present invention described above; for brevity, the details are not repeated here.
  • a storage medium on which program instructions are stored, the program instructions being used, when run by a computer or processor, to execute the corresponding steps of the method for decoding the position coordinates of point cloud data in the embodiments of the present invention.
  • the storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
  • the computer-readable storage medium may be any combination of one or more computer-readable storage media.
  • the above-mentioned collecting device for collecting point cloud data may be a distance measuring device such as a lidar or a laser distance measuring device.
  • the distance measuring device is used to sense external environmental information, for example, distance information, orientation information, reflection intensity information, speed information, etc. of environmental targets.
  • One point cloud point may include at least one of the external environment information measured by the distance measuring device.
  • the distance measuring device can detect the distance from the probe to the distance measuring device by measuring the time of light propagation between the distance measuring device and the probe, that is, the time-of-flight (TOF).
  • the ranging device can also detect the distance from the detected object to the ranging device through other technologies, such as a ranging method based on phase shift measurement or a ranging method based on frequency shift measurement; no restriction is imposed here.
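The time-of-flight relation is simple to state: the measured interval covers the path to the target and back, so the one-way distance is half the product of the interval and the speed of light. A minimal sketch with a hypothetical function name:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """One-way distance from a round-trip time of flight: the pulse travels
    to the target and back, so the path length is halved."""
    return C * round_trip_time_s / 2.0

d = tof_distance(1.0e-6)  # a 1 microsecond round trip corresponds to ~149.9 m
```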
  • the distance measuring device 1100 may include a transmitting circuit 1110, a receiving circuit 1120, a sampling circuit 1130, and an arithmetic circuit 1140.
  • the transmitting circuit 1110 may emit a light pulse sequence (for example, a laser pulse sequence).
  • the receiving circuit 1120 can receive the light pulse sequence reflected by the object to be detected, perform photoelectric conversion on it to obtain an electrical signal, and, after processing the electrical signal, output it to the sampling circuit 1130.
  • the sampling circuit 1130 may sample the electrical signal to obtain the sampling result.
  • the arithmetic circuit 1140 may determine the distance between the distance measuring device 1100 and the detected object based on the sampling result of the sampling circuit 1130.
  • the distance measuring device 1100 may further include a control circuit 1150, which can control other circuits, for example, can control the working time of each circuit and/or set parameters for each circuit.
  • the distance measuring device shown in FIG. 11 includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit for emitting one beam for detection, but the embodiment of the present application is not limited to this.
  • the number of any one of the transmitting circuit, the receiving circuit, the sampling circuit, and the arithmetic circuit may also be at least two, used to emit at least two light beams in the same direction or in different directions; the at least two light beams may be emitted simultaneously or at different times.
  • the light-emitting chips in the at least two transmitting circuits are packaged in the same module.
  • each emitting circuit includes a laser emitting chip, and the dies in the laser emitting chips in the at least two emitting circuits are packaged together and housed in the same packaging space.
  • the distance measuring device 1100 may further include a scanning module (not shown in FIG. 11) for changing the propagation direction of at least one laser pulse sequence emitted by the transmitting circuit.
  • a module including the transmitting circuit 1110, the receiving circuit 1120, the sampling circuit 1130, and the arithmetic circuit 1140, or a module further including the control circuit 1150, can be called a ranging module; the ranging module can be independent of other modules, for example, the scanning module.
  • a coaxial optical path can be used in the distance measuring device, that is, the light beam emitted from the distance measuring device and the reflected light beam share at least part of the optical path in the distance measuring device.
  • the distance measuring device may also adopt an off-axis optical path, that is, the light beam emitted by the distance measuring device and the reflected light beam are respectively transmitted along different optical paths in the distance measuring device.
  • FIG. 12 shows a schematic diagram of an embodiment in which the distance measuring device of the present invention adopts a coaxial optical path.
  • the ranging device 1200 includes a ranging module 1201.
  • the ranging module 1201 includes a transmitter 1203 (which may include the above-mentioned transmitting circuit), a collimating element 1204, a detector 1205 (which may include the above-mentioned receiving circuit, sampling circuit, and arithmetic circuit), and a light path changing element 1206.
  • the ranging module 1201 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal.
  • the transmitter 1203 can be used to transmit a light pulse sequence.
  • the transmitter 1203 may emit a sequence of laser pulses.
  • the laser beam emitted by the transmitter 1203 is a narrow-bandwidth beam with a wavelength outside the visible light range.
  • the collimating element 1204 is arranged on the exit light path of the emitter 1203 and is used to collimate the light beam emitted from the emitter 1203 into parallel light output to the scanning module.
  • the collimating element is also used to condense at least a part of the return light reflected by the probe.
  • the collimating element 1204 may be a collimating lens or other elements capable of collimating light beams.
  • the transmitting light path and the receiving light path in the distance measuring device are combined before the collimating element 1204 through the light path changing element 1206, so that the transmitting light path and the receiving light path can share the same collimating element, making the optical path more compact.
  • the transmitter 1203 and the detector 1205 may use their respective collimating elements, and the optical path changing element 1206 is arranged on the optical path behind the collimating element.
  • the light path changing element can use a small-area mirror to combine the transmitting light path and the receiving light path.
  • the light path changing element may also use a reflector with a through hole, where the through hole is used to transmit the emitted light of the emitter 1203, and the reflector is used to reflect the returned light to the detector 1205; in this way, the blocking of the return light by the bracket of the small mirror can be reduced compared with the small-mirror arrangement.
  • the distance measuring device 1200 further includes a scanning module 1202.
  • the scanning module 1202 is placed on the exit light path of the distance measuring module 1201.
  • the scanning module 1202 is used to change the transmission direction of the collimated beam 1219 emitted by the collimating element 1204 and project it to the external environment, and project the returned light to the collimating element 1204 .
  • the returned light is collected on the detector 1205 via the collimating element 1204.
  • the scanning module 1202 may include at least one optical element for changing the propagation path of the light beam, wherein the optical element may change the propagation path of the light beam by reflecting, refracting, or diffracting the light beam.
  • the scanning module 1202 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an optical phased array (Optical Phased Array), or any combination of the foregoing optical elements.
  • at least part of the optical elements are moving.
  • a driving module is used to drive the at least part of the optical elements to move.
  • the moving optical elements can reflect, refract, or diffract the light beam to different directions at different times.
  • the multiple optical elements of the scanning module 1202 can rotate or vibrate around a common axis 1209, and each rotating or vibrating optical element is used to continuously change the propagation direction of the incident light beam.
  • the multiple optical elements of the scanning module 1202 may rotate at different speeds or vibrate at different speeds.
  • at least part of the optical elements of the scanning module 1202 may rotate at substantially the same speed.
  • the multiple optical elements of the scanning module may also be rotated around different axes.
  • the multiple optical elements of the scanning module may also rotate in the same direction or in different directions; or vibrate in the same direction, or vibrate in different directions, which is not limited herein.
  • the scanning module 1202 includes a first optical element 1214 and a driver 1216 connected to the first optical element 1214.
  • the driver 1216 is used to drive the first optical element 1214 to rotate around the rotation axis 1209, so that the first optical element 1214 projects the collimated beam 1219 to different directions.
  • the angle between the direction of the collimated beam 1219 changed by the first optical element and the rotation axis 1209 changes as the first optical element 1214 rotates.
  • the first optical element 1214 includes a pair of opposing non-parallel surfaces through which the collimated light beam 1219 passes.
  • the first optical element 1214 includes a prism whose thickness varies in at least one radial direction.
  • the first optical element 1214 includes a wedge prism that refracts the collimated beam 1219.
  • the scanning module 1202 further includes a second optical element 1215, the second optical element 1215 rotates around the rotation axis 1209, and the rotation speed of the second optical element 1215 is different from the rotation speed of the first optical element 1214.
  • the second optical element 1215 is used to change the direction of the light beam projected by the first optical element 1214.
  • the second optical element 1215 is connected to another driver 1217, and the driver 1217 drives the second optical element 1215 to rotate.
  • the first optical element 1214 and the second optical element 1215 can be driven by the same or different drivers, so that the rotation speeds and/or rotation directions of the first optical element 1214 and the second optical element 1215 differ, thereby projecting the collimated light beam 1219 into different directions of the outside space.
  • the controller 1218 controls the drivers 1216 and 1217 to drive the first optical element 1214 and the second optical element 1215, respectively.
  • the rotation speed of the first optical element 1214 and the second optical element 1215 can be determined according to the area and pattern expected to be scanned in actual applications.
  • the drivers 1216 and 1217 may include motors or other drivers.
  • the second optical element 1215 includes a pair of opposite non-parallel surfaces through which the light beam passes.
  • the second optical element 1215 includes a prism whose thickness varies in at least one radial direction.
  • the second optical element 1215 includes a wedge prism.
  • the scanning module 1202 further includes a third optical element (not shown) and a driver for driving the third optical element to move.
  • the third optical element includes a pair of opposite non-parallel surfaces, and the light beam passes through the pair of surfaces.
  • the third optical element includes a prism whose thickness varies in at least one radial direction.
  • the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotation speeds and/or rotation directions.
  • each optical element in the scanning module 1202 can project light to different directions, such as directions 1211 and 1213, so that the space around the distance measuring device 1200 is scanned.
  • FIG. 13 is a schematic diagram of a scanning pattern of the distance measuring device 1200. It is understandable that when the speed of the optical element in the scanning module changes, the scanning pattern will also change accordingly.
  • when the light 1211 projected by the scanning module 1202 hits the detection object 1210, a part of the light is reflected by the detection object 1210 to the distance measuring device 1200 in a direction opposite to the projected light 1211; the return light 1212 reflected by the detection object 1210 passes through the scanning module 1202 and then enters the collimating element 1204.
  • the detector 1205 and the transmitter 1203 are placed on the same side of the collimating element 1204, and the detector 1205 is used to convert at least part of the return light passing through the collimating element 1204 into an electrical signal.
  • the distance and orientation detected by the distance measuring device 1200 can be used for remote sensing, obstacle avoidance, surveying and mapping, modeling, navigation, etc.
  • the distance measuring device of the embodiment of the present invention can be applied to a mobile platform, and the distance measuring device can be installed on the platform body of the mobile platform.
  • a mobile platform with a distance measuring device can measure the external environment, for example, measuring the distance between the mobile platform and obstacles for obstacle avoidance and other purposes, and for two-dimensional or three-dimensional mapping of the external environment.
  • the mobile platform includes at least one of an unmanned aerial vehicle, a car, a remote control car, a robot, and a camera.
  • when the ranging device is applied to an unmanned aerial vehicle, the platform body is the fuselage of the unmanned aerial vehicle.
  • when the distance measuring device is applied to a car, the platform body is the body of the car.
  • the car can be a self-driving car or a semi-automatic driving car, and there is no restriction here.
  • when the distance measuring device is applied to a remote control car, the platform body is the body of the remote control car.
  • when the distance measuring device is applied to a robot, the platform body is the robot itself.
  • when the distance measuring device is applied to a camera, the platform body is the camera itself.
  • the foregoing exemplarily describes the coding and decoding method, system, storage medium, and collection device for collecting point cloud data according to the embodiment of the present invention for the position coordinates of the point cloud data.
  • the methods, systems, and storage media for encoding and decoding the position coordinates of point cloud data according to the embodiments of the present invention use the value range of the field of view of the point cloud data acquisition device during acquisition to limit the division performed in the encoding and decoding of position coordinates.
  • limiting this value range can quickly exclude regions where no point cloud points can exist, thereby reducing the time overhead of the encoding and decoding process and improving coding and decoding efficiency.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by their combination.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present invention.
  • the present invention can also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals can be downloaded from Internet websites, or provided on carrier signals, or provided in any other form.

Abstract

The present invention relates to methods and systems for encoding and decoding position coordinates of point cloud data, as well as a storage medium. The method for encoding the position coordinates of point cloud data comprises: determining initial position coordinates of an initial block according to the position coordinates of input point cloud data; performing spatial partitioning and encoding on the initial block to obtain an intermediate encoding result; and performing arithmetic encoding on the intermediate encoding result to obtain a final encoding result. In the process of performing spatial partitioning and encoding on the initial block, whether to adjust the coordinate range of a partitioned block is determined on the basis of the field-of-view range of the acquisition device that acquires the point cloud data. By using the value range of the field of view of the point cloud data acquisition device during acquisition to limit the value ranges of the partitioning, encoding, and decoding processes of the position coordinates, the methods and systems of the embodiments of the present invention can quickly exclude regions where no point cloud data can exist, thereby reducing the time overhead of the encoding and decoding processes and improving coding and decoding efficiency.
PCT/CN2019/089787 2019-06-03 2019-06-03 Procédés et systèmes de codage et de décodage de coordonnées de position de données de nuage de points, et support d'informations WO2020243874A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980008580.XA CN111602176A (zh) 2019-06-03 2019-06-03 点云数据的位置坐标的编解码方法、系统和存储介质
PCT/CN2019/089787 WO2020243874A1 (fr) 2019-06-03 2019-06-03 Procédés et systèmes de codage et de décodage de coordonnées de position de données de nuage de points, et support d'informations

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/089787 WO2020243874A1 (fr) 2019-06-03 2019-06-03 Procédés et systèmes de codage et de décodage de coordonnées de position de données de nuage de points, et support d'informations

Publications (1)

Publication Number Publication Date
WO2020243874A1 true WO2020243874A1 (fr) 2020-12-10

Family

ID=72191952

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089787 WO2020243874A1 (fr) 2019-06-03 2019-06-03 Procédés et systèmes de codage et de décodage de coordonnées de position de données de nuage de points, et support d'informations

Country Status (2)

Country Link
CN (1) CN111602176A (fr)
WO (1) WO2020243874A1 (fr)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114187401A (zh) * 2020-09-15 2022-03-15 鹏城实验室 一种点云属性编码方法、解码方法、编码设备及解码设备
CN112565794B (zh) * 2020-12-03 2022-10-04 西安电子科技大学 一种点云孤立点编码、解码方法及装置
CN114598892B (zh) * 2020-12-07 2024-01-30 腾讯科技(深圳)有限公司 点云数据编码方法、解码方法、装置、设备及存储介质
WO2022141453A1 (fr) * 2020-12-31 2022-07-07 深圳市大疆创新科技有限公司 Procédé et appareil de codage de nuage de points, procédé et appareil de décodage de nuage de points, et système de codage et de décodage
CN113836095A (zh) * 2021-09-26 2021-12-24 广州极飞科技股份有限公司 一种点云数据存储方法、装置、存储介质及电子设备
CN115102934B (zh) * 2022-06-17 2023-09-19 腾讯科技(深圳)有限公司 点云数据的解码方法、编码方法、装置、设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715496A (zh) * 2015-03-23 2015-06-17 中国科学技术大学 云环境下基于三维点云模型的图像预测方法、系统及装置
CN108171761A (zh) * 2017-12-13 2018-06-15 北京大学 一种基于傅里叶图变换的点云帧内编码方法及装置
CN108335335A (zh) * 2018-02-11 2018-07-27 北京大学深圳研究生院 一种基于增强图变换的点云属性压缩方法
CN109076173A (zh) * 2017-11-21 2018-12-21 深圳市大疆创新科技有限公司 输出影像生成方法、设备及无人机

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016152586A (ja) * 2015-02-19 2016-08-22 国立大学法人電気通信大学 プロジェクションマッピング装置、映像投影制御装置、映像投影制御方法および映像投影制御プログラム
CN106846425B (zh) * 2017-01-11 2020-05-19 东南大学 一种基于八叉树的散乱点云压缩方法


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220167016A1 (en) * 2019-06-26 2022-05-26 Tencent America LLC Implicit quadtree or binary-tree geometry partition for point cloud coding
US11743498B2 (en) * 2019-06-26 2023-08-29 Tencent America LLC Implicit quadtree or binary-tree geometry partition for point cloud coding

Also Published As

Publication number Publication date
CN111602176A (zh) 2020-08-28

Similar Documents

Publication Publication Date Title
WO2020243874A1 (fr) Procédés et systèmes de codage et de décodage de coordonnées de position de données de nuage de points, et support d'informations
CN111247802B (zh) 用于三维数据点集处理的方法和设备
US20210343047A1 (en) Three-dimensional data point encoding and decoding method and device
CN111247798B (zh) 对三维数据点集进行编码或解码的方法和设备
WO2022126427A1 (fr) Procédé de traitement de nuage de points, appareil de traitement de nuage de points, plateforme mobile, et support de stockage informatique
US20210335015A1 (en) Three-dimensional data point encoding and decoding method and device
CN114503440A (zh) 基于树的点云编解码的角度模式
US11580672B2 (en) Angular mode simplification for geometry-based point cloud compression
Tu et al. Motion analysis and performance improved method for 3D LiDAR sensor data compression
JP6919764B2 (ja) レーダ画像処理装置、レーダ画像処理方法、および、プログラム
US20210255289A1 (en) Light detection method, light detection device, and mobile platform
KR102025113B1 (ko) LiDAR를 이용한 이미지 생성 방법 및 이를 위한 장치
US11842520B2 (en) Angular mode simplification for geometry-based point cloud compression
US20220108493A1 (en) Encoding/decoding method and device for three-dimensional data points
WO2021232227A1 (fr) Procédé de construction de trame de nuage de points, procédé de détection de cible, appareil de télémétrie, plateforme mobile et support de stockage
CN112689997B (zh) 点云的排序方法和装置
CN116091700A (zh) 三维重建方法、装置、终端设备及计算机可读介质
CN114080545A (zh) 数据处理方法、装置、激光雷达和存储介质
WO2023112105A1 (fr) Dispositif de codage, procédé de codage et programme
WO2020142879A1 (fr) Procédé de traitement de données, dispositif de détection, dispositif de traitement de données et plateforme mobile
EP4172941A1 (fr) Angles de laser triés pour une compression en nuage de points basée sur la géométrie (g-pcc)
JP2022139722A (ja) 情報処理装置、制御方法、プログラム及び記憶媒体
CN115265558A (zh) 全局初始化定位方法、装置和自动驾驶车辆

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931614

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931614

Country of ref document: EP

Kind code of ref document: A1