CN115082512A - Point cloud motion estimation method and device, electronic equipment and storage medium


Info

Publication number
CN115082512A
CN115082512A (application CN202210806269.3A)
Authority
CN
China
Prior art keywords
motion
point cloud
frame
cloud frame
estimated
Prior art date
Legal status
Pending
Application number
CN202210806269.3A
Other languages
Chinese (zh)
Inventor
李革
邵薏婷
李宏
Current Assignee
Peking University Shenzhen Graduate School
Original Assignee
Peking University Shenzhen Graduate School
Priority date
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School filed Critical Peking University Shenzhen Graduate School
Priority to CN202210806269.3A priority Critical patent/CN115082512A/en
Publication of CN115082512A publication Critical patent/CN115082512A/en
Priority to PCT/CN2022/136842 priority patent/WO2024007523A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/223: Analysis of motion using block-matching

Abstract

The disclosure provides a method and a device for estimating point cloud motion, an electronic device and a storage medium, wherein a point cloud frame to be estimated and a reference point cloud frame are obtained; sampling a motion center point in a reference point cloud frame to construct a point cloud motion deformation map; determining a first matching relation between a reference point cloud frame and a point cloud frame to be estimated; constructing a point cloud motion energy equation according to the first matching relation and the point cloud motion deformation map; solving a point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to a motion center point; interpolating the sparse non-rigid motion field to determine a fine non-rigid motion field; compensating the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame; determining a second matching relation between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; and iteratively estimating the interframe rate distortion function to determine the point cloud motion. The estimation precision of the point cloud motion can be improved.

Description

Point cloud motion estimation method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of point cloud compression, and in particular, to a method and an apparatus for estimating a point cloud motion, an electronic device, and a storage medium.
Background
At present, with the rapid development of three-dimensional scanning equipment, it has become possible to rapidly digitize three-dimensional information in the real world, and point clouds are gradually becoming an effective way to express three-dimensional scenes and the three-dimensional surfaces of objects. A point cloud is obtained by sampling the surface of an object with a three-dimensional scanning device; a single point cloud frame contains a large number of points, and each point carries geometric information as well as attribute information such as color and texture, so the amount of information is large. A dynamic point cloud is a collection of point cloud frames continuously acquired from a moving object or a moving scene, so the data volume of a point cloud sequence is even larger. Considering the large data volume of point clouds and the limited bandwidth of network transmission, point cloud compression is an imperative task. How to fully exploit inter-frame correlation to remove temporal redundancy is a key problem of dynamic point cloud compression, and point cloud motion estimation is an active and promising research direction in this regard.
In the existing point cloud motion estimation scheme, the point cloud is spatially decomposed with an octree to obtain macroblocks and the texture variance within each macroblock is calculated; for a macroblock whose texture variance is smaller than a threshold, a reference block at the same spatial position is selected from the reference frame, a matching relationship between the reference block and the current block is constructed, and the point cloud motion is estimated.
Disclosure of Invention
The embodiment of the disclosure at least provides a method and a device for estimating point cloud motion, an electronic device and a storage medium, which can improve the estimation precision of the point cloud motion.
The embodiment of the disclosure provides a method for estimating point cloud motion, which comprises the following steps:
acquiring a point cloud frame to be estimated, and determining a reference point cloud frame corresponding to the point cloud frame to be estimated;
sampling a plurality of motion center points in the reference point cloud frame, and constructing a point cloud motion deformation graph reflecting the motion association relation between the motion center points based on the motion center points;
determining a first matching relation between the reference point cloud frame and the point cloud frame to be estimated; constructing a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated according to the first matching relation and the point cloud motion deformation map;
iteratively solving the point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point; interpolating the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame;
compensating the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame;
determining a second matching relationship between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; and iteratively estimating the inter-frame rate distortion function, and determining the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
In an optional embodiment, the sampling a plurality of motion center points in the reference point cloud frame, and constructing a point cloud motion deformation map reflecting a motion association relationship between the motion center points based on the motion center points specifically includes:
selecting a target coordinate axis with the largest geometric distribution variance from the reference point cloud frame, wherein the target coordinate axis is formed by connecting points in the reference point cloud frame;
sampling a plurality of motion center points on the target coordinate axis according to a preset sampling step length;
dividing the reference point cloud frame into a plurality of point cloud subsets by taking the motion center point as a center according to a preset sampling radius;
traversing all the motion center points, and determining whether an intersection point exists between the point cloud subsets corresponding to every two motion center points; if yes, connecting the motion center points to form the point cloud motion deformation graph.
In an optional embodiment, the determining a first matching relationship between the reference point cloud frame and the point cloud frame to be estimated; according to the first matching relationship and the point cloud motion deformation map, a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated is constructed, and the method specifically comprises the following steps:
aiming at each point in the reference point cloud frame, performing motion compensation on the point according to the corresponding motion center point, and determining a motion compensation point corresponding to the point;
determining the point matching relationship between the motion compensation point and the corresponding point in the point cloud frame to be estimated, and forming the first matching relationship by all the point matching relationships;
according to the first matching relation, constructing a geometric distortion item reflecting geometric distortion between the reference point cloud frame and the point cloud frame to be estimated;
determining a motion association relation between the motion central points according to the point cloud motion deformation graph, and constructing a motion difference item reflecting motion difference between the motion central points according to the motion association relation;
configuring a corresponding motion rigid constraint item for each motion central point;
and constructing the motion energy equation based on the geometric distortion term, the motion difference term and the motion rigidity constraint term.
In an alternative embodiment, the point matching relationship includes a forward matching relationship and a backward matching relationship, where the forward matching relationship represents the point matching relationship from the reference point cloud frame to the point cloud frame to be estimated; and the backward matching relation represents the point-to-point matching relation from the point cloud frame to be estimated to the reference point cloud frame.
In an alternative embodiment, the point cloud motion energy equation is solved iteratively to determine a sparse non-rigid motion field corresponding to the motion center point; compensating the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame, specifically comprising:
iteratively solving the point cloud motion energy equation, determining a target non-rigid motion field which minimizes the point cloud motion energy equation, and taking the target non-rigid motion field as a sparse non-rigid motion field corresponding to the motion center point;
processing the sparse non-rigid motion field by an interpolation method to estimate motion vectors corresponding to other points except the motion center point in the reference point cloud frame;
and combining the motion vector with the sparse non-rigid motion field to determine a fine non-rigid motion field corresponding to the reference point cloud frame.
In an optional embodiment, the determining a second matching relationship between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; iteratively estimating the inter-frame rate distortion function, and determining the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame, specifically including:
respectively dividing the point cloud frame to be estimated and the reference motion compensation frame into a plurality of motion prediction blocks in the same dividing mode;
determining the inter-block matching relationship between the point cloud frame to be estimated and the corresponding motion prediction block in the reference motion compensation frame, wherein the second matching relationship is formed by all the inter-block matching relationships;
constructing an inter-block rate distortion function between the corresponding motion prediction blocks based on the inter-block matching relationship, wherein the inter-frame rate distortion function is formed by all the inter-block rate distortion functions;
calculating a motion estimation vector which minimizes the rate distortion cost corresponding to the inter-block rate distortion function, and taking the motion estimation vector as the inter-block point cloud motion of the motion prediction block between the point cloud frame to be estimated and the reference point cloud frame;
and combining all the inter-block point cloud motions to form the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
The embodiment of the present disclosure further provides an estimation apparatus for point cloud motion, the apparatus including:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a point cloud frame to be estimated and determining a reference point cloud frame corresponding to the point cloud frame to be estimated;
the motion deformation map building module is used for sampling a plurality of motion center points in the reference point cloud frame and building a point cloud motion deformation map reflecting the motion association relation between the motion center points on the basis of the motion center points;
the energy equation construction module is used for determining a first matching relation between the reference point cloud frame and the point cloud frame to be estimated; according to the first matching relation and the point cloud motion deformation map, constructing a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated;
the non-rigid motion field determining module is used for solving the point cloud motion energy equation in an iteration mode so as to determine a sparse non-rigid motion field corresponding to the motion center point; compensating the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame;
a reference frame motion compensation module, configured to compensate the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame;
the local motion estimation module is used for determining a second matching relation between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; and iteratively estimating the inter-frame rate distortion function, and determining the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
In an optional implementation manner, the motion deformation map building module is specifically configured to:
selecting a target coordinate axis with the largest geometric distribution variance from the reference point cloud frame, wherein the target coordinate axis is formed by connecting points in the reference point cloud frame;
sampling a plurality of motion center points on the target coordinate axis according to a preset sampling step length;
dividing the reference point cloud frame into a plurality of point cloud subsets by taking the motion center point as a center according to a preset sampling radius;
traversing all the motion center points, and determining whether an intersection point exists between the point cloud subsets corresponding to every two motion center points; if yes, connecting the motion center points to form the point cloud motion deformation graph.
In an optional implementation manner, the energy equation building module is specifically configured to:
aiming at each point in the reference point cloud frame, performing motion compensation on the point according to the corresponding motion center point, and determining a motion compensation point corresponding to the point;
determining the point matching relationship between the motion compensation point and the corresponding point in the point cloud frame to be estimated, and forming the first matching relationship by all the point matching relationships;
according to the first matching relation, constructing a geometric distortion item reflecting geometric distortion between the reference point cloud frame and the point cloud frame to be estimated;
determining a motion association relation between the motion central points according to the point cloud motion deformation graph, and constructing a motion difference item reflecting motion difference between the motion central points according to the motion association relation;
configuring a corresponding motion rigid constraint item for each motion central point;
and constructing the motion energy equation based on the geometric distortion term, the motion difference term and the motion rigidity constraint term.
In an alternative embodiment, the non-rigid motion field determination module is specifically configured to:
iteratively solving the point cloud motion energy equation, determining a target non-rigid motion field which minimizes the point cloud motion energy equation, and taking the target non-rigid motion field as a sparse non-rigid motion field corresponding to the motion center point;
processing the sparse non-rigid motion field through an interpolation method to estimate motion vectors corresponding to other points except the motion central point in the reference point cloud frame;
and combining the motion vector with the sparse non-rigid motion field to determine a fine non-rigid motion field corresponding to the reference point cloud frame.
In an optional implementation manner, the local motion estimation module is specifically configured to:
respectively dividing the point cloud frame to be estimated and the reference motion compensation frame into a plurality of motion prediction blocks in the same dividing mode;
determining the inter-block matching relationship between the point cloud frame to be estimated and the corresponding motion prediction block in the reference motion compensation frame, wherein the second matching relationship is formed by all the inter-block matching relationships;
constructing an inter-block rate distortion function between the corresponding motion prediction blocks based on the inter-block matching relationship, wherein the inter-frame rate distortion function is formed by all the inter-block rate distortion functions;
calculating a motion estimation vector which minimizes the rate distortion cost corresponding to the inter-block rate distortion function, and taking the motion estimation vector as the inter-block point cloud motion of the motion prediction block between the point cloud frame to be estimated and the reference point cloud frame;
and combining all the inter-block point cloud motions to form the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
An embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the above method of estimating a point cloud motion, or steps of any possible implementation of the above method of estimating a point cloud motion.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the method for estimating a point cloud motion or the steps in any possible implementation manner of the method for estimating a point cloud motion.
According to the estimation method, the estimation device, the electronic equipment and the storage medium of the point cloud motion, a reference point cloud frame corresponding to a point cloud frame to be estimated is determined by acquiring the point cloud frame to be estimated; sampling a plurality of motion center points in a reference point cloud frame, and constructing a point cloud motion deformation graph reflecting the motion association relation between the motion center points based on the motion center points; determining a first matching relation between a reference point cloud frame and a point cloud frame to be estimated; according to the first matching relation and the point cloud motion deformation map, a point cloud motion energy equation between a reference point cloud frame and a point cloud frame to be estimated is constructed; iteratively solving a point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point; interpolating the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame; compensating the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame; determining a second matching relation between the point cloud frame to be estimated and the reference motion compensation frame; based on the second matching relation, constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame; and iteratively estimating an interframe rate distortion function, and determining point cloud motion between the point cloud frame to be estimated and the reference point cloud frame. The estimation precision of the point cloud motion can be improved.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments will be briefly described below. The drawings herein are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a method for estimating a point cloud motion provided by an embodiment of the present disclosure;
FIG. 2 shows a flow chart for constructing a point cloud motion energy equation provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating an apparatus for estimating a point cloud motion provided by an embodiment of the disclosure;
fig. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
According to researches, in the existing point cloud motion estimation scheme, a macro block is obtained by performing spatial decomposition on point cloud by adopting an octree, texture variance in each macro block is calculated, and for the macro block with the texture variance smaller than a threshold value, a reference block with the same spatial position is selected from a reference frame, the matching relation between the reference block and a current block is constructed, and the point cloud motion is estimated, but the estimation precision is poor.
Based on the research, the present disclosure provides a method, an apparatus, an electronic device and a storage medium for estimating a point cloud motion, wherein a reference point cloud frame corresponding to a point cloud frame to be estimated is determined by acquiring the point cloud frame to be estimated; sampling a plurality of motion center points in a reference point cloud frame, and constructing a point cloud motion deformation graph reflecting the motion association relation between the motion center points based on the motion center points; determining a first matching relation between a reference point cloud frame and a point cloud frame to be estimated; according to the first matching relationship and the point cloud motion deformation map, a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated is constructed; iteratively solving a point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point; interpolating the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame; compensating the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame; determining a second matching relation between the point cloud frame to be estimated and the reference motion compensation frame; based on the second matching relation, constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame; and iteratively estimating an interframe rate distortion function, and determining point cloud motion between the point cloud frame to be estimated and the reference point cloud frame. The estimation precision of the point cloud motion can be improved.
To facilitate understanding of the present embodiment, first, a detailed description is given of a point cloud motion estimation method disclosed in an embodiment of the present disclosure, and an execution subject of the point cloud motion estimation method provided in the embodiment of the present disclosure is generally a computer device with certain computing capability, where the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or a server or other processing device. In some possible implementations, the method of estimating the point cloud motion may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a method for estimating a point cloud motion provided in an embodiment of the present disclosure is shown, where the method includes steps S101 to S106, where:
s101, a point cloud frame to be estimated is obtained, and a reference point cloud frame corresponding to the point cloud frame to be estimated is determined.
In a specific implementation, the method for estimating the point cloud motion is applied to a time sequence dynamic point cloud frame sequence, which is composed of a plurality of point cloud frames, and each point cloud frame includes a point cloud composed of a plurality of points. Motion estimation is performed for the point cloud frame to be estimated starting from the second frame in the time sequence dynamic point cloud frame sequence.
Preferably, in the time sequence dynamic point cloud frame sequence, a previous frame of the point cloud frame to be estimated is used as a reference point cloud frame.
S102, sampling a plurality of motion center points in the reference point cloud frame, and constructing a point cloud motion deformation graph reflecting the motion association relation between the motion center points based on the motion center points.
In specific implementation, a plurality of motion center points reflecting non-rigid motion centers in a reference point cloud frame are sampled in the reference point cloud frame, and then the motion center points are used as nodes to be connected based on all the motion center points, so that a point cloud motion deformation graph reflecting motion association relations among the motion center points is constructed.
As a possible implementation manner, the method for constructing the point cloud motion deformation map specifically includes:
selecting a target coordinate axis with the largest geometric distribution variance from the reference point cloud frame, wherein the target coordinate axis is formed by connecting points in the reference point cloud frame; sampling a plurality of motion center points on the target coordinate axis according to a preset sampling step length; dividing the reference point cloud frame into a plurality of point cloud subsets by taking the motion center point as a center according to a preset sampling radius; traversing all the motion center points, and determining whether an intersection point exists between the point cloud subsets corresponding to every two motion center points; if yes, connecting the motion center points to form the point cloud motion deformation graph.
Specifically, the target coordinate axis with the largest geometric distribution variance can be determined based on the following method:
firstly, traversing the three-dimensional coordinates of all points in a reference point cloud frame to determine the minimum coordinate x of an x axis min And the maximum coordinate x max Y-axis minimum coordinate y min And the maximum coordinate y max And z-axis minimum coordinate z min And the maximum coordinate z max . Secondly, according to the x-axis minimum coordinate x min And the maximum coordinate x max Y-axis minimum coordinate y min And maximum coordinate y max And z-axis minimum coordinate z min And the maximum coordinate z max And constructing a point cloud bounding box in the test point cloud frame, and determining the longest edge in the point cloud bounding box as a target coordinate axis with the largest geometric distribution variance in the reference point cloud frame.
Alternatively, the formula for constructing the point cloud bounding box can be expressed as:
B = (x_max - x_min) × (y_max - y_min) × (z_max - z_min)
wherein B represents a point cloud bounding box.
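As an illustrative sketch (not part of the patent text), the axis-selection step described above can be prototyped with numpy as follows; the function name and the (N, 3) array layout are assumptions made for the example.

```python
import numpy as np

def select_target_axis(ref_points: np.ndarray) -> int:
    """Pick the coordinate axis whose bounding-box edge is longest.

    ref_points: (N, 3) array of x/y/z coordinates of the reference point
    cloud frame. Returns 0, 1 or 2 for the x, y or z axis.
    """
    mins = ref_points.min(axis=0)      # (x_min, y_min, z_min)
    maxs = ref_points.max(axis=0)      # (x_max, y_max, z_max)
    extents = maxs - mins              # edge lengths of the bounding box
    return int(np.argmax(extents))     # longest edge ~ largest geometric spread
```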
Further, in the process of dividing the reference point cloud frame into a plurality of point cloud subsets, the target coordinate axis with the largest geometric distribution variance in the reference point cloud frame may first be used as a rearrangement axis, and all points may be renumbered along this rearrangement axis, for example as (n_1, ..., n_i). Then, the renumbered first point n_1 is taken as the first motion center point c_1; starting from c_1 and moving along the rearrangement axis with a preset sampling step length, a plurality of motion center points (c_1, ..., c_k) are sampled on the axis at equal intervals. All remaining points in the reference point cloud frame other than the motion center points are then traversed, and the Euclidean distance between each remaining point and each motion center point is calculated; if the Euclidean distance from a point to some motion center point is smaller than the preset sampling radius, the point is determined to belong to the point cloud subset of that sampling center, and if the distances from the point to all motion centers are larger than the preset sampling radius, the point is used as a new sampling center. Finally, after the traversal is finished, a series of motion center points and their corresponding point cloud subsets are generated. There may be intersections of points between the point cloud subsets; such intersection points fall within the radiation range of multiple point cloud subsets.
It should be noted that the preset sampling step length and the preset sampling radius may be selected according to actual needs, and are not limited herein.
Further, in the process of constructing the point cloud motion deformation map, a graph G may be constructed for the sampled motion center points, with all motion center points C = {c_1, c_2, ..., c_k} serving as nodes of the graph G. When the point cloud subset P_i corresponding to motion center point c_i and the point cloud subset P_j of motion center point c_j share an intersecting set of points, an edge ε_ij is used to connect the two motion centers; after all such connections are made, the point cloud motion deformation map is formed.
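The sampling and graph-construction procedure above can be sketched in Python roughly as follows; the helper name, the greedy subset assignment and the brute-force distance loop are simplifications assumed for illustration rather than the patent's exact procedure.

```python
import numpy as np

def build_deformation_graph(ref_points, axis, step, radius):
    """Sample motion centers along `axis` and connect centers whose
    point-cloud subsets share at least one point.

    ref_points: (N, 3) array; step: sampling stride along the rearranged
    axis; radius: sampling radius used to assign points to subsets.
    Returns (centers, subsets, edges).
    """
    order = np.argsort(ref_points[:, axis])        # renumber along the axis
    sorted_pts = ref_points[order]

    # Equally spaced seed centers along the rearranged axis.
    centers = [sorted_pts[0]]
    for p in sorted_pts[1:]:
        if p[axis] - centers[-1][axis] >= step:
            centers.append(p)

    # Assign every point to the subsets of nearby centers; points farther
    # than `radius` from all current centers become new centers themselves.
    subsets = [set() for _ in centers]
    for idx, p in enumerate(sorted_pts):
        dists = [np.linalg.norm(p - c) for c in centers]
        hit = [j for j, d in enumerate(dists) if d < radius]
        if hit:
            for j in hit:
                subsets[j].add(idx)    # a point may fall in several subsets
        else:
            centers.append(p)
            subsets.append({idx})

    # Connect two centers when their subsets intersect.
    edges = [(i, j)
             for i in range(len(centers))
             for j in range(i + 1, len(centers))
             if subsets[i] & subsets[j]]
    return np.array(centers), subsets, edges
```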
S103, determining a first matching relation between the reference point cloud frame and the point cloud frame to be estimated; and constructing a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated according to the first matching relation and the point cloud motion deformation map.
In specific implementation, in order to find the optimal non-rigid transformation of each motion center point so that each point in the reference point cloud frame is as close as possible to its matched point in the point cloud frame to be estimated, the point cloud motion energy equation is introduced to help find this optimal non-rigid transformation. Here, the first matching relationship reflects the matching relationship between corresponding points of the reference point cloud frame and the point cloud frame to be estimated, and the matching relationships of all corresponding points together reflect the matching relationship between the two frames.
Wherein, the non-rigid transformation of the motion center point can be represented by a rotation matrix and a translation vector.
As a possible implementation manner, referring to fig. 2, a flowchart for constructing a point cloud motion energy equation provided in an embodiment of the present disclosure is shown, where the method includes steps S1031 to S1036, where:
and S1031, aiming at each point in the reference point cloud frame, performing motion compensation on the point according to the corresponding motion central point, and determining a motion compensation point corresponding to the point.
Here, each point in the reference point cloud frame is affected by its corresponding motion centers, and a corresponding motion compensation point can be generated.
Specifically, determining the corresponding motion compensation point in the reference point cloud frame may be expressed based on the following formula:
v̂_i = Σ_{c_j ∈ I(v_i)} ω_ij [ R_j (v_i - c_j) + c_j + t_j ]

wherein v̂_i represents the motion compensation point; v_i represents a point in the reference point cloud frame; u_j represents a point in the point cloud frame to be estimated; c_j represents a motion center point; R_j represents the rotation matrix corresponding to the motion center point; t_j represents the translation vector corresponding to the motion center point; T represents the non-rigid transformation of the motion center points; ω_ij represents the preset weight coefficient with which motion center c_j contributes to the interpolation of the motion to be estimated for point v_i in the reference point cloud frame; and I(v_i) = {c_j | dist(v_i - c_j) < r} is the set of motion centers to which the point v_i in the reference point cloud frame belongs, where r represents the preset sampling radius.

It should be noted that the value of ω_ij can be selected according to actual needs and is not specifically limited herein.
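A minimal sketch of this per-point warp, assuming numpy arrays for the motion centers and their transforms; the inverse-distance weights stand in for the preset coefficients ω_ij, which the patent leaves to the implementer.

```python
import numpy as np

def warp_point(v_i, centers, rotations, translations, radius):
    """Motion-compensate one reference point with the centers that cover it.

    v_i: (3,) point; centers: (K, 3); rotations: (K, 3, 3);
    translations: (K, 3); radius: preset sampling radius r.
    """
    d = np.linalg.norm(centers - v_i, axis=1)
    idx = np.where(d < radius)[0]          # I(v_i): centers within radius r
    if idx.size == 0:
        return v_i.copy()                  # no covering center: keep the point
    w = 1.0 / (d[idx] + 1e-8)
    w /= w.sum()                           # normalized stand-ins for omega_ij
    warped = np.zeros(3)
    for w_j, j in zip(w, idx):
        c_j, R_j, t_j = centers[j], rotations[j], translations[j]
        warped += w_j * (R_j @ (v_i - c_j) + c_j + t_j)
    return warped
```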
S1032, determining the point matching relationship between the motion compensation point and the corresponding point in the point cloud frame to be estimated, and forming the first matching relationship by all the point matching relationships.
Specifically, the point matching relationship comprises a forward matching relationship and a backward matching relationship, wherein the forward matching relationship represents the point matching relationship from the reference point cloud frame to the point cloud frame to be estimated; and the backward matching relationship represents the point-to-point matching relationship from the point cloud frame to be estimated to the reference point cloud frame.
Here, the forward matching relationship may be a matching relationship between forward matching pairs (v̂_i, u_map(i)), where v̂_i is a motion compensation point of the reference point cloud frame and u_map(i) is its corresponding matching point in the point cloud frame to be estimated; the backward matching relationship may be a matching relationship between backward matching pairs (u_j, v̂_map(j)), where u_j is a point in the point cloud frame to be estimated and v̂_map(j) is its corresponding matching point in the reference point cloud frame.
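Forward and backward correspondences of this kind are commonly obtained with nearest-neighbour search; the sketch below uses scipy's cKDTree and is an assumed realization of the point-matching step, not the patent's prescribed one.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_frames(warped_ref, target):
    """Nearest-neighbour correspondences in both directions.

    warped_ref: (N, 3) motion-compensated reference points (v_hat);
    target: (M, 3) points of the frame to be estimated (u).
    Returns a forward index array (reference -> target) and a backward
    index array (target -> reference).
    """
    tree_target = cKDTree(target)
    tree_ref = cKDTree(warped_ref)
    _, fwd = tree_target.query(warped_ref)   # u_map(i) for every v_hat_i
    _, bwd = tree_ref.query(target)          # v_hat_map(j) for every u_j
    return fwd, bwd
```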
S1033, according to the first matching relation, a geometric distortion item reflecting geometric distortion between the reference point cloud frame and the point cloud frame to be estimated is constructed.
Specifically, the geometric distortion term may be constructed based on the following formula:

E_distortion(T) = α_for Σ_{i=1..n} || v̂_i - u_map(i) ||² + α_back Σ_{j=1..m} || u_j - v̂_map(j) ||²

wherein E_distortion(T) represents the geometric distortion term; map(·) represents the index mapping function of the matching relationship between points; α_for and α_back represent the weight coefficients of the forward geometric distortion and the backward geometric distortion in E_distortion(T), respectively; the first summation forms the forward geometric distortion and the second summation forms the backward geometric distortion; n represents the number of points in the reference point cloud frame, and m represents the number of points in the point cloud frame to be estimated.
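Given such correspondence index arrays, the two-sided distortion can be evaluated as below; the default weights α_for = α_back = 1 are placeholders, not values prescribed by the patent.

```python
import numpy as np

def geometric_distortion(warped_ref, target, fwd, bwd,
                         alpha_for=1.0, alpha_back=1.0):
    """Forward + backward squared-distance distortion between the
    motion-compensated reference frame and the frame to be estimated.

    fwd[i] indexes the target point matched to warped_ref[i];
    bwd[j] indexes the warped reference point matched to target[j].
    """
    forward = np.sum((warped_ref - target[fwd]) ** 2)    # reference -> target
    backward = np.sum((target - warped_ref[bwd]) ** 2)   # target -> reference
    return alpha_for * forward + alpha_back * backward
```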
s1034, determining a motion association relation between the motion central points according to the point cloud motion deformation graph, and constructing a motion difference item reflecting the motion difference between the motion central points according to the motion association relation.
Here, the motion correlation between the motion center points may be represented by a translation vector and a rotation matrix corresponding to each motion center point, and the motion correlation between the motion center points may be further expanded to be a motion correlation between the sub-point cloud sets in the reference point cloud frame.
Specifically, the motion difference term may be constructed based on the following formula:

E_motionD(T) = Σ_{i=1..k} Σ_{c_j ∈ N(c_i)} || R_i (c_j - c_i) + c_i + t_i - (c_j + t_j) ||²

wherein E_motionD(T) represents the motion difference term; c_i and c_j represent motion center points; R_i represents the rotation matrix corresponding to motion center point c_i; t_i and t_j represent the translation vectors corresponding to motion center points c_i and c_j; k represents the number of motion center points in the reference point cloud frame; and N(c_i) = {c_j | ε_{i,j} = 1} represents the set of neighboring motion center points connected to motion center point c_i in the point cloud motion deformation map.
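A sketch of this graph regularizer under the reconstruction above; the neighbour-list representation of the deformation graph is an assumption of the example, and the patent's exact expression is given in the original figure.

```python
import numpy as np

def motion_difference(centers, rotations, translations, neighbors):
    """Penalise disagreement between neighbouring motion centers on the
    deformation graph: node i's transform should move its neighbour c_j
    close to where c_j's own translation moves it.

    neighbors[i] is the list of indices in N(c_i).
    """
    e = 0.0
    for i, nbrs in enumerate(neighbors):
        c_i, R_i, t_i = centers[i], rotations[i], translations[i]
        for j in nbrs:
            c_j, t_j = centers[j], translations[j]
            pred = R_i @ (c_j - c_i) + c_i + t_i   # c_j as deformed by node i
            e += np.sum((pred - (c_j + t_j)) ** 2)
    return e
```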
and S1035, configuring a corresponding motion rigidity constraint item for each motion center point.
Here, a motion rigidity constraint term is introduced for the non-rigid motion of each motion center, wherein the motion rigidity constraint term can be determined according to the rotation matrix corresponding to the motion center point.
Specifically, the motion rigidity constraint term may be constructed based on the following formula:

E_motionC(T) = Σ_{j=1..k} || R_j - SVD(R_j) ||²

wherein E_motionC(T) represents the motion rigidity constraint term; R_j represents the rotation matrix corresponding to the motion center point; SVD(R_j) is the feature matrix obtained by performing singular value decomposition on the rotation matrix R_j; and k represents the number of motion center points in the reference point cloud frame.
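Assuming the constraint measures how far each R_j is from its orthogonal projection obtained via SVD, a sketch could look like the following; this is an interpretation of the term, since the patent's exact expression is in the original figure.

```python
import numpy as np

def rigidity_constraint(rotations):
    """Distance of each R_j from the nearest orthogonal matrix, obtained
    by projecting R_j via its singular value decomposition."""
    e = 0.0
    for R in rotations:
        U, _, Vt = np.linalg.svd(R)
        R_proj = U @ Vt                    # closest orthogonal matrix to R
        e += np.sum((R - R_proj) ** 2)     # squared Frobenius distance
    return e
```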
S1036, constructing the motion energy equation based on the geometric distortion term, the motion difference term and the motion rigidity constraint term.
In specific implementation, corresponding weight coefficients are configured for the motion difference term and the motion rigidity constraint term respectively, and the motion energy equation is constructed by summing the motion difference term and the motion rigidity constraint term after the weight coefficients are configured and the geometric distortion term.
Specifically, the motion energy equation can be constructed based on the following formula:

J = E_distortion(T) + λ_d E_motionD(T) + λ_c E_motionC(T)

wherein J represents the point cloud motion energy equation; E_distortion(T) represents the geometric distortion term; E_motionD(T) represents the motion difference term; E_motionC(T) represents the motion rigidity constraint term; λ_d represents the weight coefficient corresponding to the motion difference term; and λ_c represents the weight coefficient corresponding to the motion rigidity constraint term.

It should be noted that the weight coefficient λ_d corresponding to the motion difference term and the weight coefficient λ_c corresponding to the motion rigidity constraint term can be selected according to actual needs and are not specifically limited herein.
S104, iteratively solving the point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point; and carrying out interpolation aiming at the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame.
In specific implementation, after the point cloud motion energy equation for point cloud non-rigid motion estimation is constructed, the equation is solved iteratively, and the sparse non-rigid motion field that minimizes the point cloud motion energy equation is calculated with the optimization of this equation as the objective. However, since the sparse non-rigid motion field consists only of the non-rigid motions of the motion center points in the reference point cloud frame, it cannot fully reflect the non-rigid motion states of all points in the reference point cloud frame; interpolation up-sampling therefore needs to be performed on the sparse non-rigid motion field to obtain a fine non-rigid motion field reflecting the non-rigid motion states of all points in the reference point cloud frame.
The point cloud motion reflected by the sparse non-rigid motion field and the fine non-rigid motion field is a motion based on the reference point cloud frame.
Specifically, the motion of other points in the reference point cloud frame except the motion center point may be estimated by using a joint interpolation method based on distance correlation, and the motion of other points in the reference point cloud frame except the motion center point is compensated into the sparse non-rigid motion field, so as to obtain the fine non-rigid motion field, where the content in step S1031 may be referred to in a manner of estimating the motion of other points in the reference point cloud frame except the motion center point.
As a possible implementation manner, the method for determining the fine non-rigid motion field corresponding to the reference point cloud frame may specifically include the following steps S1041 to S1043:
s1041, iteratively solving the point cloud motion energy equation, determining a target non-rigid motion field minimizing the point cloud motion energy equation, and taking the target non-rigid motion field as a sparse non-rigid motion field corresponding to the motion center point.
Specifically, the method for iteratively solving the point cloud motion energy equation may include: before the first iteration, the non-rigid motion of each motion central point is initialized, and the initial values of the geometric distortion item, the motion difference item and the motion rigidity constraint item are obtained.
Furthermore, in each iteration, with the optimization of the energy equation as the objective, the non-rigid motion field that minimizes the point cloud motion energy equation is sought under the current point matching relationship between the motion compensation points and the corresponding points in the point cloud frame to be estimated. Based on a majorization-minimization optimization framework, surrogate functions are designed for the motion-compensated geometric distortion term and the non-rigid motion difference term to obtain a surrogate point cloud motion energy equation, its gradient equation and an initial Hessian matrix, and an iterative optimization scheme based on the L-BFGS algorithm is designed to obtain the sparse non-rigid motion field that minimizes the surrogate energy equation.
Finally, after each iteration, the geometric positions of the reference point cloud frame are updated based on the obtained sparse non-rigid motion field, and the point-to-point matching relationship between the updated reference point cloud frame and the point cloud frame to be estimated is re-established as the input of the next iteration; the result of the previous iteration is used as the input of the surrogate function in the optimization of the current iteration. The iteration terminates when a preset number of iterations is reached or the change in geometric distortion before and after an iteration is smaller than a set threshold, where the set threshold and the preset number of iterations can be selected according to actual needs and are not specifically limited herein.
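The alternation between re-matching and continuous optimization described above can be outlined as follows. This simplified loop hands the inner solve to scipy's L-BFGS-B and omits the surrogate-function construction; the callbacks build_matches, energy and update_ref are assumptions standing in for the patent's steps.

```python
import numpy as np
from scipy.optimize import minimize

def solve_sparse_motion_field(x0, build_matches, energy, update_ref,
                              max_iters=10, tol=1e-4):
    """Simplified alternation loop for the sparse non-rigid motion field.

    x0: flattened initial motion parameters (all R_j and t_j);
    build_matches(x): recomputes point correspondences for the current field;
    energy(x, matches): evaluates the point cloud motion energy J;
    update_ref(x): applies the current field to the reference frame.
    """
    x, prev = np.asarray(x0, dtype=float), np.inf
    for _ in range(max_iters):
        matches = build_matches(x)                        # fix correspondences
        res = minimize(lambda p: energy(p, matches), x,   # inner continuous solve
                       method="L-BFGS-B")
        x = res.x
        update_ref(x)                                     # warp reference frame
        if abs(prev - res.fun) < tol:                     # distortion change small
            break
        prev = res.fun
    return x
```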
And S1042, processing the sparse non-rigid motion field through an interpolation method to estimate motion vectors corresponding to other points except the motion center point in the reference point cloud frame.
And S1043, combining the motion vector with the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame.
In specific implementation, motion vectors corresponding to other points in the reference point cloud frame except for the motion center point may be supplemented to the sparse non-rigid motion field to obtain a fine non-rigid motion field that comprehensively reflects non-rigid motion states of all points in the reference point cloud frame.
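A sketch of the up-sampling step, assuming the sparse field has already been reduced to one motion vector per motion center; the inverse-distance weighting mirrors the warp used earlier and is an assumed choice rather than the patent's prescribed interpolation.

```python
import numpy as np

def densify_motion_field(ref_points, centers, center_mv, radius):
    """Interpolate a motion vector for every reference point from the
    motion vectors of the centers that cover it.

    ref_points: (N, 3); centers: (K, 3); center_mv: (K, 3) per-center
    motion vectors; radius: preset sampling radius.
    """
    fine = np.zeros_like(ref_points, dtype=float)
    for i, p in enumerate(ref_points):
        d = np.linalg.norm(centers - p, axis=1)
        idx = np.where(d < radius)[0]
        if idx.size == 0:
            idx = np.array([np.argmin(d)])   # fall back to the nearest center
        w = 1.0 / (d[idx] + 1e-8)
        fine[i] = (w[:, None] * center_mv[idx]).sum(axis=0) / w.sum()
    return fine
```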
And S105, compensating the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame.
In specific implementation, after the fine non-rigid motion field reflecting the non-rigid motion states of all points in the reference point cloud frame is obtained, motion compensation is performed on the reference point cloud frame according to the fine non-rigid motion field; the obtained reference motion compensation frame includes not only the point cloud itself but also the non-rigid motion of each point, and the points included in the reference motion compensation frame are motion compensation points.
S106, determining a second matching relation between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; and iteratively estimating the inter-frame rate distortion function, and determining the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
In specific implementation, the second matching relationship is similar to the first matching relationship, but reflects the matching relationship between the point in the point cloud frame to be estimated and the motion compensation point corresponding to the point in the reference motion compensation frame, and further reflects the matching relationship between the point cloud frame to be estimated and the reference motion compensation frame according to the matching relationship between all the corresponding points.
Here, the inter-frame rate distortion function is used to describe joint cost between a geometric distortion item and a code rate cost item between the point cloud frame to be estimated and the reference motion compensation frame in the second matching relationship.
It should be noted that the point cloud motion estimated based on the inter-frame rate distortion function is a motion based on a point cloud frame to be estimated, and the presentation manner may be a non-rigid motion field.
As a possible implementation, the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame may be determined based on the following steps S1061-S1065:
and S1061, respectively dividing the point cloud frame to be estimated and the reference motion compensation frame into a plurality of motion prediction blocks in the same dividing mode.
As a possible implementation manner, an octree partition method may be adopted, in which a point cloud frame to be estimated and a reference motion compensation frame are respectively partitioned to a preset depth according to a preset prediction block size, and a plurality of motion prediction blocks are respectively obtained in the point cloud frame to be estimated and the reference motion compensation frame.
It should be noted that the preset prediction block size and the preset depth of the octree partition method may be selected according to actual needs, and are not limited herein.
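For illustration, a uniform cubic-block grouping can stand in for the fixed-depth octree partition; block_size plays the role of the preset prediction block size, and both frames must be partitioned with the same origin and block size so that their blocks correspond.

```python
import numpy as np
from collections import defaultdict

def split_into_blocks(points, block_size):
    """Group points into axis-aligned cubes of edge `block_size`.

    Returns a dict mapping an integer block index (bx, by, bz) to the list
    of point indices inside that block; only non-empty blocks appear.
    """
    blocks = defaultdict(list)
    keys = np.floor(points / block_size).astype(np.int64)  # integer block index
    for i, key in enumerate(map(tuple, keys)):
        blocks[key].append(i)
    return blocks
```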
S1062, determining the inter-block matching relationship between corresponding motion prediction blocks in the point cloud frame to be estimated and the reference motion compensation frame, wherein all the inter-block matching relationships form the second matching relationship.
It should be noted that the motion prediction block processed in this step is a non-empty block, i.e., a motion prediction block with a point.
S1063, constructing an inter-block rate distortion function between the corresponding motion prediction blocks based on the inter-block matching relationship, wherein the inter-frame rate distortion function is formed by all the inter-block rate distortion functions.
S1064, calculating a motion estimation vector which enables the rate distortion cost corresponding to the inter-block rate distortion function to be minimum, and taking the motion estimation vector as the inter-block point cloud motion of the motion prediction block between the point cloud frame to be estimated and the reference point cloud frame.
Here, the inter-block point cloud motion may reflect a local point cloud motion between the point cloud frame to be estimated and the reference point cloud frame, and the motion is based on the point cloud frame to be estimated.
In a specific implementation, the method for calculating a motion estimation vector that minimizes a rate distortion cost corresponding to an inter-block rate distortion function may include:
step one, setting a search range and an initial search step length, taking the geometric position of the lower left corner of the current motion prediction block as an initial search point, traversing 1 block with the same position and 18 coplanar and collinear blocks in the search range of the reference point cloud frame, respectively calculating the rate distortion cost of the 19 search blocks, and selecting a point with the minimum cost as the initial position of the next iteration. And if the result of the search optimization is still the initial search point, entering a step three, and otherwise, entering a step two.
And step two, updating the search starting points to two points with the minimum cost in the search results of the step one, keeping the search step unchanged, continuously traversing 19 search blocks in the search range of the reference point cloud frame, calculating corresponding rate distortion cost, and selecting the point with the minimum cost as the starting position of the next iteration. And if the result of the search optimization is still the initial search point, entering the step three, otherwise, entering the step two.
And step three, updating the search starting point to two points (including the original starting search point) with the minimum cost in the search result of the step one, reducing the search step length by half on the premise of not reaching the motion precision, continuously traversing 19 search blocks in the search range of the reference point cloud frame, and calculating the corresponding rate-distortion cost. If the searching step does not reach the motion precision, continuing circulation, and judging whether the searching optimization result is still the initial searching point, if so, entering a third step, and if not, entering a second step; if the search step reaches the motion precision, the cycle stops, and the motion search corresponding to the motion prediction block is finished.
It should be noted that the number of the search blocks, the search range, and the search step may be selected according to actual needs, and the embodiments of the present application only provide exemplary references, and are not limited herein.
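The three-step search can be condensed into a coarse-to-fine loop of the following shape. It tracks a single best candidate rather than the two retained starting points of the procedure above, and rd_cost is an assumed callable that evaluates the inter-block rate distortion cost for a candidate displacement.

```python
import numpy as np

# The co-located block plus its 18 coplanar and collinear neighbours.
OFFSETS = np.array([(dx, dy, dz)
                    for dx in (-1, 0, 1)
                    for dy in (-1, 0, 1)
                    for dz in (-1, 0, 1)
                    if abs(dx) + abs(dy) + abs(dz) in (0, 1, 2)])  # 19 candidates

def block_motion_search(rd_cost, start, step, min_step):
    """Coarse-to-fine search over the 19 candidate displacements.

    rd_cost(v): rate distortion cost of predicting the current block with
    displacement v; start: initial displacement (e.g. the zero vector);
    step halves whenever the centre candidate stays best, until min_step.
    """
    best = np.asarray(start, dtype=float)
    for _ in range(10_000):                      # safety bound for the sketch
        cand = best + OFFSETS * step
        costs = [rd_cost(v) for v in cand]
        new_best = cand[int(np.argmin(costs))]
        if np.allclose(new_best, best):          # centre still wins: refine
            step /= 2.0
            if step < min_step:
                break                            # motion precision reached
        else:
            best = new_best                      # move the search centre
    return best
```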
And S1065, combining all the inter-block point cloud motions to form point cloud motions between the point cloud frame to be estimated and the reference point cloud frame.
According to the estimation method of the point cloud motion, a point cloud frame to be estimated is obtained, and a reference point cloud frame corresponding to the point cloud frame to be estimated is determined; sampling a plurality of motion center points in a reference point cloud frame, and constructing a point cloud motion deformation graph reflecting the motion association relation between the motion center points based on the motion center points; determining a first matching relation between a reference point cloud frame and a point cloud frame to be estimated; according to the first matching relation and the point cloud motion deformation map, a point cloud motion energy equation between a reference point cloud frame and a point cloud frame to be estimated is constructed; iteratively solving a point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point; compensating for the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame; compensating the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame; determining a second matching relation between the point cloud frame to be estimated and the reference motion compensation frame; based on the second matching relation, constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame; and iteratively estimating an interframe rate distortion function, and determining point cloud motion between the point cloud frame to be estimated and the reference point cloud frame. The estimation precision of the point cloud motion can be improved.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution nor any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide an apparatus for estimating point cloud motion corresponding to the method for estimating point cloud motion described above. Since the principle by which the apparatus solves the problem is similar to that of the method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 3, fig. 3 is a schematic diagram of an apparatus for estimating a point cloud motion according to an embodiment of the disclosure. As shown in fig. 3, an estimation apparatus 300 for point cloud motion provided by the embodiment of the present disclosure includes:
the obtaining module 310 is configured to obtain a point cloud frame to be estimated, and determine a reference point cloud frame corresponding to the point cloud frame to be estimated.
And a motion deformation map construction module 320, configured to sample a plurality of motion center points in the reference point cloud frame, and construct a point cloud motion deformation map reflecting a motion association relationship between the motion center points based on the motion center points.
An energy equation constructing module 330, configured to determine a first matching relationship between the reference point cloud frame and the point cloud frame to be estimated; and constructing a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated according to the first matching relation and the point cloud motion deformation map.
A non-rigid motion field determining module 340, configured to iteratively solve the point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point; and interpolate the sparse non-rigid motion field to determine a fine non-rigid motion field corresponding to the reference point cloud frame.
A reference frame motion compensation module 350, configured to compensate the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame.
A local motion estimation module 360, configured to determine a second matching relationship between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; and iteratively estimating the inter-frame rate distortion function, and determining the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
According to the apparatus for estimating point cloud motion provided by the embodiments of the present disclosure, a point cloud frame to be estimated is acquired, and a reference point cloud frame corresponding to the point cloud frame to be estimated is determined; a plurality of motion center points are sampled in the reference point cloud frame, and a point cloud motion deformation graph reflecting the motion association relationship between the motion center points is constructed based on the motion center points; a first matching relationship between the reference point cloud frame and the point cloud frame to be estimated is determined; a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated is constructed according to the first matching relationship and the point cloud motion deformation graph; the point cloud motion energy equation is iteratively solved to determine a sparse non-rigid motion field corresponding to the motion center points; the sparse non-rigid motion field is interpolated to determine a fine non-rigid motion field corresponding to the reference point cloud frame; the reference point cloud frame is compensated based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame; a second matching relationship between the point cloud frame to be estimated and the reference motion compensation frame is determined; an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame is constructed based on the second matching relationship; and the inter-frame rate distortion function is iteratively estimated to determine the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame. In this way, the estimation precision of the point cloud motion can be improved.
Corresponding to the method for estimating point cloud motion in fig. 1, an embodiment of the present disclosure further provides an electronic device 400. As shown in fig. 4, which is a schematic structural diagram of the electronic device 400 provided by the embodiment of the present disclosure, the electronic device 400 includes:
a processor 41, a memory 42, and a bus 43. The memory 42 is used for storing execution instructions and includes an internal memory 421 and an external memory 422. The internal memory 421 is used for temporarily storing operation data of the processor 41 and data exchanged with the external memory 422, such as a hard disk; the processor 41 exchanges data with the external memory 422 through the internal memory 421. When the electronic device 400 runs, the processor 41 communicates with the memory 42 through the bus 43, so that the processor 41 executes the steps of the method for estimating point cloud motion in fig. 1.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method for estimating a point cloud motion in the above-mentioned method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product includes computer instructions, and the computer instructions, when executed by a processor, may perform the steps of the method for estimating a point cloud motion in the above method embodiments.
The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiments and is not described again here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a logical division, and other divisions are possible in actual implementation; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-described embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that, within the technical scope disclosed herein, they can still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes to them, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall all be covered within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of estimating a point cloud motion, the method comprising:
acquiring a point cloud frame to be estimated, and determining a reference point cloud frame corresponding to the point cloud frame to be estimated;
sampling a plurality of motion center points in the reference point cloud frame, and constructing a point cloud motion deformation graph reflecting the motion association relation between the motion center points based on the motion center points;
determining a first matching relation between the reference point cloud frame and the point cloud frame to be estimated; according to the first matching relation and the point cloud motion deformation map, constructing a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated;
iteratively solving the point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point; interpolating the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame;
compensating the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame;
determining a second matching relationship between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; and iteratively estimating the inter-frame rate distortion function, and determining the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
2. The method according to claim 1, wherein the sampling a plurality of motion center points in the reference point cloud frame, and constructing a point cloud motion deformation map reflecting a motion association relationship between the motion center points based on the motion center points, comprises:
selecting a target coordinate axis with the largest geometric distribution variance from the reference point cloud frame, wherein the target coordinate axis is formed by connecting points in the reference point cloud frame;
sampling a plurality of motion center points on the target coordinate axis according to a preset sampling step length;
dividing the reference point cloud frame into a plurality of point cloud subsets by taking the motion center point as a center according to a preset sampling radius;
traversing all the motion center points, and determining whether an intersection point exists between the point cloud subsets corresponding to every two motion center points; if yes, connecting the motion center points to form the point cloud motion deformation graph.
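By way of illustration only, and without limiting the claim, the construction described above can be sketched as follows; numpy, the choice of the point nearest each sampling level as a motion center, and the Euclidean radius test are assumptions of the sketch rather than requirements of the claim.

```python
import numpy as np

def build_deformation_graph(points: np.ndarray, step: float, radius: float):
    """Illustrative sketch of claim 2; points is an (N, 3) array of reference-frame coordinates."""
    # Target coordinate axis: the axis along which the geometric variance is largest.
    axis = int(np.argmax(points.var(axis=0)))
    coords = points[:, axis]
    # Sample motion center points along that axis with the preset sampling step.
    center_idx = [int(np.argmin(np.abs(coords - level)))
                  for level in np.arange(coords.min(), coords.max() + step, step)]
    centers = points[center_idx]
    # One point cloud subset per motion center: all points within the sampling radius.
    subsets = [set(np.flatnonzero(np.linalg.norm(points - c, axis=1) <= radius).tolist())
               for c in centers]
    # Connect two motion centers whenever their subsets share at least one point.
    edges = [(j, k) for j in range(len(centers)) for k in range(j + 1, len(centers))
             if subsets[j] & subsets[k]]
    return centers, edges
```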
3. The method according to claim 1, wherein the determining a first matching relationship between the reference point cloud frame and the point cloud frame to be estimated, and constructing a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated according to the first matching relationship and the point cloud motion deformation map, specifically comprises:
aiming at each point in the reference point cloud frame, performing motion compensation on the point according to the corresponding motion center point, and determining a motion compensation point corresponding to the point;
determining the point matching relationship between the motion compensation point and the corresponding point in the point cloud frame to be estimated, and forming the first matching relationship by all the point matching relationships;
according to the first matching relation, constructing a geometric distortion item reflecting geometric distortion between the reference point cloud frame and the point cloud frame to be estimated;
determining a motion association relation between the motion central points according to the point cloud motion deformation graph, and constructing a motion difference item reflecting motion difference between the motion central points according to the motion association relation;
configuring a corresponding motion rigid constraint item for each motion central point;
and constructing the point cloud motion energy equation based on the geometric distortion term, the motion difference term, and the motion rigidity constraint term.
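For illustration, one plausible concrete form of such an energy, in the spirit of commonly used embedded-deformation energies and not necessarily the exact formula of the present disclosure, is shown below. Here c_j denotes a motion center point, (R_j, t_j) the rigid motion attached to it, j(p) the motion center associated with point p, M the first matching relationship, G the edge set of the point cloud motion deformation graph, and lambda_1, lambda_2 weighting factors; all of these concrete choices are assumptions of the sketch.

```latex
% Assumed illustrative form of the energy of claim 3 (not fixed by the claim itself)
E(\{R_j, t_j\}) =
    \sum_{(p,\,q)\in\mathcal{M}} \bigl\| R_{j(p)}\,(p - c_{j(p)}) + c_{j(p)} + t_{j(p)} - q \bigr\|_2^2
                                                                  % geometric distortion term
  + \lambda_1 \sum_{(j,\,k)\in\mathcal{G}} \bigl\| R_j\,(c_k - c_j) + c_j + t_j - (c_k + t_k) \bigr\|_2^2
                                                                  % motion difference term
  + \lambda_2 \sum_{j} \bigl\| R_j^{\top} R_j - I \bigr\|_F^2     % motion rigidity constraint term
```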
4. The method of claim 3,
the point matching relationship comprises a forward matching relationship and a backward matching relationship, wherein the forward matching relationship represents the point matching relationship from the reference point cloud frame to the point cloud frame to be estimated, and the backward matching relationship represents the point matching relationship from the point cloud frame to be estimated to the reference point cloud frame.
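As a minimal sketch of such bidirectional matching, assuming nearest-neighbour matching and scipy's k-d tree (neither of which is prescribed by the claim):

```python
import numpy as np
from scipy.spatial import cKDTree

def bidirectional_matching(reference: np.ndarray, to_estimate: np.ndarray):
    """Forward and backward point matching between two (N, 3) point arrays."""
    # forward: for each (motion-compensated) reference point, its match in the frame to be estimated
    forward = cKDTree(to_estimate).query(reference)[1]
    # backward: for each point of the frame to be estimated, its match in the reference frame
    backward = cKDTree(reference).query(to_estimate)[1]
    return forward, backward
```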
5. The method according to claim 1, wherein the iteratively solving the point cloud motion energy equation to determine a sparse non-rigid motion field corresponding to the motion center point, and interpolating the sparse non-rigid motion field to determine a fine non-rigid motion field corresponding to the reference point cloud frame, specifically comprises:
iteratively solving the point cloud motion energy equation, determining a target non-rigid motion field minimizing the point cloud motion energy equation, and taking the target non-rigid motion field as a sparse non-rigid motion field corresponding to the motion center point;
processing the sparse non-rigid motion field by an interpolation method to estimate motion vectors corresponding to other points except the motion center point in the reference point cloud frame;
and combining the motion vector with the sparse non-rigid motion field to determine a fine non-rigid motion field corresponding to the reference point cloud frame.
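The claim only requires some interpolation method; as one hedged possibility, inverse-distance weighting over the k nearest motion centers could be used, as sketched below (k, the weighting scheme, and numpy are assumptions of the sketch).

```python
import numpy as np

def densify_motion_field(points, centers, sparse_motions, k=4, eps=1e-8):
    """points: (N, 3) reference points; centers: (M, 3); sparse_motions: (M, 3) center motion vectors."""
    fine = np.zeros_like(points, dtype=float)
    for i, p in enumerate(points):
        d = np.linalg.norm(centers - p, axis=1)
        nearest = np.argsort(d)[:k]                      # k nearest motion centers
        w = 1.0 / (d[nearest] + eps)                     # inverse-distance weights
        fine[i] = (w[:, None] * sparse_motions[nearest]).sum(axis=0) / w.sum()
    return fine                                          # one estimated motion vector per point
```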
6. The method according to claim 1, wherein the determining a second matching relationship between the point cloud frame to be estimated and the reference motion compensation frame, constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relationship, and iteratively estimating the inter-frame rate distortion function to determine the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame, specifically comprises:
respectively dividing the point cloud frame to be estimated and the reference motion compensation frame into a plurality of motion prediction blocks in the same dividing mode;
determining the inter-block matching relationship between the point cloud frame to be estimated and the corresponding motion prediction block in the reference motion compensation frame, wherein the second matching relationship is formed by all the inter-block matching relationships;
constructing an inter-block rate distortion function between the corresponding motion prediction blocks based on the inter-block matching relationship, wherein the inter-frame rate distortion function is formed by all the inter-block rate distortion functions;
calculating a motion estimation vector that minimizes the rate distortion cost corresponding to the inter-block rate distortion function, and taking the motion estimation vector as the inter-block point cloud motion of the motion prediction block between the point cloud frame to be estimated and the reference point cloud frame;
and combining all the inter-block point cloud motions to form the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
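As an illustration of the inter-block rate distortion cost, a Lagrangian form D + lambda * R with placeholder distortion and rate models (assumptions of this sketch, not terms fixed by the claim) could look as follows:

```python
import numpy as np
from scipy.spatial import cKDTree

def block_rd_cost(block, compensated_block, motion_vector, lam=1.0):
    """Rate distortion cost of one candidate motion vector for a pair of matched blocks."""
    moved = compensated_block + motion_vector               # apply the candidate motion
    # distortion: symmetric nearest-neighbour geometric error between the two blocks
    d_forward = cKDTree(moved).query(block)[0].mean()
    d_backward = cKDTree(block).query(moved)[0].mean()
    distortion = d_forward + d_backward
    rate = np.abs(np.asarray(motion_vector)).sum()          # crude stand-in for coding bits
    return distortion + lam * rate
```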
7. An apparatus for estimating a point cloud motion, comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a point cloud frame to be estimated and determining a reference point cloud frame corresponding to the point cloud frame to be estimated;
the motion deformation map building module is used for sampling a plurality of motion center points in the reference point cloud frame and building a point cloud motion deformation map reflecting the motion association relation among the motion center points on the basis of the motion center points;
the energy equation building module is used for determining a first matching relation between the reference point cloud frame and the point cloud frame to be estimated; according to the first matching relation and the point cloud motion deformation map, constructing a point cloud motion energy equation between the reference point cloud frame and the point cloud frame to be estimated;
the non-rigid motion field determining module is used for solving the point cloud motion energy equation in an iteration mode so as to determine a sparse non-rigid motion field corresponding to the motion center point; interpolating the sparse non-rigid motion field, and determining a fine non-rigid motion field corresponding to the reference point cloud frame;
a reference frame motion compensation module, configured to compensate the reference point cloud frame based on the fine non-rigid motion field to obtain a reference motion compensation frame corresponding to the reference point cloud frame;
the local motion estimation module is used for determining a second matching relation between the point cloud frame to be estimated and the reference motion compensation frame; constructing an inter-frame rate distortion function between the point cloud frame to be estimated and the reference motion compensation frame based on the second matching relation; and iteratively estimating the inter-frame rate distortion function, and determining the point cloud motion between the point cloud frame to be estimated and the reference point cloud frame.
8. The apparatus of claim 7, wherein the energy equation building block is specifically configured to:
aiming at each point in the reference point cloud frame, performing motion compensation on the point according to the corresponding motion center point, and determining a motion compensation point corresponding to the point;
determining the point matching relationship between the motion compensation point and the corresponding point in the point cloud frame to be estimated, and forming the first matching relationship by all the point matching relationships;
according to the first matching relation, constructing a geometric distortion item reflecting geometric distortion between the reference point cloud frame and the point cloud frame to be estimated;
determining a motion association relation between the motion central points according to the point cloud motion deformation graph, and constructing a motion difference item reflecting motion difference between the motion central points according to the motion association relation;
configuring a corresponding motion rigid constraint item for each motion central point;
and constructing the point cloud motion energy equation based on the geometric distortion term, the motion difference term, and the motion rigidity constraint term.
9. An electronic device, comprising: processor, memory and bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of estimating a point cloud motion of any of claims 1 to 6.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, performs the steps of the method for estimating a point cloud motion of any one of claims 1 to 6.
CN202210806269.3A 2022-07-08 2022-07-08 Point cloud motion estimation method and device, electronic equipment and storage medium Pending CN115082512A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210806269.3A CN115082512A (en) 2022-07-08 2022-07-08 Point cloud motion estimation method and device, electronic equipment and storage medium
PCT/CN2022/136842 WO2024007523A1 (en) 2022-07-08 2022-12-06 Point cloud motion estimation method and apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210806269.3A CN115082512A (en) 2022-07-08 2022-07-08 Point cloud motion estimation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115082512A true CN115082512A (en) 2022-09-20

Family

ID=83259806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210806269.3A Pending CN115082512A (en) 2022-07-08 2022-07-08 Point cloud motion estimation method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115082512A (en)
WO (1) WO2024007523A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024007523A1 (en) * 2022-07-08 2024-01-11 北京大学深圳研究生院 Point cloud motion estimation method and apparatus, electronic device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9235928B2 (en) * 2012-01-24 2016-01-12 University Of Southern California 3D body modeling, from a single or multiple 3D cameras, in the presence of motion
CN108711185B (en) * 2018-05-15 2021-05-28 清华大学 Three-dimensional reconstruction method and device combining rigid motion and non-rigid deformation
MX2021003841A (en) * 2018-10-02 2021-12-10 Huawei Tech Co Ltd Motion estimation using 3d auxiliary data.
CN113538667B (en) * 2021-09-17 2021-12-24 清华大学 Dynamic scene light field reconstruction method and device
CN115082512A (en) * 2022-07-08 2022-09-20 北京大学深圳研究生院 Point cloud motion estimation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2024007523A1 (en) 2024-01-11

Similar Documents

Publication Publication Date Title
CN111325851B (en) Image processing method and device, electronic equipment and computer readable storage medium
Süßmuth et al. Reconstructing animated meshes from time‐varying point clouds
WO2018009473A1 (en) Motion capture and character synthesis
KR20230127313A (en) 3D reconstruction and related interactions, measurement methods and related devices and devices
WO2022021309A1 (en) Method and apparatus for establishing model, electronic device, and computer readable storage medium
Weingarten et al. A fast and robust 3D feature extraction algorithm for structured environment reconstruction
GB2520613A (en) Target region fill utilizing transformations
AU2014203124A1 (en) Stereo-motion method of three-dimensional (3-D) structure information extraction from a video for fusion with 3-D point cloud data
AU2008200277A1 (en) Methodology for 3D scene reconstruction from 2D image sequences
CN110276768B (en) Image segmentation method, image segmentation device, image segmentation apparatus, and medium
CN106952342B (en) Point cloud based on center of gravity Voronoi subdivision uniforms method
WO2020062472A1 (en) Processing method for triangular mesh model, processing terminal, and storage medium
CN113706713A (en) Live-action three-dimensional model cutting method and device and computer equipment
CN115082512A (en) Point cloud motion estimation method and device, electronic equipment and storage medium
CN110889349A (en) VSLAM-based visual positioning method for sparse three-dimensional point cloud chart
WO2023024393A1 (en) Depth estimation method and apparatus, computer device, and storage medium
CN113706587B (en) Rapid point cloud registration method, device and equipment based on space grid division
JP2023540577A (en) Video transfer method, device, equipment, storage medium, and computer program
CN115222889A (en) 3D reconstruction method and device based on multi-view image and related equipment
CN114785998A (en) Point cloud compression method and device, electronic equipment and storage medium
CN112712044B (en) Face tracking method and device, electronic equipment and storage medium
CN110706332B (en) Scene reconstruction method based on noise point cloud
EP4083924A1 (en) Nearest neighbor search method, apparatus, device, and storage medium
CN116758219A (en) Region-aware multi-view stereo matching three-dimensional reconstruction method based on neural network
CN114782564B (en) Point cloud compression method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination