CN115994849A - Three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling - Google Patents


Info

Publication number: CN115994849A
Authority: CN (China)
Prior art keywords: point cloud, feature, dense, module, matrix
Legal status: Granted
Application number: CN202211299642.7A
Other languages: Chinese (zh)
Other versions: CN115994849B
Inventors: Wei Mingqiang (魏明强), Ma Mengjiao (马梦姣)
Current assignee: Nanjing University of Aeronautics and Astronautics
Original assignee: Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202211299642.7A
Publication of CN115994849A
Application granted; publication of CN115994849B
Legal status: Active

Abstract

The invention discloses a three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling, comprising the following steps: scan the surface of an object with a three-dimensional laser scanner to obtain a sparse point cloud model; input the sparse point cloud model into a point cloud feature extraction module to extract a point cloud feature matrix; input that matrix into a point cloud feature expansion module for channel expansion to obtain a dense point cloud feature matrix; map the dense features to a point cloud in coordinate space through a three-dimensional coordinate reconstruction module; estimate the normal vectors of the dense point cloud by PCA (principal component analysis); perform Poisson reconstruction on the dense point cloud with normal vectors to obtain a high-quality triangular mesh model; construct a Laplacian matrix, rearrange the vertex order of the mesh model according to its eigendecomposition, embed the digital watermark in vertex order from low frequency to high frequency, and extract the watermark sequence in the embedding order. The method addresses missing multi-scale features and lost local detail in point cloud up-sampling, as well as the poor robustness of three-dimensional model digital watermarks.

Description

Three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling
Technical Field
The invention relates to the technical field of up-sampling and digital watermarking of three-dimensional point cloud models, and in particular to a three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling.
Background
With the rapid development and popularization of three-dimensional scanning equipment, three-dimensional data has become an important way to acquire and present information in industrial production and daily life. 3D point cloud data is a research hotspot in computer three-dimensional vision because it is flexible in form and easy to acquire. However, limited by hardware and by interference in the acquisition environment, the raw point cloud data collected directly by a sensor is usually sparse and incomplete and cannot accurately represent the geometric characteristics of a three-dimensional model; obtaining higher-quality point cloud data through point cloud up-sampling is therefore necessary for downstream tasks such as point cloud segmentation and recognition. In addition, three-dimensional digital models spread widely over the Internet and other channels, so effectively protecting the copyright of three-dimensional models against infringement has become an urgent problem. Digital watermarking plays an important role in copyright protection of geospatial data, and three-dimensional digital watermarking has become a primary candidate technology for copyright protection of point cloud data. It is therefore significant to study how to obtain high-quality point clouds through deep-learning-based up-sampling while ensuring their security and copyright attribution through digital watermarking.
In existing research work, point cloud up-sampling networks generally cannot extract the multi-scale features of a point cloud well and lack local detail, while digital watermarking algorithms for three-dimensional point clouds suffer from small embedding capacity and poor robustness to various attacks because point cloud data lacks a regular topological structure, and thus cannot meet the requirements of three-dimensional watermarking. Graph convolution can process non-Euclidean data by constructing a graph structure and aggregating graph information, and is well suited to point cloud processing given the disorder and irregularity of point clouds.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling. Up-sampling is based on graph convolution, which addresses missing multi-scale features and lost local detail in point cloud up-sampling; a frequency-domain digital watermark is then applied to the mesh model reconstructed from the dense point cloud, which addresses the poor robustness of three-dimensional model digital watermarks.
In order to achieve the technical purpose, the invention adopts the following technical scheme: a three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling specifically comprises the following steps:
step 1, scanning the surface of an object by using a three-dimensional laser scanner to obtain a three-dimensional point cloud of the surface of the object, and obtaining a sparse point cloud model;
step 2, constructing a point cloud up-sampling network based on graph convolution, wherein the point cloud up-sampling network consists of a point cloud feature extraction module, a point cloud feature expansion module and a three-dimensional coordinate reconstruction module;
step 3, inputting the sparse point cloud model into a point cloud feature extraction module to extract a point cloud feature matrix;
step 4, inputting the point cloud feature matrix into a point cloud feature expansion module for channel expansion to obtain a dense point cloud feature matrix;
step 5, the dense point cloud feature matrix corresponds to the point cloud in the coordinate space through a three-dimensional coordinate reconstruction module, and point cloud up-sampling from sparse point cloud to dense point cloud is completed;
step 6, estimating normal vectors of the dense point cloud by PCA (principal component analysis), and performing Poisson reconstruction on the dense point cloud with normal vectors to obtain a high-quality triangular mesh model corresponding to the sparse point cloud model;
and 7, constructing a Laplacian matrix according to the structure of the three-dimensional mesh model, rearranging the vertex order of the mesh model according to the eigendecomposition of the Laplacian matrix, embedding the digital watermark in vertex order from low frequency to high frequency, and extracting the watermark sequence in the embedding order.
Further, the point cloud feature extraction module is formed by sequentially connecting KNN, a graph convolutional neural network and a feature extractor, the feature extractor comprising: a first bottleneck layer, a first dense graph convolution module A, a second dense graph convolution module B and a global pooling module, wherein the first bottleneck layer is connected through three parallel branches to the input ends of the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module respectively, and the output ends of the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module are merged and skip-connected with the preliminary feature map obtained by the graph convolutional neural network.
Further, step 3 comprises the following sub-steps:
step 3.1, constructing a graph of the sparse point cloud model through KNN, and inputting the graph into a graph convolutional neural network to perform one-layer graph convolution to obtain a preliminary feature graph;
step 3.2, inputting the preliminary feature map into the first bottleneck layer in the feature extractor for compression to obtain compressed features;
step 3.3, inputting the compressed features in parallel into the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module through three parallel branches, merging the three branches, and outputting a point cloud multi-scale feature map;
and 3.4, skip-connecting the point cloud multi-scale feature map with the preliminary feature map, and outputting a point cloud optimization feature matrix of size N×C.
Further, the dilation (hole convolution) rate of the first dense graph convolution module A is set to 1, and the dilation rate of the second dense graph convolution module B is set to 2.
Further, the point cloud characteristic expansion module is formed by sequentially connecting a second bottleneck layer, an up-sampling module and a characteristic compression layer.
Further, step 4 comprises the following sub-steps:
step 4.1, inputting the point cloud optimization feature matrix with the size of N multiplied by C into a second bottleneck layer to obtain a compressed sparse feature matrix;
step 4.2, inputting the compressed sparse feature matrix into the up-sampling module, which performs one layer of graph convolution to generate a dense feature matrix:

$$\tilde{F}^{l+1} = \mathrm{RS}\big(W^{l+1} F^{l} + b^{l+1}\big)$$

wherein $F^{l}$ represents the point cloud optimization feature matrix corresponding to the l-th layer graph, $W^{l+1}$ and $b^{l+1}$ both represent the learnable parameters of the (l+1)-th layer graph used for channel expansion, $\mathrm{RS}(\cdot)$ converts a matrix of size N×rC into a matrix of size rN×C, r is the up-sampling rate, and $\tilde{F}^{l+1}$ is the dense feature matrix of the (l+1)-th layer graph;
step 4.3, compressing the dense feature matrix into a dense point cloud feature matrix of size rN×C′ through the feature compression layer, using two sets of MLPs.
Further, in step 5, the three-dimensional coordinate reconstruction module is composed of MLPs, and maps the dense point cloud feature matrix of size rN×C′ to a point cloud of rN points (rN×3) in coordinate space, completing the point cloud up-sampling from sparse point cloud to dense point cloud.
Further, the number of vertices of the high-quality triangular mesh model M is rN.
Further, step 7 comprises the following sub-steps:
step 7.1, constructing an undirected graph with rN vertices from the vertex and face attributes of the high-quality triangular mesh model M, and calculating the Laplacian matrix $L = D - A$, wherein D is the degree matrix of the graph and A is the adjacency matrix of the graph;
step 7.2, performing eigenvalue decomposition on the Laplacian matrix to obtain rN eigenvalues and eigenvectors $w_i$ ($1 \le i \le rN$), and normalizing each eigenvector $w_i$ to obtain the unit eigenvectors $e_i$ ($1 \le i \le rN$);
step 7.3, projecting the vertex coordinates of the high-quality triangular mesh model onto the unit eigenvectors to generate rN mesh spectral coefficient vectors $r_i = (r_{s,i}, r_{t,i}, r_{u,i})$ ($1 \le i \le rN$), wherein $r_{s,i}$, $r_{t,i}$ and $r_{u,i}$ are the spectral coefficient components on the s, t and u coordinate axes of the mesh spectral domain, respectively;
step 7.4, rearranging the vertex order of the three-dimensional mesh model and embedding the watermark data $a = (a_1, \ldots, a_j, \ldots, a_m)$ into the mesh spectral coefficient vectors in order of eigenvalues from small to large:

$$\hat{r}_{f,i} = r_{f,i} + \alpha\, b'_i\, p_i$$

wherein $r_{f,i}$ is the spectral coefficient component on coordinate axis f of the mesh spectral domain, f being any one of the s, t and u axes; $p_i \in \{-1, 1\}$ is a known generated pseudo-random number sequence; $\alpha$ is the modulation amplitude, $\alpha > 0$; $b'_i$ is a sequence coefficient related to the chip rate, with $b'_i = -1$ when $b_i = 0$ and $b'_i = 1$ when $b_i = 1$; $b = (b_1, \ldots, b_i, \ldots, b_{mc})$ is a sequence related to the chip rate, with $b_i = a_j$ for $j \cdot c \le i < (j+1) \cdot c$, where c is the chip rate, $b_i \in \{0, 1\}$ and j indexes the watermark bits; $\hat{r}_{f,i}$ is the spectral coefficient component on axis f after the watermark data is embedded;
step 7.5, inversely transforming the watermarked spectral coefficient vectors back to the vertex coordinate domain to reconstruct the watermarked triangular mesh model M′:

$$\hat{s} = \sum_{i=1}^{rN} \hat{r}_{s,i}\, e_i, \qquad \hat{t} = \sum_{i=1}^{rN} \hat{r}_{t,i}\, e_i, \qquad \hat{u} = \sum_{i=1}^{rN} \hat{r}_{u,i}\, e_i$$

wherein $\hat{s}$, $\hat{t}$ and $\hat{u}$ are the watermarked vertex coordinate vectors on the three axes;
step 7.6, solving the Laplacian matrix of the watermarked triangular mesh model M′ and constructing a min-heap from (eigenvalue, vertex index) key-value pairs ordered by eigenvalue;
step 7.7, repeatedly taking the minimum from the heap to obtain the vertex index corresponding to the smallest remaining eigenvalue, extracting the watermark information in order of eigenvalues from small to large, and reassigning the 3 consecutive bits corresponding to each index to the corresponding position of a new watermark sequence matrix in raster-scan order;
and 7.8, after this is finished, converting the watermark sequence matrix into a one-dimensional watermark sequence in raster order and extracting the corresponding watermark information.
Compared with the prior art, the invention has the following beneficial effects: the method constructs a point cloud up-sampling network based on dynamic graph convolution; graph convolution can flexibly process non-Euclidean data and learn local semantics, preserving local features and geometric structure while expanding the number of points, which yields a high-quality point cloud up-sampling result and effectively increases the information embedding capacity of the three-dimensional model watermark. Meanwhile, the invention improves the frequency-domain three-dimensional digital watermarking algorithm, effectively enhancing the robustness of the watermark against smoothing attacks by changing the embedding order of the watermark information.
Drawings
FIG. 1 is a flow chart of a three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling;
FIG. 2 is a block diagram of the point cloud up-sampling network based on a graph convolutional neural network;
FIG. 3 is a schematic diagram of a point cloud feature extractor architecture;
fig. 4 is a flow chart of frequency domain based three-dimensional grid digital watermark embedding and extraction.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling, which specifically includes the following steps:
step 1, scanning the surface of an object by using a three-dimensional laser scanner to obtain a three-dimensional point cloud of the surface of the object, and obtaining a sparse point cloud model;
In step 2, a point cloud up-sampling network based on graph convolution is constructed, as shown in fig. 2; the network comprises a point cloud feature extraction module, a point cloud feature expansion module and a three-dimensional coordinate reconstruction module. Graph convolution can flexibly process non-Euclidean data and learn local semantics, preserving local features and geometric structure while expanding the number of points, which yields a high-quality up-sampling result and effectively increases the information embedding capacity of the three-dimensional model watermark.
The point cloud feature extraction module in the invention is formed by sequentially connecting KNN, a graph convolutional neural network and a feature extractor. As shown in fig. 3, the feature extractor comprises: a first bottleneck layer, a first dense graph convolution module A, a second dense graph convolution module B and a global pooling module; the first bottleneck layer is connected through three parallel branches to the input ends of the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module respectively, and the output ends of the three are merged and then skip-connected with the output of the graph convolutional neural network. Both the first dense graph convolution module A and the second dense graph convolution module B adopt a dense connection mechanism that connects all layers to each other, enabling efficient feature reuse between layers; the (l+1)-th layer graph is obtained as:

$$F^{l+1} = \mathcal{C}\big(\mathcal{G}(F^{0}),\ \mathcal{G}(F^{l})\big)$$

wherein $F^{0}$ represents the compressed feature map, $F^{l}$ represents the l-th layer graph, $F^{l+1}$ represents the (l+1)-th layer graph, $\mathcal{G}(F^{0})$ is the information extracted by the aggregation function from the vertex neighborhoods of the compressed feature map, $\mathcal{G}(F^{l})$ is the information extracted by the aggregation function from the vertex neighborhoods of the l-th layer graph, $\mathcal{C}(\cdot)$ connects the input graph $F^{0}$ and all intermediate output layers, and $\mathcal{G}(\cdot)$ represents the graph convolution operation, i.e., extracting information from the vertex neighborhood and updating the vertex information.
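As an illustration, the dense connection pattern described above can be sketched with a toy max-aggregation graph convolution. This is a minimal sketch with random stand-in features and neighbor indices; the actual network's aggregation function and learned weights are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, C = 8, 3, 4
f0 = rng.normal(size=(N, C))               # compressed feature map F^0
nbrs = rng.integers(0, N, size=(N, k))     # stand-in k-NN neighbor indices

def gconv(f, nbrs):
    # G(.): extract information from each vertex neighborhood (max
    # aggregation here) and update the vertex feature by concatenation
    return np.concatenate([f, f[nbrs].max(axis=1)], axis=1)

f1 = gconv(f0, nbrs)                       # first intermediate layer, (N, 2C)
# C(.): merge the neighborhood information of F^0 and F^l with the input
# graph and all intermediate output layers (channels grow with depth)
f2 = np.concatenate([gconv(f0, nbrs), gconv(f1, nbrs), f0, f1], axis=1)
print(f1.shape, f2.shape)
```

The growing channel count is the point of the dense connection: each layer sees every earlier layer's output, so features are reused rather than recomputed.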
Step 3, inputting the sparse point cloud model into a point cloud feature extraction module to extract a point cloud feature matrix; the method specifically comprises the following substeps:
step 3.1, constructing a graph of the sparse point cloud model through KNN, and inputting the graph into a graph convolutional neural network to perform one-layer graph convolution to obtain a preliminary feature graph;
step 3.2, inputting the preliminary feature map into the first bottleneck layer in the feature extractor for compression to obtain compressed features, reducing the amount of computation;
step 3.3, inputting the compressed features in parallel into the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module through three parallel branches, merging the three branches, and outputting a point cloud multi-scale feature map;
and 3.4, skip-connecting the point cloud multi-scale feature map with the preliminary feature map, and outputting a point cloud optimization feature matrix of size N×C.
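The graph construction of step 3.1 can be sketched with a brute-force k-NN, including a dilated variant consistent with the dilation rates 1 and 2 of the two dense graph convolution modules. This is a hypothetical illustration; the patent does not fix the value of k or the exact dilation scheme.

```python
import numpy as np

def knn_graph(points, k):
    """Brute-force k-nearest-neighbour indices for an (N, 3) point cloud."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point itself
    return np.argsort(d, axis=1)[:, :k]    # (N, k)

def dilated_knn(points, k, rate):
    """Dilated neighbourhood: every `rate`-th neighbour among the k*rate
    nearest, one common reading of a hole-convolution rate on graphs."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k * rate:rate]

pts = np.random.default_rng(1).normal(size=(16, 3))
g1 = knn_graph(pts, 4)          # rate-1 neighbourhood (module A)
g2 = dilated_knn(pts, 4, 2)     # rate-2 neighbourhood (module B)
print(g1.shape, g2.shape)
```

A production implementation would use a KD-tree rather than the O(N²) distance matrix.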
Step 4, input the point cloud feature matrix into the point cloud feature expansion module for channel expansion to obtain a dense point cloud feature matrix; the point cloud feature expansion module is formed by sequentially connecting a second bottleneck layer, an up-sampling module and a feature compression layer. This specifically comprises the following substeps:
step 4.1, inputting the point cloud optimization feature matrix with the size of N multiplied by C into a second bottleneck layer to obtain a compressed sparse feature matrix;
step 4.2, inputting the compressed sparse feature matrix into the up-sampling module, which performs one layer of graph convolution to generate a dense feature matrix:

$$\tilde{F}^{l+1} = \mathrm{RS}\big(W^{l+1} F^{l} + b^{l+1}\big)$$

wherein $F^{l}$ represents the point cloud optimization feature matrix corresponding to the l-th layer graph, $W^{l+1}$ and $b^{l+1}$ both represent the learnable parameters of the (l+1)-th layer used for channel expansion, $\mathrm{RS}(\cdot)$ converts a matrix of size N×rC into a matrix of size rN×C, r is the up-sampling rate, and $\tilde{F}^{l+1}$ is the dense feature matrix of the (l+1)-th layer graph;
step 4.3, compressing the dense feature matrix into a dense point cloud feature matrix of size rN×C′ through the feature compression layer, using two sets of MLPs.
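Steps 4.2 and 4.3 amount to a channel expansion followed by a reshape from N×rC to rN×C, then a feature compression. A minimal numpy sketch with random stand-in weights (the real $W$, $b$ and MLP weights are learned):

```python
import numpy as np

N, C, r = 6, 4, 4
rng = np.random.default_rng(2)
f = rng.normal(size=(N, C))                # point cloud optimization features

# hypothetical learnable parameters W, b for channel expansion C -> r*C
W = rng.normal(size=(C, r * C))
b = np.zeros(r * C)

expanded = f @ W + b                       # linear part of the graph conv, (N, rC)
dense = expanded.reshape(N * r, C)         # RS(.): (N, rC) -> (rN, C)

# feature compression layer: two stacked maps standing in for the two MLPs
W1 = rng.normal(size=(C, C))
W2 = rng.normal(size=(C, 3 * C))
dense_out = np.maximum(dense @ W1, 0) @ W2  # (rN, C') with C' = 3C here
print(dense.shape, dense_out.shape)
```

The reshape is the key step: each point's rC channels become r new points of C channels, which is how the point count grows by the up-sampling rate.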
In step 5, the dense point cloud feature matrix is mapped to a point cloud in coordinate space through the three-dimensional coordinate reconstruction module, completing the point cloud up-sampling from sparse point cloud to dense point cloud. Specifically, the three-dimensional coordinate reconstruction module is composed of MLPs and maps the dense point cloud feature matrix of size rN×C′ to a point cloud of rN points (rN×3) in coordinate space. The graph convolutional neural network provided by the invention extracts multi-scale point cloud features well, preserving rich local details and fine geometric structure while the point cloud features are expanded.
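The coordinate reconstruction can be sketched as a per-point MLP regressing xyz from the dense features. Random weights stand in for trained parameters; the patent does not specify the MLP's depth or width.

```python
import numpy as np

rN, Cp = 24, 12
rng = np.random.default_rng(4)
feats = rng.normal(size=(rN, Cp))          # dense point cloud feature matrix

# hypothetical two-layer per-point MLP mapping features to coordinates
W1 = rng.normal(size=(Cp, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3));  b2 = np.zeros(3)
xyz = np.maximum(feats @ W1 + b1, 0.0) @ W2 + b2   # (rN, 3) point cloud
print(xyz.shape)
```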
In step 6, normal vectors of the dense point cloud are estimated by PCA (principal component analysis), and Poisson reconstruction is performed on the dense point cloud with normal vectors to obtain the high-quality triangular mesh model M corresponding to the sparse point cloud model; the number of vertices of M is rN.
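The PCA normal estimation can be sketched as taking, for each point, the direction of least variance of its local neighborhood. This is a minimal brute-force version; production code would use a KD-tree and orient the normals consistently before Poisson reconstruction.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Per-point normal = eigenvector of the smallest eigenvalue of the
    local neighbourhood covariance (equivalently, the last right singular
    vector of the centered neighbourhood)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nbrs = np.argsort(d, axis=1)[:, :k]    # neighbourhood includes the point
    normals = np.empty_like(points)
    for i, idx in enumerate(nbrs):
        q = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        normals[i] = vt[-1]                # direction of least variance
    return normals

pts = np.random.default_rng(7).normal(size=(32, 3))
n = estimate_normals(pts)
print(n.shape)
```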
In step 7, a Laplacian matrix is constructed according to the structure of the three-dimensional mesh model, the vertex order of the mesh model is rearranged according to the eigendecomposition of the Laplacian matrix, the digital watermark is embedded in vertex order from low frequency to high frequency, and the watermark sequence is extracted in the embedding order; the up-sampling preprocessing and the changed embedding order of the watermark information together strengthen the watermark. As shown in fig. 4, this specifically comprises the following substeps:
step 7.1, constructing an undirected graph with rN vertices from the vertex and face attributes of the high-quality triangular mesh model M, and calculating the Laplacian matrix $L = D - A$, wherein D is the degree matrix of the graph and A is the adjacency matrix of the graph;
step 7.2, performing eigenvalue decomposition on the Laplacian matrix to obtain rN eigenvalues and eigenvectors $w_i$ ($1 \le i \le rN$), and normalizing each eigenvector $w_i$ to obtain the unit eigenvectors $e_i$ ($1 \le i \le rN$);
step 7.3, projecting the vertex coordinates of the high-quality triangular mesh model onto the unit eigenvectors to generate rN mesh spectral coefficient vectors $r_i = (r_{s,i}, r_{t,i}, r_{u,i})$ ($1 \le i \le rN$), wherein $r_{s,i}$, $r_{t,i}$ and $r_{u,i}$ are the spectral coefficient components on the s, t and u coordinate axes of the mesh spectral domain, respectively;
step 7.4, rearranging the vertex order of the three-dimensional mesh model and embedding the watermark data $a = (a_1, \ldots, a_j, \ldots, a_m)$ into the mesh spectral coefficient vectors in order of eigenvalues from small to large:

$$\hat{r}_{f,i} = r_{f,i} + \alpha\, b'_i\, p_i$$

wherein $r_{f,i}$ is the spectral coefficient component on coordinate axis f of the mesh spectral domain, f being any one of the s, t and u axes; $p_i \in \{-1, 1\}$ is a known generated pseudo-random number sequence; $\alpha$ is the modulation amplitude, $\alpha > 0$; $b'_i$ is a sequence coefficient related to the chip rate, with $b'_i = -1$ when $b_i = 0$ and $b'_i = 1$ when $b_i = 1$; $b = (b_1, \ldots, b_i, \ldots, b_{mc})$ is a sequence related to the chip rate, with $b_i = a_j$ for $j \cdot c \le i < (j+1) \cdot c$, where c is the chip rate, $b_i \in \{0, 1\}$ and j indexes the watermark bits; $\hat{r}_{f,i}$ is the spectral coefficient component on axis f after the watermark data is embedded;
step 7.5, inversely transforming the watermarked spectral coefficient vectors back to the vertex coordinate domain to reconstruct the watermarked triangular mesh model M′:

$$\hat{s} = \sum_{i=1}^{rN} \hat{r}_{s,i}\, e_i, \qquad \hat{t} = \sum_{i=1}^{rN} \hat{r}_{t,i}\, e_i, \qquad \hat{u} = \sum_{i=1}^{rN} \hat{r}_{u,i}\, e_i$$

wherein $\hat{s}$, $\hat{t}$ and $\hat{u}$ are the watermarked vertex coordinate vectors on the three axes;
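Steps 7.1 through 7.5 can be sketched end-to-end on a toy tetrahedron mesh. For brevity this sketch embeds the same bit sequence on all three coordinate axes at once, whereas the method embeds per axis f; alpha, the bits and the pseudo-random sequence are illustrative values.

```python
import numpy as np

# toy mesh: a tetrahedron (4 vertices, 4 triangular faces)
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

n = len(V)
A = np.zeros((n, n))
for tri in F:                              # adjacency matrix from face edges
    for a, c in [(0, 1), (1, 2), (0, 2)]:
        A[tri[a], tri[c]] = A[tri[c], tri[a]] = 1.0
L = np.diag(A.sum(axis=1)) - A             # Laplacian L = D - A

lam, e = np.linalg.eigh(L)                 # eigenvalues ascending; columns of e
                                           # are already unit eigenvectors e_i
R = e.T @ V                                # spectral coefficients r_i per axis

# spread-spectrum embedding: r_hat = r + alpha * b' * p, with b' in {-1, +1}
rng = np.random.default_rng(8)
p = rng.choice([-1.0, 1.0], size=n)        # known pseudo-random sequence
bprime = np.array([1.0, -1.0, 1.0, -1.0])  # b'_i = 2*b_i - 1 for bits b_i
alpha = 0.01
R_hat = R + alpha * (bprime * p)[:, None]

V_hat = e @ R_hat                          # inverse transform to coordinates
print(np.abs(V_hat - V).max())             # distortion stays on the order of alpha
```

Because the eigenvector matrix is orthonormal, projecting the watermarked vertices back to the spectral domain and correlating with p recovers the embedded signs exactly in this noise-free setting.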
step 7.6, solving the Laplacian matrix of the watermarked triangular mesh model M′ and constructing a min-heap from (eigenvalue, vertex index) key-value pairs ordered by eigenvalue;
step 7.7, repeatedly taking the minimum from the heap to obtain the vertex index corresponding to the smallest remaining eigenvalue, extracting the watermark information in order of eigenvalues from small to large, and reassigning the 3 consecutive bits corresponding to each index to the corresponding position of a new watermark sequence matrix in raster-scan order;
and 7.8, after this is finished, converting the watermark sequence matrix into a one-dimensional watermark sequence in raster order and extracting the corresponding watermark information.
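The min-heap ordering of steps 7.6 and 7.7 can be sketched with Python's heapq. The eigenvalues below are hypothetical; in the method they come from the eigendecomposition of M′'s Laplacian.

```python
import heapq

# hypothetical (vertex index -> eigenvalue) pairs; popping the min-heap
# yields vertices from low frequency to high frequency, the same order
# in which the watermark was embedded
eigvals = {0: 0.0, 1: 2.5, 2: 1.3, 3: 4.1}
heap = [(lam, idx) for idx, lam in eigvals.items()]
heapq.heapify(heap)

order = []
while heap:
    _, idx = heapq.heappop(heap)   # vertex with the smallest remaining eigenvalue
    order.append(idx)
print(order)
```

Each popped index then contributes its 3 consecutive bits to the watermark sequence matrix in raster-scan order.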
Taubin smoothing attacks, together with other watermark attacks such as similarity transformation, noise interference and smoothing, are applied respectively to the triangular mesh models reconstructed from the sparse point cloud model and from the dense point cloud model, to verify the robustness of the digital watermark. Because up-sampling increases the number of vertices and faces of the three-dimensional mesh model, its information redundancy is large and the error rate is reduced, so the frequency-domain three-dimensional digital watermarking algorithm of the invention resists smoothing attacks more effectively while preserving the surface detail of the model.
The above is only a preferred embodiment of the present invention, and the scope of the present invention is not limited to the above embodiment, and all technical solutions belonging to the concept of the present invention are within the scope of the present invention. It should be noted that modifications and adaptations to the invention without departing from the principles thereof are intended to be within the scope of the invention as set forth in the following claims.

Claims (9)

1. The three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling is characterized by comprising the following steps of:
step 1, scanning the surface of an object by using a three-dimensional laser scanner to obtain a three-dimensional point cloud of the surface of the object, and obtaining a sparse point cloud model;
step 2, constructing a point cloud up-sampling network based on graph convolution, wherein the point cloud up-sampling network consists of a point cloud feature extraction module, a point cloud feature expansion module and a three-dimensional coordinate reconstruction module;
step 3, inputting the sparse point cloud model into a point cloud feature extraction module to extract a point cloud feature matrix;
step 4, inputting the point cloud feature matrix into a point cloud feature expansion module for channel expansion to obtain a dense point cloud feature matrix;
step 5, the dense point cloud feature matrix corresponds to the point cloud in the coordinate space through a three-dimensional coordinate reconstruction module, and point cloud up-sampling from sparse point cloud to dense point cloud is completed;
step 6, estimating normal vectors of the dense point cloud by PCA (principal component analysis), and performing Poisson reconstruction on the dense point cloud with normal vectors to obtain a high-quality triangular mesh model corresponding to the sparse point cloud model;
and 7, constructing a Laplacian matrix according to the structure of the three-dimensional mesh model, rearranging the vertex order of the mesh model according to the eigendecomposition of the Laplacian matrix, embedding the digital watermark in vertex order from low frequency to high frequency, and extracting the watermark sequence in the embedding order.
2. The method for embedding and extracting a three-dimensional digital watermark based on point cloud upsampling according to claim 1, wherein the point cloud feature extraction module is formed by sequentially connecting KNN, a graph convolutional neural network and a feature extractor, the feature extractor comprising: a first bottleneck layer, a first dense graph convolution module A, a second dense graph convolution module B and a global pooling module, wherein the first bottleneck layer is connected through three parallel branches to the input ends of the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module respectively, and the output ends of the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module are merged and skip-connected with the preliminary feature map obtained by the graph convolutional neural network.
3. The method for embedding and extracting a three-dimensional digital watermark based on point cloud upsampling according to claim 1, wherein step 3 comprises the following sub-steps:
step 3.1, constructing a graph of the sparse point cloud model through KNN, and inputting the graph into a graph convolutional neural network for one layer of graph convolution to obtain a preliminary feature map;
step 3.2, inputting the preliminary feature map into the first bottleneck layer of the feature extractor for compression to obtain compressed features;
step 3.3, feeding the compressed features into the first dense graph convolution module A, the second dense graph convolution module B and the global pooling module through three parallel branches, merging the three branches, and outputting a point cloud multi-scale feature map;
and step 3.4, skip-connecting the point cloud multi-scale feature map with the preliminary feature map, and outputting a point cloud optimization feature matrix of size N × C.
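Steps 3.1–3.4 can be sketched in NumPy as a schematic of the dataflow only. The layer widths, the mean-aggregation "graph convolution", the neighbour subsampling used to mimic dilation, and the merge-by-concatenation are all illustrative assumptions, not the patented network:

```python
import numpy as np

rng = np.random.default_rng(1)
N, C_in, C = 32, 3, 16
points = rng.normal(size=(N, C_in))            # sparse point cloud, N x 3

def knn_graph(x, k):
    """Step 3.1: k-nearest-neighbour graph (indices of k neighbours per point)."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]   # drop self (column 0)

def graph_conv(feat, nbrs, w):
    """One graph-convolution layer: neighbour mean aggregation + linear map."""
    return np.tanh(feat[nbrs].mean(axis=1) @ w)

nbrs = knn_graph(points, k=4)
w0 = rng.normal(size=(C_in, C)) * 0.1
prelim = graph_conv(points, nbrs, w0)          # preliminary feature map, N x C

# step 3.2: bottleneck compression
w_bottle = rng.normal(size=(C, C // 2)) * 0.1
compressed = prelim @ w_bottle                 # N x C/2

# step 3.3: three parallel branches, then merge
wa = rng.normal(size=(C // 2, C)) * 0.1
wb = rng.normal(size=(C // 2, C)) * 0.1
branch_a = graph_conv(compressed, nbrs, wa)          # dense graph conv A
branch_b = graph_conv(compressed, nbrs[:, ::2], wb)  # B: sparser hops (mimics dilation 2)
branch_g = np.tile(compressed.max(axis=0), (N, 1))   # global max pooling per point
merged = np.concatenate([branch_a, branch_b, branch_g], axis=1)

# step 3.4: skip-link the multi-scale map with the preliminary feature map
out = np.concatenate([merged, prelim], axis=1)       # point cloud optimisation features
print(out.shape)
```

In a real implementation each branch would be a trained dense graph convolution block; here the point is only the three-branch split, merge, and skip connection.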
4. The method for embedding and extracting a three-dimensional digital watermark based on point cloud upsampling according to claim 3, wherein the dilation rate of the first dense graph convolution module A is set to 1, and the dilation rate of the second dense graph convolution module B is set to 2.
5. The method for embedding and extracting the three-dimensional digital watermark based on the point cloud upsampling according to claim 1, wherein the point cloud feature expansion module is formed by sequentially connecting a second bottleneck layer, an upsampling module and a feature compression layer.
6. The method for embedding and extracting a three-dimensional digital watermark based on point cloud upsampling according to claim 1, wherein step 4 comprises the following sub-steps:
step 4.1, inputting the point cloud optimization feature matrix of size N × C into the second bottleneck layer to obtain a compressed sparse feature matrix;
step 4.2, inputting the compressed sparse feature matrix into the up-sampling module, which performs one layer of graph convolution to generate a dense feature matrix:

F^(l+1) = RS(W^(l+1) · F^l + b^(l+1))

wherein F^l represents the point cloud optimization feature matrix corresponding to the layer-l graph, W^(l+1) and b^(l+1) both represent learnable parameters of the layer-(l+1) graph used for channel expansion, RS(·) converts a matrix of size N × rC into a matrix of size rN × C, r is the up-sampling rate, and F^(l+1) is the dense feature matrix of the layer-(l+1) graph;
and step 4.3, compressing the dense feature matrix into a dense point cloud feature matrix of size rN × C' through the feature compression layer using two sets of MLPs.
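The channel expansion of step 4.2 and the compression of step 4.3 amount to a linear map from N × C to N × rC, a reshape to rN × C, and a two-layer MLP down to rN × C'. A minimal NumPy sketch, in which the dimensions, the zero bias, and the ReLU between the two MLP layers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, C, r, C_out = 32, 16, 4, 8
F_l = rng.normal(size=(N, C))                 # point cloud optimisation feature matrix

# step 4.2: channel expansion W F + b, then RS(.): reshape N x rC -> rN x C
W = rng.normal(size=(C, r * C)) * 0.1
b = np.zeros(r * C)
expanded = F_l @ W + b                        # N x rC
F_dense = expanded.reshape(r * N, C)          # dense feature matrix, rN x C

# step 4.3: two-layer MLP compresses features to rN x C'
W1 = rng.normal(size=(C, C)) * 0.1
W2 = rng.normal(size=(C, C_out)) * 0.1
dense_out = np.maximum(F_dense @ W1, 0) @ W2  # ReLU between the two MLPs
print(F_dense.shape, dense_out.shape)
```

The reshape is what turns r feature groups per point into r separate dense points, which is why the expansion must widen the channels by exactly the up-sampling rate r.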
7. The three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling according to claim 1, wherein in step 5, the three-dimensional coordinate reconstruction module consists of MLPs, which map the dense point cloud feature matrix of size rN × C' to a point cloud of size rN × 3 in coordinate space, completing the point cloud up-sampling from sparse point cloud to dense point cloud.
8. The method for embedding and extracting a three-dimensional digital watermark based on point cloud upsampling according to claim 1, wherein the number of vertices of the high-quality triangular mesh model M is rN.
9. The method for embedding and extracting a three-dimensional digital watermark based on point cloud upsampling according to claim 1, wherein step 7 comprises the following sub-steps:
step 7.1, constructing an undirected graph with rN vertices according to the vertex and face attributes of the high-quality triangular mesh model M, and calculating the Laplacian matrix L = D − A, wherein D is the degree matrix of the graph and A is the adjacency matrix of the graph;
step 7.2, carrying out eigenvalue decomposition on the Laplacian matrix to obtain rN eigenvalues and eigenvectors w_i (1 ≤ i ≤ rN), and normalizing each eigenvector w_i to obtain the unit eigenvector e_i (1 ≤ i ≤ rN);
step 7.3, projecting the vertex coordinates of the high-quality triangular mesh model onto the unit eigenvectors to generate rN mesh spectral coefficient vectors r_i = (r_{s,i}, r_{t,i}, r_{u,i}) (1 ≤ i ≤ rN), wherein r_{s,i}, r_{t,i} and r_{u,i} are the components of the mesh spectral coefficient vector on the s, t and u coordinate axes of the mesh spectral domain, respectively;
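Steps 7.1–7.3 (graph Laplacian, eigendecomposition, projection of vertex coordinates onto the eigenvectors) can be sketched on a toy mesh. The 4-vertex tetrahedron and the variable names are illustrative assumptions:

```python
import numpy as np

# toy 'mesh': a tetrahedron, every vertex connected to every other vertex
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

n = len(V)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A                                    # step 7.1: Laplacian matrix L = D - A

# step 7.2: eigendecomposition; eigh returns orthonormal (unit) eigenvectors
eigvals, E = np.linalg.eigh(L)               # columns e_i, ascending eigenvalues

# step 7.3: mesh spectral coefficient vectors r_i = (r_{s,i}, r_{t,i}, r_{u,i})
R = E.T @ V                                  # n x 3, one coefficient vector per e_i
# since E is orthonormal, E @ R recovers the vertex coordinates exactly
print(np.allclose(E @ R, V))
```

Because the eigenvectors are orthonormal, projection and inverse projection are lossless, which is what makes the spectral domain usable for watermark embedding.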
step 7.4, rearranging the vertex order of the three-dimensional mesh model, and embedding the watermark data a = (a_1, …, a_j, …, a_m) into the mesh spectral coefficient vectors in order of increasing eigenvalue:

r̂_{f,i} = r_{f,i} + α · b′_i · p_i

wherein r_{f,i} is the mesh spectral coefficient vector component on coordinate axis f of the mesh spectral domain, f being any one of the s, t and u coordinate axes; p_i ∈ {−1, 1} is a known generated pseudo-random number sequence; α is the modulation amplitude, α > 0; b′_i = 2b_i − 1 is a sequence coefficient related to the chip rate; b is a sequence related to the chip rate, b = (b_1, …, b_i, …, b_{mc}), with b_i = a_j for j·c ≤ i < (j+1)·c, c being the chip rate, b_i ∈ {0, 1}, and j the index of the watermark bit; r̂_{f,i} is the mesh spectral coefficient vector component on coordinate axis f after embedding the watermark data;
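The spread-spectrum embedding of step 7.4, with chip-rate repetition of the watermark bits, the {0, 1} → {−1, +1} mapping, and pseudo-random modulation, can be sketched as follows. The values of α and c, the toy bit sequence, and the correlation-based recovery check are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([1, 0, 1])                      # watermark bits a_j
c = 4                                        # chip rate
m = len(a)

b = np.repeat(a, c)                          # b_i = a_j for j*c <= i < (j+1)*c
b_prime = 2 * b - 1                          # map {0,1} -> {-1,+1}
p = rng.choice([-1, 1], size=m * c)          # known pseudo-random sequence p_i
alpha = 0.05                                 # modulation amplitude, alpha > 0

r_f = rng.normal(size=m * c)                 # spectral coefficients on one axis f
r_hat = r_f + alpha * b_prime * p            # embedded coefficients r_hat_{f,i}

# recovery by correlation: within each chip window, (r_hat - r_f) * p sums
# to alpha * c * (2*a_j - 1), whose sign yields the bit
corr = ((r_hat - r_f) * p).reshape(m, c).sum(axis=1)
recovered = (corr > 0).astype(int)
print(recovered)                             # -> [1 0 1]
```

Repeating each bit c times (the chip rate) is what buys robustness: a single perturbed coefficient cannot flip a bit, since the decision is a sum over c chips.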
step 7.5, inversely transforming the mesh spectral coefficient vectors embedded with the watermark data back to the vertex coordinate domain, and reconstructing the watermarked triangular mesh model M′:

ŝ = Σ_{i=1}^{rN} r̂_{s,i} · e_i,  t̂ = Σ_{i=1}^{rN} r̂_{t,i} · e_i,  û = Σ_{i=1}^{rN} r̂_{u,i} · e_i

wherein ŝ, t̂ and û are the reconstructed vertex coordinate vectors on the s, t and u coordinate axes, respectively;
step 7.6, solving the Laplacian matrix of the watermarked triangular mesh model M′, and constructing a minimum heap from the key-value pairs of vertex indices ordered by eigenvalue;
step 7.7, taking one value out of the minimum heap at a time to obtain the vertex index corresponding to the smallest remaining eigenvalue, extracting watermark information in order of increasing eigenvalue, and reassigning the 3 consecutive bits corresponding to each index to the corresponding position of a new watermark sequence matrix in raster scanning order;
and step 7.8, after the traversal is finished, converting the watermark sequence matrix into a one-dimensional watermark sequence in raster order, and extracting the corresponding watermark information.
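Steps 7.6–7.8, i.e. a min-heap over (eigenvalue, vertex index) pairs, extraction in order of increasing eigenvalue, and raster reassembly of the watermark sequence, can be sketched with Python's `heapq`. The toy eigenvalues and the 3-bits-per-vertex payload are illustrative assumptions:

```python
import heapq

# toy decomposition result: (eigenvalue, vertex index) key-value pairs
eigenpairs = [(0.9, 2), (0.1, 0), (2.3, 3), (0.4, 1)]
# 3 consecutive watermark bits associated with each vertex index
bits_per_vertex = {0: [1, 0, 1], 1: [0, 1, 1], 2: [0, 0, 1], 3: [1, 1, 0]}

heap = list(eigenpairs)
heapq.heapify(heap)                      # step 7.6: min-heap keyed on eigenvalue

watermark_rows = []
while heap:                              # step 7.7: pop smallest eigenvalue first
    _, idx = heapq.heappop(heap)
    watermark_rows.append(bits_per_vertex[idx])

# step 7.8: flatten the matrix row by row (raster order) into one sequence
watermark = [bit for row in watermark_rows for bit in row]
print(watermark)
```

The heap only fixes the extraction order; the bits themselves would come from demodulating the spectral coefficients of M′ as in step 7.4.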
CN202211299642.7A 2022-10-24 2022-10-24 Three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling Active CN115994849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211299642.7A CN115994849B (en) 2022-10-24 2022-10-24 Three-dimensional digital watermark embedding and extracting method based on point cloud up-sampling


Publications (2)

Publication Number Publication Date
CN115994849A true CN115994849A (en) 2023-04-21
CN115994849B CN115994849B (en) 2024-01-09

Family

ID=85994372


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824064A (en) * 2023-07-14 2023-09-29 湖南大学 Point cloud data model generation method and device, computing equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200394450A1 (en) * 2018-02-11 2020-12-17 Peking University Shenzhen Graduate School An enhanced graph transformation-based point cloud attribute compression method
KR102268676B1 (en) * 2019-12-27 2021-06-23 중앙대학교 산학협력단 3D Point Cloud Generative Adversarial Network Based on Tree Structured Graph Convolutions
LU500265B1 (en) * 2020-05-19 2021-11-19 Univ South China Tech A Method of Upsampling of Point Cloud Based on Deep Learning
CN113674403A (en) * 2021-08-26 2021-11-19 上海交通大学 Three-dimensional point cloud up-sampling method, system, equipment and medium
CN114373099A (en) * 2022-01-05 2022-04-19 上海交通大学 Three-dimensional point cloud classification method based on sparse graph convolution


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ERICSSON: R1-141724, "BCH Enhancements – Turbo Coding vs Convolutional Coding", 3GPP TSG_RAN WG1_RL1, no. 1 *
Zhang Xinliang; Fu Pengfei; Zhao Yunji; Xie Heng; Wang Wanru: "Point cloud data classification and segmentation model fusing graph convolution and differential pooling functions", Journal of Image and Graphics, no. 06 *
Zhu Wei; Sheng Rongjin; Tang Ru; He Defeng: "Point cloud deep learning network based on dynamic graph convolution and spatial pyramid pooling", Computer Science, no. 07 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant