CN113436237B - Efficient measurement system for complex curved surfaces based on Gaussian process transfer learning - Google Patents

Efficient measurement system for complex curved surfaces based on Gaussian process transfer learning

Info

Publication number
CN113436237B
CN113436237B (application CN202110987333.8A)
Authority
CN
China
Prior art keywords
data
module
error
point cloud
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110987333.8A
Other languages
Chinese (zh)
Other versions
CN113436237A (en
Inventor
孙立剑
曹卫强
王军
徐晓刚
虞舒敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202110987333.8A priority Critical patent/CN113436237B/en
Publication of CN113436237A publication Critical patent/CN113436237A/en
Application granted granted Critical
Publication of CN113436237B publication Critical patent/CN113436237B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention relates to an efficient measurement system for complex curved surfaces based on Gaussian process transfer learning, aimed mainly at 2.5D continuous surfaces of random, complex topography. Because the training data set and the test data differ in distribution, a Gaussian process operates on the test data in a low-dimensional implicit space so that its distribution approaches that of the training data set. To address the low measurement efficiency of contact topography sensors, the system combines a Gaussian process with deep-learning-based super-resolution to densify sparse measurement data with high precision, offering high measurement efficiency, high point cloud sampling precision, and faithful recovery of curved-surface detail.

Description

Efficient measurement system for complex curved surfaces based on Gaussian process transfer learning
Technical Field
The invention belongs to the field of computer vision and graphic image processing, and relates to an efficient measurement system for complex curved surfaces based on Gaussian process transfer learning.
Background
The precision and density of a point cloud are directly related to the quality of curved surface reconstruction: a denser point cloud contains more detail and has greater application potential. In practice, however, low-precision equipment can acquire a large amount of data but yields poor point cloud precision, so high-precision equipment is usually adopted. Owing to the limitations of such equipment and the influence of environmental factors, its point cloud acquisition efficiency is low; a sparse sampling strategy is therefore common, with the point cloud subsequently densified by interpolation and similar techniques, but the densification precision is often low. Detection efficiency and the point cloud densification method thus hinder further application of high-precision measurement instruments. With the continuous development of computer vision, and of deep learning in particular, point cloud enhancement methods keep multiplying. Directly adopting a point cloud up-sampling network is computationally complex; since most machined surfaces are 2.5D curved surfaces, projecting them onto a two-dimensional space and exploiting image up-sampling is an effective alternative. Image super-resolution networks up-sample image pixels, raising a low-resolution image to high resolution purely in software; they have been widely applied in image enhancement with good results. However, an image super-resolution network needs a large number of existing data pairs for training, such pairs are often lacking in real scenes, and the data pairs generated by a degradation model often differ in distribution from the input data at inference time, so generalization performance is poor.
Disclosure of Invention
To solve the above technical problems in the prior art, the invention provides an efficient measurement system for complex curved surfaces based on Gaussian process transfer learning. The system transfers the distribution of measured data to the distribution of synthetic data in a low-dimensional implicit space, improving generalization on measured data, and finally obtains high-density, high-precision point cloud data from a small amount of sparse measurement data via an up-sampling technique, thereby recovering detail information and meeting the reconstruction precision requirement. The specific technical scheme is as follows:
a high-efficiency complex surface measuring system based on Gaussian process transfer learning comprises: the system comprises a point cloud self-adaptive sampling module, a curved surface registration and sparse error reconstruction module, a pixelation and normalization module, an encoder module, a Gaussian process processing module, a decoder module, a de-normalization module and a point cloud space mapping module which are sequentially connected; the point cloud self-adaptive sampling module is used for self-adaptively sampling point clouds; the curved surface registration and sparse error reconstruction module is used for carrying out coarse registration and fine registration on the sampled point cloud, comparing the point cloud with a designed curved surface and outputting an error of an actual point cloud, namely a measurement error of the point cloud; the pixelation and normalization module converts the measurement error data of the point cloud into pixel information of an image to obtain a sparse error map, and then normalization processing is carried out; the encoder module receives an input sparse error map, extracts characteristic information and outputs a corresponding implicit vector; the Gaussian process processing module converts the implicit vector into a Gaussian distribution space, adopts square exponential kernel function modeling and guides unpaired data training; the decoder module inputs the characteristic information extracted by the encoder module and outputs an error map of a target resolution; the de-normalization module and the point cloud space mapping module are used for re-mapping the enhanced pixel-based gray image information into a curved surface space, so that the high-precision point cloud after up-sampling is obtained.
Furthermore, the point cloud adaptive sampling module uses prior knowledge of the existing designed curved surface to preferentially acquire regions of larger Gaussian curvature. The acquired points are used to construct an initial Gaussian process model; the geometric profile output by the model is then compared with the designed surface to obtain a reconstruction error, and the reconstruction error together with the uncertainty output by the model serves as the sampling criterion for acquiring subsequent points: the target sampling points lie in regions of larger reconstruction error and uncertainty. Sampling stops once the required number of sampling points and the error tolerance are met.
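The sampling criterion described above can be sketched in a few lines of NumPy for a 1-D profile; the kernel, length scale and the weighting `w` between reconstruction error and uncertainty are illustrative assumptions, not values from the patent:

```python
import numpy as np

def se_kernel(a, b, length=0.5, var=1.0):
    # squared-exponential kernel between two 1-D point sets
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    # standard GP regression posterior mean and variance at query points xq
    K = se_kernel(xs, xs) + noise * np.eye(len(xs))
    Ks = se_kernel(xq, xs)
    mean = Ks @ np.linalg.solve(K, ys)
    var = se_kernel(xq, xq).diagonal() - np.einsum(
        'ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

def next_sample_point(xs, ys, design, xq, w=0.5):
    # sampling criterion: weighted sum of |reconstruction error| vs the
    # design surface and the model's predictive uncertainty
    mean, var = gp_posterior(xs, ys, xq)
    score = w * np.abs(mean - design(xq)) + (1 - w) * np.sqrt(var)
    return xq[np.argmax(score)]
```

Calling `next_sample_point` repeatedly, appending each chosen point to the sampled set, reproduces the sample-until-satisfied loop described in the text.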
Further, the curved surface registration and sparse error reconstruction module performs coarse registration using Gaussian curvature: it computes the cross-correlation of the two groups of Gaussian curvatures, normalizes it, finds the coordinates of the peak point, determines the corresponding positions of the two data groups, and obtains a rigid-body transformation matrix. Fine registration then follows with the iterative closest point method, finally unifying the two data groups into the same coordinate system; the measurement points are compared with the designed curved surface and the errors of the actual measurement points are output.
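As a hedged illustration of the coarse step, the peak of the normalized cross-correlation of two Gaussian-curvature profiles gives the shift between the measured and designed data; the profile shapes and sizes below are invented for the example:

```python
import numpy as np

def coarse_offset(curv_meas, curv_design):
    # normalized cross-correlation of two Gaussian-curvature profiles;
    # the peak location gives the coarse shift between the data sets
    a = (curv_meas - curv_meas.mean()) / (curv_meas.std() + 1e-12)
    b = (curv_design - curv_design.mean()) / (curv_design.std() + 1e-12)
    xc = np.correlate(a, b, mode='full')
    # convert the argmax index into a signed lag in samples
    return int(np.argmax(xc)) - (len(b) - 1)
```

The rigid-body transform recovered this way only seeds the fine registration (ICP); it does not replace it.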
Further, the pixelation and normalization module converts the measurement error data of the point cloud into pixel information of the image, the position of the point cloud is a pixel position, the height information of the point cloud is gray information of the image, the value of the position where the point cloud is not measured is set to be 0, and then the sparse error map is normalized.
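A minimal sketch of the pixelation and normalization step, assuming x-y coordinates already normalized to [0, 1) and a square pixel grid (the grid size is an illustrative choice):

```python
import numpy as np

def rasterize_errors(points, errors, grid=64, eps=1e-12):
    # points: (N, 2) x-y positions in [0, 1); errors: (N,) height errors.
    # Point position -> pixel position, height error -> gray value,
    # unmeasured pixels stay at 0, then min-max normalize measured pixels.
    img = np.zeros((grid, grid))
    mask = np.zeros((grid, grid), dtype=bool)
    ij = np.clip((points * grid).astype(int), 0, grid - 1)
    img[ij[:, 1], ij[:, 0]] = errors
    mask[ij[:, 1], ij[:, 0]] = True
    lo, hi = img[mask].min(), img[mask].max()
    norm = np.zeros_like(img)
    norm[mask] = (img[mask] - lo) / (hi - lo + eps)
    return norm, mask
```

The stored bounds `lo`/`hi` are what the de-normalization module would later invert.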
Further, the encoder module adopts 4 convolution modules E_i, i = 0, 1, 2, 3, where E_0 adopts a plurality of residual channel attention modules for extracting effective contour and texture information from sparse features. E_0 consists of 2 serially connected 3 × 3 × 64 × 1 convolution layers and 6 residual channel attention units, where in a 3 × 3 × 64 × 1 convolution layer 3 × 3 denotes the convolution kernel size, 64 the number of convolution kernels, and the last digit the stride of the kernel;
each residual channel attention unit comprises a residual unit and a channel attention unit: the features of the input image are extracted by the residual unit, fed into the channel attention unit to obtain a channel calibration coefficient vector β, and β recalibrates the input features of the channel attention unit to form the output of the residual channel attention unit;
the residual unit comprises two branches: one branch passes the input sequentially through a 3 × 3 × 64 × 1 convolution, an LReLU nonlinear transformation layer and another 3 × 3 × 64 × 1 convolution, while the other branch passes the input unchanged and adds it directly to the output of the first branch;
the remaining three convolution modules E_i, i = 1, 2, 3, each comprise a stride-1 convolution layer, an activation layer and a stride-2 convolution layer. The features F_i, i = 0, 1, 2, 3, produced after each convolution module E_i finally yield a series of implicit vectors z_s^j, j = 1, …, N, in a low-dimensional space; each sparse data input obtains a corresponding implicit vector.
Further, the channel attention unit comprises a global average pooling layer, a first convolution layer, a ReLU nonlinear transformation layer, a second convolution layer and a Sigmoid nonlinear transformation layer.
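A NumPy sketch of such a channel attention unit, with the two 1 × 1 convolutions written as matrix products on the pooled channel vector; the channel-reduction ratio implied by the weight shapes is an assumption, not a value from the patent:

```python
import numpy as np

def channel_attention(x, w1, w2):
    # x: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) play the
    # roles of the two 1x1 convolution layers of the attention unit
    s = x.mean(axis=(1, 2))                  # global average pooling
    h = np.maximum(w1 @ s, 0.0)              # first conv + ReLU
    beta = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # second conv + Sigmoid
    # beta is the channel calibration coefficient vector; recalibrate input
    return x * beta[:, None, None], beta
```

In the full residual channel attention unit, `x` would be the residual unit's output and the recalibrated map its final output.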
Further, the decoder module comprises decoders D_i, i = 0, 1, 2, 3. The feature information F_3 extracted by convolution module E_3 is input to decoder D_0; the output of D_0 and the output F_2 of convolution module E_2 are input to decoder D_1; the output of D_1 and the output F_1 of convolution module E_1 are input to decoder D_2; the output of D_2 and the feature information F_0 extracted by convolution module E_0 are input to decoder D_3, finally yielding an error map of the target resolution.
Further, the first three decoders D_i, i = 0, 1, 2, in the decoder module each comprise, in sequence, a 3 × 3 × 64 × 1 convolution layer, an LReLU nonlinear transformation layer, two residual units, a 2× up-sampling sub-pixel convolution layer, and an LReLU nonlinear transformation layer; each residual unit comprises two branches, one of which passes the input sequentially through a 3 × 3 × 64 × 1 convolution layer, an LReLU nonlinear transformation layer and a 3 × 3 × 64 × 1 convolution layer, while the other passes the input unchanged and adds it directly to the output of the first branch. The last decoder D_3 comprises a 3 × 1 convolution layer.
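The 2× up-sampling sub-pixel convolution layer rearranges channels into space (pixel shuffle). A minimal NumPy version of that rearrangement, following the usual C·r² → (C, H·r, W·r) convention:

```python
import numpy as np

def pixel_shuffle(x, r=2):
    # x: (C*r*r, H, W) -> (C, H*r, W*r); each group of r*r channels fills
    # one r x r block of the upsampled output
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

A convolution producing C·r² channels followed by this rearrangement is the standard sub-pixel (ESPCN-style) up-sampling step.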
An unpaired-data training method for the above efficient complex-curved-surface measurement system based on Gaussian process transfer learning, wherein the training data comprise: a small amount of error data from actually machined parts, together with a large amount of data pairs synthesized by introducing fractal Brownian motion;
the training specifically comprises the following two stages:
in the first stage, synthetic data pairs are used for training, optimization parameters are obtained through a minimum loss function, then synthetic sparse data are input into a network, and implicit vectors are obtained through a coder module
Figure 292323DEST_PATH_IMAGE018
And the method is used for subsequent unsupervised learning, wherein the loss function expression is as follows:
Figure 100002_DEST_PATH_IMAGE019
Figure 350277DEST_PATH_IMAGE020
indicating the loss of perception of the content,
Figure 534003DEST_PATH_IMAGE021
representing the output of the input after it has passed through the network,
Figure 58525DEST_PATH_IMAGE022
is the corresponding true value information for supervision,
Figure 140751DEST_PATH_IMAGE023
representing a content perceptual loss coefficient;
and in the second stage, actually measured sparse data are input to further refine the model parameters so that they suit the actual conditions; the data are mapped by the encoder module into a low-dimensional implicit space for supervised learning, and the implicit vectors in the low-dimensional implicit space are input into the Gaussian process model together with the implicit vectors of the synthetic sparse data to complete the training.
Further, the second stage specifically comprises:
inputting the actually measured sparse data and further refining the model parameters so that they suit the actual conditions. Because these data have no corresponding high-resolution data and are usually inconsistent with the distribution of the synthetic training data, they are mapped by the encoder module into the low-dimensional implicit space for supervised learning; their implicit vectors in the low-dimensional implicit space are denoted z_r^j, j = 1, …, M. Further, z_r^j, j = 1, …, M, and the implicit vectors z_s^j, j = 1, …, N, of the synthetic sparse data are input together into the Gaussian process model. With z_s as the known quantity, the value of z_r can be predicted by means of the kernel function k:

μ_r = k(z_r, z_s) [k(z_s, z_s) + σ_n² I]⁻¹ z_s

σ_r² = k(z_r, z_r) − k(z_r, z_s) [k(z_s, z_s) + σ_n² I]⁻¹ k(z_s, z_r)

where σ_n² denotes the noise variance, μ_r is the predicted mean corresponding to z_r and serves as the ground truth for supervising the implicit vector z_r, and σ_r² is the prediction variance corresponding to z_r. Because the feature library z_s contains all implicit vectors of the synthetic data pairs, which are numerous and not all close in feature space to the implicit vectors of the actually measured sparse data, a nearest-neighbour method is adopted to select from the feature library z_s the set z_s+ of implicit vectors whose features are closest to z_r: the mean μ_r+ generated from z_s+ is closer to z_r and the variance σ_r+² is smaller. The nearest-neighbour method likewise selects from the features z_s the set z_s− of implicit vectors whose features are farthest from z_r: the mean μ_r− generated from z_s− is farther from z_r and the variance σ_r−² is correspondingly larger.
The loss function at this stage is expressed as:

L_u = ‖z_r − μ_r+‖₂² − ‖z_r − μ_r−‖₂²

The training at this stage must also preserve the effect on the first-stage synthetic data, so the overall loss function is:

L = L_s1 + γ L_u

where L_s1 is the first-stage loss and γ denotes the loss coefficient under the unsupervised training with actually measured sparse data.
The method has the advantages that paired data are generated from fractal Brownian motion and information about actually machined curved surfaces to simulate machining errors, enabling supervised learning on 2.5D curved-surface point clouds; partial actually measured sparse data are then combined to provide semi-supervised learning. Measured and synthetic data are mapped into a low-dimensional implicit space through the encoder and decoder, and the Gaussian process converts this low-dimensional space into a Gaussian distribution space, so real-world data can be converted to a distribution closer to the training data and are less affected by actual noise. Details of the input features are enhanced and the generalization of the network is strengthened, giving good results in up-sampling of actually measured data.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
fig. 2 is a block diagram of a core module consisting of a codec module and a gaussian process module according to the present invention.
Detailed Description
In order to make the objects, technical solutions and technical effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples.
To overcome the lack of data pairs in actual scenes and improve generalization, the proposed efficient complex-curved-surface measurement system based on Gaussian process transfer learning first simulates machining error data with fractal Brownian motion according to the data scale and noise scale of the detection equipment; fractal Brownian motion is very close to the distribution of actual machining error data, and simulated data pairs are generated from it. A number of real measurements are then collected and projected into the low-dimensional implicit space through the codec; in that space the Gaussian process draws the real data distribution toward the synthetic data distribution, transferring the fitting effect on synthetic data to the actual data and thereby improving generalization on actual data.
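A common way to synthesize such fractal-Brownian-motion-like surfaces is spectral synthesis: filtering white noise with a power-law spectrum controlled by a Hurst exponent. The sketch below is illustrative; the grid size, Hurst exponent and unit-peak scaling are assumptions, not the patent's parameters:

```python
import numpy as np

def fbm_surface(n=64, hurst=0.8, seed=0):
    # spectral synthesis: random phases with a 1/f^(H+1) amplitude law
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                             # avoid division by zero at DC
    amp = f ** (-(hurst + 1.0))
    amp[0, 0] = 0.0                           # zero the DC term: zero-mean surface
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    surf = np.fft.ifft2(amp * phase).real
    return surf / np.abs(surf).max()          # scale to unit peak amplitude
```

Rescaling the unit-amplitude surface to the machining-error scale (e.g. tens of micrometers) and sparsely subsampling it would yield one synthetic high/low-resolution data pair.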
As shown in fig. 1, the system includes: the system comprises a point cloud self-adaptive sampling module, a curved surface registration and sparse error reconstruction module, a pixelation and normalization module, an encoder module, a Gaussian process processing module, a decoder module, a de-normalization module and a point cloud space mapping module which are sequentially connected.
The point cloud adaptive sampling module guides the sensor to adaptively sample point clouds. Using prior knowledge of the existing designed curved surface, the module preferentially acquires regions of larger Gaussian curvature, uses the acquired points to construct an initial Gaussian process model, compares the geometric profile output by the model with the designed surface to obtain a reconstruction error, and uses this error together with the uncertainty output by the model as the sampling criterion for subsequent acquisition; sampling stops once the required number of sampling points and the error tolerance are met.
According to common surface geometries, a composite kernel function is used in the Gaussian process model: a combination of one or more of a squared exponential kernel, the Matérn kernel family, a periodic kernel, and a white noise kernel.
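A hedged sketch of one such composite kernel for 1-D inputs; the particular combination and hyperparameters below are illustrative, not the patent's:

```python
import numpy as np

def se(d, l=1.0):
    return np.exp(-0.5 * (d / l) ** 2)          # squared exponential

def matern32(d, l=1.0):
    a = np.sqrt(3.0) * d / l                    # Matern-3/2 member of the family
    return (1.0 + a) * np.exp(-a)

def periodic(d, l=1.0, p=1.0):
    return np.exp(-2.0 * np.sin(np.pi * d / p) ** 2 / l**2)

def white(d, var=1e-4):
    return var * (d == 0.0)                     # white noise on the diagonal

def composite_kernel(x1, x2):
    # example combination: SE for the smooth base shape, Matern-3/2 for
    # rougher detail, plus white noise; a periodic term could be added
    # for surfaces with repeating structure
    d = np.abs(x1[:, None] - x2[None, :])
    return se(d, 2.0) + 0.3 * matern32(d, 0.5) + white(d)
```

Sums and products of valid kernels remain valid kernels, which is what makes such compositions safe to use in the GP model.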
The curved surface registration and sparse error reconstruction module performs coarse registration using Gaussian curvature: it computes the cross-correlation of the two groups of Gaussian curvatures, normalizes it, finds the coordinates of the peak point, determines the corresponding positions of the two data groups, and obtains a rigid-body transformation matrix. Fine registration then follows with the iterative closest point method, finally unifying the two data groups into the same coordinate system; the measurement points are compared with the designed curved surface and the errors of the actual measurement points are output.
The pixelation and normalization module converts the measurement error data of the point cloud into pixel information of an image, the position of the point cloud is a pixel position, the height information of the point cloud is gray information of the image, the value of the position without the measurement point cloud is set to be 0, and then the sparse error map is subjected to normalization processing.
As shown in fig. 2, the encoder module, the Gaussian process processing module and the decoder module are the core modules of the present invention. The encoder module adopts 4 convolution modules E_i, i = 0, 1, 2, 3, where E_0 adopts a plurality of residual channel attention modules for extracting effective contour and texture information from sparse features. E_0 consists of 2 serially connected 3 × 3 × 64 × 1 convolution layers and 6 residual channel attention units, where in a 3 × 3 × 64 × 1 convolution layer 3 × 3 denotes the convolution kernel size, 64 the number of convolution kernels, and the last digit the stride of the kernel. Each residual channel attention unit comprises a residual unit and a channel attention unit: the features of the input image are extracted by the residual unit, fed into the channel attention unit to obtain a channel calibration coefficient vector β, and β recalibrates the input features of the channel attention unit to form the output of the residual channel attention unit. The residual unit comprises two branches: one branch passes the input sequentially through a 3 × 3 × 64 × 1 convolution, an LReLU nonlinear transformation layer and another 3 × 3 × 64 × 1 convolution, while the other branch passes the input unchanged and adds it directly to the output of the first branch. The channel attention unit comprises a global average pooling layer, a first convolution layer, a ReLU nonlinear transformation layer, a second convolution layer and a Sigmoid nonlinear transformation layer. The remaining three convolution modules E_i, i = 1, 2, 3, each comprise a stride-1 convolution layer, an activation layer and a stride-2 convolution layer. The features F_i, i = 0, 1, 2, 3, produced after each convolution module E_i finally yield a series of implicit vectors z_s^j, j = 1, …, N, in a low-dimensional space; each sparse input obtains a corresponding implicit vector.
The Gaussian process processing module converts the implicit vectors into a Gaussian distribution space to guide the training on unpaired data, modeling with a squared exponential kernel function.
The decoder module comprises decoders D_i, i = 0, 1, 2, 3. The feature information F_3 extracted by E_3 is input to decoder D_0; the output of D_0 and the output F_2 of E_2 are input to decoder D_1; the output of D_1 and the output F_1 of E_1 are input to decoder D_2; the output of D_2 and the feature information F_0 extracted by E_0 are input to decoder D_3, finally yielding an error map of the target resolution. The first three decoders D_i, i = 0, 1, 2, in the decoder module each comprise, in sequence, a 3 × 3 × 64 × 1 convolution layer, an LReLU nonlinear transformation layer, two residual units, a 2× up-sampling sub-pixel convolution layer, and an LReLU nonlinear transformation layer; each residual unit comprises two branches, one of which passes the input sequentially through a 3 × 3 × 64 × 1 convolution layer, an LReLU nonlinear transformation layer and a 3 × 3 × 64 × 1 convolution layer, while the other passes the input unchanged and adds it directly to the output of the first branch. The last decoder D_3 comprises a 3 × 1 convolution layer.
In the training stage of the core modules, according to the characteristics of the machining error data of the curved surface, the training data comprise two parts: on the one hand, a small amount of error data from actually machined parts; these data are sparse, have no high-resolution counterparts for supervision, and are small in scale. On the other hand, according to the scale range and noise level of actual machining errors, a much larger volume of data pairs synthesized with fractal Brownian motion is introduced to simulate actual machining errors;
the training is specifically divided into two stages:
In the first stage, the synthetic data pairs are used for training. The loss function comprises the content perceptual loss L_per and the pixel loss L_pix:

L_s1 = L_pix(G(x), y) + λ L_per(G(x), y)

where G(x) denotes the output of the input x after it has passed through the network, y is the corresponding ground-truth information used for supervision, and λ is the content perceptual loss coefficient. The optimized parameters of the first stage are obtained by minimizing the loss function. Once the parameters are determined, the synthetic sparse data are input into the network, and a series of implicit vectors is obtained through the encoder module and stored as z_s for subsequent unsupervised learning;
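The first-stage loss can be sketched as below; `feat` stands in for whatever frozen feature extractor computes the content perceptual term, and the coefficient value is an illustrative assumption:

```python
import numpy as np

def first_stage_loss(pred, target, feat, lam=0.1):
    # pixel loss (L1) plus a content-perceptual term computed in a feature
    # space; `feat` is a placeholder for the frozen perceptual network
    pix = np.abs(pred - target).mean()
    per = ((feat(pred) - feat(target)) ** 2).mean()
    return pix + lam * per
```

Minimizing this over the synthetic pairs yields the first-stage parameters; the measured sparse data never enter this stage.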
In the second stage, actually measured sparse data are input to further refine the model parameters so that they suit the actual conditions. Because these data have no corresponding high-resolution data and are usually inconsistent with the distribution of the synthetic training data, they are mapped by the encoder module into the low-dimensional implicit space for supervised learning; their implicit vectors in the low-dimensional implicit space are denoted z_r^j, j = 1, …, M. Further, z_r^j, j = 1, …, M, and the implicit vectors z_s^j, j = 1, …, N, of the synthetic sparse data are input together into the Gaussian process model. With z_s as the known quantity, the value of z_r can be predicted by means of the kernel function k:

μ_r = k(z_r, z_s) [k(z_s, z_s) + σ_n² I]⁻¹ z_s

σ_r² = k(z_r, z_r) − k(z_r, z_s) [k(z_s, z_s) + σ_n² I]⁻¹ k(z_s, z_r)

where σ_n² denotes the noise variance, with initial value set to 0.005; μ_r is the predicted mean corresponding to z_r and serves as the ground truth for supervising the implicit vector z_r; σ_r² is the prediction variance corresponding to z_r, and the smaller its value the better. Because the feature library z_s contains all implicit vectors of the synthetic data pairs, which are numerous and not all close in feature space to the implicit vectors of the actually measured sparse data, a nearest-neighbour method is adopted to select from the feature library z_s the set z_s+ of implicit vectors whose features are closest to z_r: the mean μ_r+ generated from z_s+ is closer to z_r and the variance σ_r+² is smaller. The nearest-neighbour method likewise selects from the features z_s the set z_s− of implicit vectors whose features are farthest from z_r: the mean μ_r− generated from z_s− is farther from z_r and the variance σ_r−² is correspondingly larger.
The loss function at this stage is expressed as:

L_u = ‖z_r − μ_r+‖₂² − ‖z_r − μ_r−‖₂²

The effect on the first-stage synthetic data must also be guaranteed during second-stage training, so the overall loss function is:

L = L_s1 + γ L_u

where L_s1 is the first-stage loss and γ denotes the loss coefficient under the unsupervised training with actually measured sparse data.
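The latent-space Gaussian process prediction and the nearest/farthest neighbour selection can be sketched as follows; the squared-exponential kernel, length scale, neighbour count and noise value are illustrative assumptions:

```python
import numpy as np

def se_k(A, B, l=1.0):
    # squared-exponential kernel between two sets of latent vectors
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / l**2)

def gp_latent_mean(z_syn, z_real, noise=0.005):
    # predicted mean for each measured latent, using the synthetic latents
    # as both inputs and targets of the GP (self-regression in latent space)
    K = se_k(z_syn, z_syn) + noise * np.eye(len(z_syn))
    return se_k(z_real, z_syn) @ np.linalg.solve(K, z_syn)

def split_neighbours(z_syn, z, k=2):
    # nearest (z_s+) and farthest (z_s-) synthetic latents to a measured z
    d = np.linalg.norm(z_syn - z, axis=1)
    order = np.argsort(d)
    return z_syn[order[:k]], z_syn[order[-k:]]
```

Feeding only the nearest subset into `gp_latent_mean` would give the tighter prediction μ_r+ that supervises z_r, while the farthest subset gives the looser μ_r−.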
The data in the whole training stage is composed in a fixed proportion: the synthetic data pairs account for 60%-80% and the actually measured sparse data accounts for 20%-40%. Training is first carried out on the synthetic data pairs; the sparse data is then substituted in, and fine-tuning is performed in combination with the synthetic data pairs.
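The data composition above can be sketched as a small sampling helper; the function name, pool-size parameter and default 70/30 split are illustrative choices within the stated 60%-80% / 20%-40% ranges:

```python
import random

def build_training_pool(synth_pairs, measured_sparse, pool_size, synth_frac=0.7, seed=0):
    """Draw a mixed training pool: synth_frac of the samples come from the
    synthetic data pairs (60%-80% in the text) and the remainder from the
    actually measured sparse data (20%-40%)."""
    rng = random.Random(seed)
    n_synth = round(pool_size * synth_frac)
    pool = ([("synth", s) for s in rng.choices(synth_pairs, k=n_synth)]
            + [("meas", m) for m in rng.choices(measured_sparse, k=pool_size - n_synth)])
    rng.shuffle(pool)  # interleave both sources for the fine-tuning batches
    return pool
```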
The de-normalization module and the point cloud space mapping module re-map the enhanced pixel-based gray-scale image information to the 2.5D curved surface space, thereby obtaining the up-sampled high-precision point cloud.
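The de-normalization and mapping step can be illustrated as follows, assuming the normalization module recorded min-max bounds and the pixels lie on a regular grid; the function name, `pitch` and `origin` parameters are assumptions for the sketch:

```python
import numpy as np

def error_map_to_points(err_norm, err_min, err_max, x0=0.0, y0=0.0, pitch=1.0):
    """De-normalize a [0, 1] gray error image and map each pixel back to a
    2.5D point: pixel (i, j) -> (x, y) on the surface grid, gray -> error height.
    Returns an (H*W, 3) array of (x, y, error) points."""
    err = err_norm * (err_max - err_min) + err_min   # undo min-max normalization
    H, W = err.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.stack([x0 + xs * pitch, y0 + ys * pitch, err], axis=-1)
    return pts.reshape(-1, 3)
```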
In this embodiment, 4000 groups of data pairs are generated using fractal Brownian motion. The error scale of the data pairs is derived from the machining error of an actual milling machine, with an error peak-to-valley value of about 45 to 55 micrometers, and the measurement noise follows a Gaussian distribution N(0, 0.002²). Image blocks of 64 × 64 are selected as the high-resolution images; the down-sampling rate is set to 5%, and image blocks with 204 actually valid points are taken as the corresponding low-resolution images. A further 1000 groups of actual machining data are obtained through sparse sampling, with unpaired data accounting for 20% of all data. The high/low-resolution image pairs and the sparsely sampled data are divided into a training set, a validation set and a test set. Training uses Adam with β1 = 0.9 and β2 = 0.999; the learning rate is set to 0.0002 for a total of 80 epochs and is multiplied by 0.75 every 20 epochs; the loss coefficients $\lambda_c$ and $\lambda_{un}$ are kept fixed, and the network is updated with a back-propagation strategy. Once the network converges, the trained network model is saved for final inference. For testing, 50 synthetic data maps and 50 actually measured sparse data maps are selected as the test set; the synthetic data has corresponding high-density ground-truth data, while approximate measured ground-truth errors are obtained by dense sampling. The test results are shown in Table 1. For comparison, common interpolation and regression methods (B-spline: Bspline; kriging: Kriging; single-kernel Gaussian process: GP; complex-kernel Gaussian process: CGP; and Unet) were trained and tested on the same data sets. The average PSNR and SSIM of the test images obtained by the invention are higher, as shown in Table 1; the invention achieves good results not only on synthetic data but also on real data. In addition, because the texture information of error data is simple, with none of the richness and complexity of natural pictures, the resulting evaluation indices are very high.
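The fractal-Brownian-motion error surfaces used for the synthetic data pairs could be generated along these lines, via spectral synthesis; this is an illustrative generator, not the patent's exact one, and the Hurst exponent and output scaling are assumptions:

```python
import numpy as np

def fbm_surface(n=64, hurst=0.7, seed=0):
    """Synthesize an n-by-n fractal-Brownian-motion-like error surface by
    filtering white noise with a power-law spectrum |f|^-(hurst+1) and
    inverting the FFT. The result is min-max scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(rng.standard_normal((n, n)))
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                       # avoid division by zero at DC
    amp = f ** (-(hurst + 1.0))
    amp[0, 0] = 0.0                     # drop the DC term -> zero-mean surface
    surf = np.real(np.fft.ifft2(F * amp))
    return (surf - surf.min()) / (surf.max() - surf.min())
```

To mimic the embodiment, the [0, 1] surface would then be rescaled to the 45-55 micrometer peak-to-valley range and perturbed with N(0, 0.002²) measurement noise.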
Table 1. Performance comparison (PSNR/SSIM) of the present invention with other methods on different test data sets, at a sampling rate of 5%

| Data | Ours | Bspline | Kriging | GP | CGP | Unet |
| --- | --- | --- | --- | --- | --- | --- |
| Synthetic data | 51.38/0.8823 | 43.64/0.8461 | 47.16/0.8573 | 48.25/0.8627 | 49.15/0.8692 | 47.43/0.8587 |
| Real data | 51.27/0.8812 | 43.52/0.8453 | 47.03/0.8569 | 48.13/0.8619 | 49.07/0.8687 | 44.53/0.8489 |
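The PSNR figures in Table 1 can be reproduced from an error map and its reconstruction with the standard definition (SSIM is omitted here for brevity); a minimal sketch, where `data_range` is the assumed dynamic range of the normalized maps:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference map and a
    reconstructed map, for data spanning `data_range`."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```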
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way. Although the foregoing has described the practice of the present invention in detail, it will be apparent to those skilled in the art that modifications may be made to the practice of the invention as described in the foregoing examples, or that certain features may be substituted in the practice of the invention. All changes, equivalents and modifications which come within the spirit and scope of the invention are desired to be protected.

Claims (8)

1. A high-efficient measurement system of complex curved surface based on gaussian process migration learning, characterized by comprising: the system comprises a point cloud self-adaptive sampling module, a curved surface registration and sparse error reconstruction module, a pixelation and normalization module, an encoder module, a Gaussian process processing module, a decoder module, a de-normalization module and a point cloud space mapping module which are sequentially connected; the point cloud self-adaptive sampling module is used for self-adaptively sampling point clouds; the curved surface registration and sparse error reconstruction module is used for carrying out coarse registration and fine registration on the sampled point cloud, comparing the point cloud with a designed curved surface and outputting an error of an actual point cloud, namely a measurement error of the point cloud; the pixelation and normalization module converts the measurement error data of the point cloud into pixel information of an image to obtain a sparse error map, and then normalization processing is carried out; the encoder module receives an input sparse error map, extracts characteristic information and outputs a corresponding implicit vector; the Gaussian process processing module converts the implicit vector into a Gaussian distribution space, adopts square exponential kernel function modeling and guides unpaired data training; the decoder module inputs the characteristic information extracted by the encoder module and outputs an error map of a target resolution; the de-normalization module and the point cloud space mapping module are used for re-mapping the enhanced pixel-based gray image information into a curved surface space so as to obtain the high-precision point cloud after up-sampling;
the encoder module adopts 4 convolution modules $E_i$, $i=0,1,2,3$, wherein $E_0$ uses a plurality of residual channel attention blocks to extract valid contour and texture information from the sparse features; $E_0$ is composed of 2 convolution layers of 3 × 3 × 64 × 1 and 6 residual channel attention units connected in series, where 3 × 3 represents the size of the convolution kernels, 64 represents the number of convolution kernels, and the last digit represents the stride of the convolution kernels;
each residual channel attention unit comprises a residual unit and a channel attention unit, the characteristics of an input image are extracted through the residual unit, the characteristics are input into the channel attention unit to obtain a channel calibration coefficient vector beta, and the channel calibration coefficient vector beta and the input characteristics of the channel attention unit are recalibrated to be used as the output of the residual channel attention unit;
the residual error unit comprises two branches, wherein one branch of the residual error unit inputs the input signal through a 3 × 3 × 64 × 1 convolution, an LReLU nonlinear transformation layer and another 3 × 3 × 64 × 1 convolution in sequence, and the other branch of the residual error unit directly adds the input signal with the output of the first branch without any change;
the remaining three convolution modules
Figure 151985DEST_PATH_IMAGE001
i=1,2,3 comprising convolutional layers of one step 1, active layers and convolutional layers of one step 2, passing through each convolutional module
Figure 899099DEST_PATH_IMAGE001
i=0,1,2,3, characterized thereafter
Figure DEST_PATH_IMAGE003
i=0,1,2,3, finally obtaining a series of implicit vectors of low-dimensional space
Figure DEST_PATH_IMAGE004
Figure DEST_PATH_IMAGE005
Each sparse data input obtains a corresponding implicit vector.
2. The system for efficiently measuring the complex curved surface based on the transfer learning of the gaussian process as claimed in claim 1, wherein the point cloud adaptive sampling module selects an area with a large gaussian curvature for preferential acquisition by using prior knowledge of the existing design curved surface, uses a plurality of acquired point clouds for constructing an initial gaussian process model, then compares the geometric profile output by the model with the design curved surface to obtain a reconstruction error, uses the reconstruction error and the uncertainty output by the model as sampling criteria for subsequent point cloud acquisition, and stops sampling until the number of sampling points and the error are met.
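The sampling criterion of claim 2 (reconstruction error plus model uncertainty) can be sketched as a greedy point selector; the function name, the weighted-sum form and the weight `w_err` are illustrative assumptions:

```python
import numpy as np

def next_sample_index(recon_error, uncertainty, sampled, w_err=0.5):
    """Score each candidate probe location by a weighted sum of the
    reconstruction error against the design surface and the GP model
    uncertainty, and return the best not-yet-sampled index."""
    score = (w_err * np.asarray(recon_error, float)
             + (1.0 - w_err) * np.asarray(uncertainty, float))
    score[list(sampled)] = -np.inf      # never revisit an already measured point
    return int(np.argmax(score))
```

In a full loop this selector would be called repeatedly, refitting the Gaussian process after each new measurement, until the point count and error criteria of the claim are met.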
3. The system according to claim 1, wherein the curved surface registration and sparse error reconstruction module performs coarse registration based on gaussian curvature, calculates cross-correlation between the two gaussian curvatures and performs normalization processing, finds a peak point coordinate, determines corresponding positions of the two sets of data, obtains a rigid transformation matrix, performs fine registration by combining a closest point iteration method, unifies the two sets of data into a same coordinate system, compares a measurement point with a designed curved surface, and outputs an error of an actual measurement point.
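The coarse-registration idea of claim 3, locating the peak of the normalized cross-correlation between two Gaussian-curvature signals, can be shown in a 1-D analogue; the FFT-based correlation and circular-shift model are illustrative simplifications (the claim works on 2-D curvature maps and follows up with a rigid transform and closest-point iteration):

```python
import numpy as np

def coarse_shift(ref, moved):
    """Estimate the integer circular shift s such that moved ~ np.roll(ref, s),
    from the peak of the normalized cross-correlation computed via FFT."""
    a = (ref - ref.mean()) / (ref.std() + 1e-12)
    b = (moved - moved.mean()) / (moved.std() + 1e-12)
    xc = np.real(np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))))
    return int(np.argmax(xc))           # peak position = relative offset
```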
4. The system as claimed in claim 1, wherein the pixelation and normalization module converts the measurement error data of the point cloud into pixel information of the image, the position of the point cloud is the pixel position, the height information of the point cloud is the gray information of the image, for the position without the measured point cloud, the value of the position is set to 0, and then the sparse error map is normalized.
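The pixelation and normalization of claim 4 amounts to scattering point errors onto a grid, zero-filling empty pixels and min-max normalizing; a minimal sketch, where the grid `pitch` and `origin` parameters are assumptions:

```python
import numpy as np

def pixelate_errors(points_xy, errors, shape, pitch=1.0, origin=(0.0, 0.0)):
    """Point position -> pixel position, error height -> gray value,
    pixels without a measured point -> 0, then min-max normalize the
    occupied pixels. Returns (image, occupancy mask, (lo, hi) bounds)."""
    errors = np.asarray(errors, float)
    img = np.zeros(shape)
    mask = np.zeros(shape, dtype=bool)
    for (x, y), e in zip(points_xy, errors):
        i = int(round((y - origin[1]) / pitch))   # row index from y
        j = int(round((x - origin[0]) / pitch))   # column index from x
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            img[i, j], mask[i, j] = e, True
    lo, hi = errors.min(), errors.max()
    img[mask] = (img[mask] - lo) / (hi - lo if hi > lo else 1.0)
    return img, mask, (lo, hi)   # bounds are kept for later de-normalization
```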
5. The complex surface efficient measurement system based on gaussian process transfer learning of claim 1, wherein the channel attention unit comprises a global average pooling layer, a first convolution layer, a ReLU nonlinear transformation layer, a second convolution layer and a Sigmoid nonlinear transformation layer.
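The channel attention unit of claim 5 follows the squeeze-and-excitation pattern: global average pooling, two 1×1 convolutions with ReLU and Sigmoid, then per-channel rescaling. A minimal NumPy sketch; the weight shapes (reduction ratio r) are illustrative:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel attention over feat (C, H, W): global average pooling, 1x1 conv
    (w1, shape (C//r, C)) + ReLU, 1x1 conv (w2, shape (C, C//r)) + sigmoid,
    then recalibrate the input channels by the coefficient vector beta."""
    s = feat.mean(axis=(1, 2))                  # squeeze: one value per channel
    h = np.maximum(w1 @ s, 0.0)                 # 1x1 conv on a vector == matmul, + ReLU
    beta = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # sigmoid -> calibration coefficients
    return feat * beta[:, None, None]           # rescale each channel
```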
6. The system of claim 1, wherein the decoder module comprises decoders $D_i$, $i=0,1,2,3$; the feature information $F_3$ extracted by convolution module $E_3$ is input to decoder $D_0$; the output of $D_0$ and the output $F_2$ of convolution module $E_2$ are input to decoder $D_1$; the output of $D_1$ and the output $F_1$ of convolution module $E_1$ are input to decoder $D_2$; the output of $D_2$ and the feature information $F_0$ extracted by convolution module $E_0$ are input to decoder $D_3$, finally obtaining an error map of the target resolution.
7. The system of claim 6, wherein the first three decoders $D_i$, $i=0,1,2$, in the decoder module each comprise, in sequence, a 3 × 3 × 64 × 1 convolutional layer, an LReLU nonlinear transformation layer, two residual units, a 2-times up-sampling sub-pixel convolutional layer, and an LReLU nonlinear transformation layer; said residual unit comprises two branches, one of which passes the input sequentially through a 3 × 3 × 64 × 1 convolutional layer, an LReLU nonlinear transformation layer and a 3 × 3 × 64 × 1 convolutional layer, while the other branch passes the input unchanged and adds it directly to the output of the first branch; and the last decoder $D_3$ comprises a 3 × 3 × 1 convolutional layer.
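The 2-times sub-pixel convolution in claim 7 ends with a pixel-shuffle rearrangement of channels into space; a minimal sketch of that rearrangement (the convolution producing the $C r^2$ channels is omitted):

```python
import numpy as np

def pixel_shuffle(x, r=2):
    """Sub-pixel upsampling step: rearrange a (C*r^2, H, W) tensor into
    (C, H*r, W*r), moving each group of r*r channels into an r-by-r
    spatial block (same layout as the standard pixel-shuffle operator)."""
    C2, H, W = x.shape
    C = C2 // (r * r)
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)
```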
8. An unpaired data training method for the complex curved surface efficient measurement system based on Gaussian process transfer learning of claim 1, wherein the unpaired training data comprises: a small amount of error data from actually machined parts and a large number of data pairs synthesized by introducing fractal Brownian motion;
the training specifically comprises the following two stages:
in the first stage, the synthetic data pairs are used for training, the optimized parameters are obtained by minimizing the loss function, and the synthetic sparse data is then input into the network to obtain the implicit vectors $z_s$ through the encoder module for subsequent unsupervised learning, wherein the loss function is expressed as:

$$L_{s1} = \|\hat{y} - y\|_1 + \lambda_c L_c(\hat{y}, y)$$

where $L_c$ denotes the content perceptual loss, $\hat{y}$ represents the output obtained after the input has passed through the network, $y$ is the corresponding ground-truth information for supervision, and $\lambda_c$ represents the content perceptual loss coefficient;
and in the second stage, the actually measured sparse data is input and the model parameters are further refined to suit actual conditions; the data is mapped by the encoder module into a low-dimensional implicit space for supervised learning, and the implicit vector of the data in the low-dimensional implicit space is input into the Gaussian process model together with the implicit vectors of the synthetic sparse data to complete the training, specifically as follows:

the actually measured sparse data is input; because this data has no corresponding high-resolution data and is usually inconsistent with the distribution of the synthetic training data, it is mapped by the encoder module into the low-dimensional implicit space for supervised learning, and its implicit vector in the low-dimensional implicit space is denoted $z_m$; then $z_m$ and the implicit vectors $z_s$ of the synthetic sparse data are input into the Gaussian process model together; with $Z_s$ as a known quantity, the value for $z_m$ can be determined by means of the kernel function:

$$\mu(z_m) = K(z_m, Z_s)\bigl[K(Z_s, Z_s) + \sigma_n^2 I\bigr]^{-1} Y_s$$

$$\sigma^2(z_m) = K(z_m, z_m) - K(z_m, Z_s)\bigl[K(Z_s, Z_s) + \sigma_n^2 I\bigr]^{-1} K(Z_s, z_m)$$

where $\sigma_n^2$ represents the variance of the noise; $\mu(z_m)$ is the predicted mean corresponding to $z_m$ and serves as the true value for supervising the implicit vector $z_m$; $\sigma^2(z_m)$ is the prediction variance corresponding to $z_m$; because the feature library $Z_s$ contains all the implicit vectors of the synthetic data pairs, which are numerous and whose features are not all close to the implicit vector of the actually measured sparse data, a nearest neighbor method is adopted to select from the feature library $Z_s$ the set $\hat{Z}_{near}$ of implicit vectors whose features are closest to $z_m$, so that the mean $\mu_{near}$ generated from it is closer to $z_m$ and the corresponding distance $d_{near}$ is smaller; the nearest neighbor method is likewise used to select from $Z_s$ the set $\hat{Z}_{far}$ of implicit vectors whose features are farthest from $z_m$, so that the mean $\mu_{far}$ generated from it is farther from $z_m$ and the corresponding distance $d_{far}$ is larger; the loss function is expressed as:

$$L_{un} = \|\mu(z_m) - z_m\|_2^2 + \sigma^2(z_m) + d_{near} - d_{far}$$

the training at this stage must also preserve the effect on the first-stage synthetic data, so the overall loss function is:

$$L = L_{s1} + \lambda_{un} L_{un}$$

where $\lambda_{un}$ represents the loss coefficient under the training of the unsupervised, actually measured sparse data.
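The nearest/farthest selection from the feature library described in claim 8 can be sketched directly; the Euclidean metric and the set size `k` are illustrative assumptions:

```python
import numpy as np

def near_far_sets(z_m, Z_s, k=8):
    """From the synthetic feature library Z_s (n, d), pick the k implicit
    vectors whose features are closest to the measured latent z_m and the
    k whose features are farthest from it."""
    d = np.linalg.norm(Z_s - z_m[None, :], axis=1)
    order = np.argsort(d)
    return Z_s[order[:k]], Z_s[order[-k:]]
```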
CN202110987333.8A 2021-08-26 2021-08-26 High-efficient measurement system of complicated curved surface based on gaussian process migration learning Active CN113436237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110987333.8A CN113436237B (en) 2021-08-26 2021-08-26 High-efficient measurement system of complicated curved surface based on gaussian process migration learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110987333.8A CN113436237B (en) 2021-08-26 2021-08-26 High-efficient measurement system of complicated curved surface based on gaussian process migration learning

Publications (2)

Publication Number Publication Date
CN113436237A CN113436237A (en) 2021-09-24
CN113436237B true CN113436237B (en) 2021-12-21

Family

ID=77798046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110987333.8A Active CN113436237B (en) 2021-08-26 2021-08-26 High-efficient measurement system of complicated curved surface based on gaussian process migration learning

Country Status (1)

Country Link
CN (1) CN113436237B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820329B (en) * 2022-07-01 2022-11-25 之江实验室 Curved surface measuring method and device based on Gaussian process large-kernel attention device guidance
CN115761137B (en) * 2022-11-24 2023-12-22 之江实验室 High-precision curved surface reconstruction method and device based on mutual fusion of normal vector and point cloud data
CN117635679A (en) * 2023-12-05 2024-03-01 之江实验室 Curved surface efficient reconstruction method and device based on pre-training diffusion probability model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581626A (en) * 2021-02-23 2021-03-30 之江实验室 Complex curved surface measurement system based on non-parametric and multi-attention force mechanism
CN112819716A (en) * 2021-01-29 2021-05-18 西安交通大学 Unsupervised learning X-ray image enhancement method based on Gauss-Laplacian pyramid

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833666A (en) * 2009-03-11 2010-09-15 中国科学院自动化研究所 Estimation method of scattered point cloud data geometric senses
CN109544612B (en) * 2018-11-20 2021-06-22 西南石油大学 Point cloud registration method based on feature point geometric surface description
CN109886871B (en) * 2019-01-07 2023-04-07 国家新闻出版广电总局广播科学研究院 Image super-resolution method based on channel attention mechanism and multi-layer feature fusion
CN112348959B (en) * 2020-11-23 2024-02-13 杭州师范大学 Adaptive disturbance point cloud up-sampling method based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819716A (en) * 2021-01-29 2021-05-18 西安交通大学 Unsupervised learning X-ray image enhancement method based on Gauss-Laplacian pyramid
CN112581626A (en) * 2021-02-23 2021-03-30 之江实验室 Complex curved surface measurement system based on non-parametric and multi-attention force mechanism

Also Published As

Publication number Publication date
CN113436237A (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN113436237B (en) High-efficient measurement system of complicated curved surface based on gaussian process migration learning
CN110738697B (en) Monocular depth estimation method based on deep learning
CN110348330B (en) Face pose virtual view generation method based on VAE-ACGAN
CN113436211B (en) Medical image active contour segmentation method based on deep learning
CN112330724B (en) Integrated attention enhancement-based unsupervised multi-modal image registration method
CN110909615A (en) Target detection method based on multi-scale input mixed perception neural network
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN113822825B (en) Optical building target three-dimensional reconstruction method based on 3D-R2N2
CN113112583A (en) 3D human body reconstruction method based on infrared thermal imaging
CN114332070A (en) Meteor crater detection method based on intelligent learning network model compression
CN116563682A (en) Attention scheme and strip convolution semantic line detection method based on depth Hough network
CN113378112A (en) Point cloud completion method and device based on anisotropic convolution
CN112581626B (en) Complex curved surface measurement system based on non-parametric and multi-attention force mechanism
CN112163990A (en) Significance prediction method and system for 360-degree image
CN114627017B (en) Point cloud denoising method based on multi-level attention perception
CN114331842B (en) DEM super-resolution reconstruction method combining topographic features
CN112785684B (en) Three-dimensional model reconstruction method based on local information weighting mechanism
CN115115860A (en) Image feature point detection matching network based on deep learning
CN112818920B (en) Double-temporal hyperspectral image space spectrum joint change detection method
CN115035193A (en) Bulk grain random sampling method based on binocular vision and image segmentation technology
CN114881850A (en) Point cloud super-resolution method and device, electronic equipment and storage medium
CN114612315A (en) High-resolution image missing region reconstruction method based on multi-task learning
Guanglong et al. Correlation Analysis between the Emotion and Aesthetics for Chinese Classical Garden Design Based on Deep Transfer Learning
CN114972619A (en) Single-image face three-dimensional reconstruction method based on self-alignment double regression
Song et al. Spatial-Aware Dynamic Lightweight Self-Supervised Monocular Depth Estimation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant