CN118097037A - Meta-universe scene reconstruction method and system - Google Patents

Meta-universe scene reconstruction method and system

Info

Publication number
CN118097037A
CN118097037A (application CN202410516528.8A)
Authority
CN
China
Prior art keywords
metacosmic
data point
sampling line
data
meta
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410516528.8A
Other languages
Chinese (zh)
Inventor
何彩珠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Longyifeng Technology Co ltd
Original Assignee
Dalian Huiyue High Tech Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Huiyue High Tech Development Co ltd
Priority to CN202410516528.8A
Publication of CN118097037A
Legal status: Pending

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical field of three-dimensional scene reconstruction, in particular to a meta-universe scene reconstruction method and system. The method comprises the following steps: acquiring meta-universe three-dimensional point cloud data; acquiring a meta-universe target data point and its sampling line segments, and obtaining the sparsity of each sampling line segment from the data values of the data points in the segment and their Euclidean distances to the meta-universe target data point; classifying the data points in all sampling line segments, obtaining a neighborhood preference value for each class from the classification result and the sparsity of the sampling line segments, and obtaining an initial neighborhood radius based on the neighborhood preference values; obtaining the similarity of the sampling line segments from the data differences between them, and adjusting the initial neighborhood radius according to the similarity and sparsity of the sampling line segments to obtain the adjusted neighborhood radius, thereby completing filtering and denoising; and performing registration and coordinate conversion after filtering and denoising to complete three-dimensional reconstruction. The method reduces the interference of noise points and improves the quality of the three-dimensional point cloud data.

Description

Meta-universe scene reconstruction method and system
Technical Field
The invention relates to the technical field of three-dimensional scene reconstruction, in particular to a meta-universe scene reconstruction method and system.
Background
The metauniverse integrates key technologies such as artificial intelligence, spatial computing, virtual reality, blockchain and digital assets, and is a new, human-centered, 3D-immersive form of the Internet. The quality of its three-dimensional models directly affects the user experience.
Current methods for obtaining three-dimensional point clouds mainly comprise image-based three-dimensional reconstruction and three-dimensional scanning equipment, but both are inevitably affected by factors such as the external environment, equipment precision, object materials and algorithm accuracy, so measurement noise inevitably exists in the point cloud model and the accuracy of complex regions of the scene is reduced. A guided filtering algorithm is therefore required to filter and denoise the point cloud data to obtain a more accurate three-dimensional point cloud and thereby improve the construction quality of the three-dimensional model.
Disclosure of Invention
In order to solve the technical problem that excessive noise exists in the point cloud model, the invention provides a meta-universe scene reconstruction method and system, with the following technical scheme:
In a first aspect, an embodiment of the present invention provides a meta-universe scene reconstruction method, including the following steps:
acquiring multiple groups of metacosmic three-dimensional point cloud data;
Recording any one metacosmic data point in the three-dimensional point cloud data of each group of metacosmic as a metacosmic target data point, setting a plurality of sampling line segments for each metacosmic target data point, and acquiring sparsity of the sampling line segments according to Euclidean distance between the metacosmic data point and the metacosmic target data point in the sampling line segments and data value difference between the metacosmic data point and the metacosmic target data point;
Classifying the meta-cosmic data points in all the sampling line segments according to the Euclidean distance between the meta-cosmic data points in the sampling line segments and the meta-cosmic target data points, acquiring a neighborhood preference value of each type according to the number of the meta-cosmic data points in each type, the data value and the sparsity of the sampling line segments, and taking the Euclidean distance corresponding to the maximum neighborhood preference value as the initial neighborhood radius of the meta-cosmic target data points;
Obtaining the similarity between the sampling line segments according to the data value differences and distance differences of the data points between the sampling line segments and the sparsity difference of the sampling line segments; constructing an initial neighborhood according to the initial neighborhood radius, and acquiring the maximum similarity of each sampling line segment according to the similarities between the sampling line segments; acquiring an adjusted neighborhood radius according to the sparsity and the maximum similarity of the sampling line segments and the data values of the metacosmic data points in the initial neighborhood; and completing filtering and denoising of each group of metacosmic three-dimensional point cloud data according to the adjusted neighborhood radius;
and (3) carrying out three-dimensional point cloud data registration and coordinate conversion after filtering and denoising to finish three-dimensional reconstruction.
Preferably, the method for setting a plurality of sampling line segments for each metauniverse target data point is as follows:
a preset number of sampling line segments are acquired with each metacosmic object data point as a starting point, wherein the sampling line segments must include the preset number of metacosmic data points.
Preferably, the method for acquiring sparsity of the sampling line segment according to Euclidean distance between the metacosmic data point and the metacosmic target data point in the sampling line segment and the data value difference between the metacosmic data point and the metacosmic target data point comprises the following steps:
In the formula, the quantities involved are: the Euclidean distance between the i-th metacosmic neighboring data point and the metacosmic target data point; the Euclidean distance between the (i+1)-th metacosmic neighboring data point and the metacosmic target data point; the data value of the i-th metacosmic neighboring data point; the data value of the metacosmic target data point; the number of selected metacosmic data points in the sampling line segment; the maximum Euclidean distance between a metacosmic data point in the sampling line segment and the metacosmic target data point; and, as the result, the sparsity of the sampling line segment.
Preferably, the method for classifying the meta-cosmic data points in all the sampling line segments according to the euclidean distance between the meta-cosmic data points and the meta-cosmic target data points in the sampling line segments comprises the following steps:
And calculating Euclidean distances between each metacosmic adjacent data point and each metacosmic target data point in all sampling line segments, and classifying the metacosmic adjacent data points with the same Euclidean distances into a class to be marked as a target class.
Preferably, the method for obtaining the neighborhood preference value of each class according to the number of meta-cosmic data points in each class, the data value and the sparsity of the sampling line segment comprises the following steps:
Calculating the absolute value of the difference between the data value of each metacosmic adjacent data point in each class and the data value of the metacosmic target data point, and recording it as a first absolute value; recording the product of the first absolute value corresponding to each metacosmic adjacent data point and the sparsity of the sampling line segment where that data point is located as a first product; accumulating the inverse-proportional normalized values of the first products; and taking the product of this accumulated sum and the number of metacosmic adjacent data points as the neighborhood preference value of each class.
Preferably, the method for obtaining the similarity between the sampling line segments according to the data value difference and the distance difference of the data points between the sampling line segments and the sparsity difference of the sampling line segments comprises the following steps:
In the formula, the quantities involved are: the data value of the i-th metacosmic data point in sampling line segment A; the data value of the i-th metacosmic data point in sampling line segment B; the Euclidean distance between the i-th metacosmic data point in sampling line segment A and the metacosmic target data point; the Euclidean distance between the i-th metacosmic data point in sampling line segment B and the metacosmic target data point; the number of selected metacosmic data points in a sampling line segment; the feature difference between sampling line segment A and sampling line segment B; the sparsity of sampling line segment A; the sparsity of sampling line segment B; a very small constant; and, as the result, the similarity between sampling line segment A and sampling line segment B.
Preferably, the method for constructing the initial neighborhood according to the initial neighborhood radius and obtaining the maximum similarity of each sampling line segment according to the similarity between the sampling line segments comprises the following steps:
Constructing a circular neighborhood by taking a meta space target data point as a circle center and taking an initial neighborhood radius as a radius, and marking the circular neighborhood as an initial neighborhood;
for any one sampling line segment of the metauniverse target data point is marked as a target sampling line segment, the similarity between the target sampling line segment and all the rest sampling line segments is calculated, and the maximum similarity is marked as the maximum similarity of the target sampling line segment.
Preferably, the method for acquiring the adjusted neighborhood radius according to the sparsity and the maximum similarity of the sampling line segments and the data value of the original neighborhood metauniverse data point comprises the following steps:
In the formula, the quantities involved are: the sparsity of the v-th sampling line segment; the number of sampling line segments of the metacosmic target data point; the maximum similarity between the v-th sampling line segment and the remaining sampling line segments; the first adjustment factor of the metacosmic target data point; the data value of the r-th metacosmic data point within the initial neighborhood of the metacosmic target data point; the data value of the metacosmic target data point; the number of metacosmic data points in the initial neighborhood of the metacosmic target data point; the degree of confusion of the metacosmic target data point; a linear normalization function; the radius of the initial neighborhood of the metacosmic target data point; and, as the result, the adjusted neighborhood radius of the metacosmic target data point.
Preferably, the method for completing filtering and denoising of the three-dimensional point cloud data of each component according to the adjusted neighborhood radius comprises the following steps:
For each metacosmic object data point, calculating the reciprocal of the data value difference between the metacosmic object data point and each metacosmic data point in the neighborhood as the similarity between each metacosmic data point and the metacosmic object data point in the neighborhood, taking the ratio of the similarity between each metacosmic data point in the neighborhood and the sum of the similarity between all metacosmic data points in the neighborhood as the weight coefficient of each metacosmic data point in the neighborhood, carrying out weighted summation on all metacosmic data points in the neighborhood of the metacosmic object data point to obtain the filtering result of the metacosmic object data point, and carrying out filtering on each metacosmic object data point to finish the filtering denoising of the three-dimensional point cloud data of each component.
In a second aspect, an embodiment of the present invention further provides a metauniverse scene reconstruction system, including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor implements the steps of any one of the foregoing metauniverse scene reconstruction methods when executing the computer program.
The invention has the following beneficial effects: in order to preserve the edges and other fine features of the point cloud, several sampling line segments are first obtained for each meta-universe data point and their sparsity is analyzed; the sparsity determines the weight each sampling line segment carries in selecting the neighborhood size, and the initial neighborhood is determined on this basis. The constructed initial neighborhood guarantees small data fluctuation and thus preserves the edges of the point cloud well. To further distinguish details from noise, the final neighborhood size is obtained by adjusting the initial neighborhood, so the optimal neighborhood radius of each data point in the guided filtering is obtained adaptively and the denoising and smoothing of the three-dimensional point cloud data are completed. The shape features and edge information of the point cloud are effectively preserved, the interference of noise points is reduced, and the quality of the three-dimensional point cloud data is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a meta-universe scene reconstruction method according to an embodiment of the present invention;
Fig. 2 is an implementation flowchart of the three-dimensional reconstruction in a meta-universe scene reconstruction method according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended purpose, a meta-universe scene reconstruction method and system according to the invention, together with its specific implementation, structure, features and effects, is described in detail below. In the following description, references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
A meta-universe scene reconstruction method and a system embodiment:
The invention provides a specific scheme of a meta-universe scene reconstruction method and system, described below with reference to the drawings.
Referring to fig. 1, a flowchart of a meta-universe scene reconstruction method provided by an embodiment of the invention is shown; the method includes the following steps:
Step S001, acquiring multiple groups of metauniverse three-dimensional point cloud data.
When a metauniverse scene is reconstructed, a building model in the real world needs to be acquired and converted into metauniverse three-dimensional point cloud data for analysis and reconstruction. A binocular camera is used to acquire images of the actual object model; the binocular camera captures two images of the object model at any given angle at the same time, and the two images at each angle are recorded as one group of images. A depth image is obtained for each group of images through a stereo matching algorithm, and the depth image is converted into one group of metauniverse three-dimensional point cloud data; both the stereo matching algorithm and this conversion are known techniques and are not described in detail herein.
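The patent itself contains no code; as a rough illustration of this acquisition step only, the following Python sketch (an assumption, not the patent's implementation) uses OpenCV's StereoSGBM matcher to turn a rectified stereo pair into a colored point cloud. The matcher parameters and the reprojection matrix Q are placeholders that would come from the actual stereo calibration.

```python
# Sketch of step S001 under assumed parameters: compute a disparity map from a
# rectified stereo pair with semi-global block matching, then reproject it to
# a colored 3D point cloud (one group of point cloud data per viewing angle).
import cv2
import numpy as np

def stereo_pair_to_point_cloud(left_bgr, right_bgr, Q):
    """left_bgr/right_bgr: rectified stereo images; Q: 4x4 reprojection matrix
    from stereo calibration (hypothetical values are fine for illustration)."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    matcher = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=128, blockSize=5,
        P1=8 * 3 * 5 ** 2, P2=32 * 3 * 5 ** 2,
        uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Reproject disparity to 3D; keep only points with a valid disparity.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    mask = disparity > disparity.min()
    xyz = points_3d[mask]                                   # N x 3 spatial coordinates
    rgb = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2RGB)[mask]   # N x 3 color information
    return np.hstack([xyz, rgb.astype(np.float32)])         # one group of point cloud data
```

Each group of images taken at one angle would yield one such group of point cloud data, matching the grouping described above.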
So far, the meta-universe three-dimensional point cloud data is obtained.
Step S002, marking any one meta-cosmic data point in the three-dimensional point cloud data of each group of meta-cosmic as a meta-cosmic target data point, setting a plurality of sampling line segments for each meta-cosmic target data point, and acquiring sparsity of the sampling line segments according to Euclidean distance between the meta-cosmic data point and the meta-cosmic target data point in the sampling line segments and data value difference between the meta-cosmic data point and the meta-cosmic target data point.
The metauniverse data points contain spatial coordinates and corresponding color information. When the metauniverse three-dimensional point cloud data are acquired, interference from factors such as the external environment, equipment precision, object materials and algorithm accuracy cannot be avoided, so measurement noise inevitably exists in the point cloud model and the accuracy of complex regions of the scene is reduced. Therefore, this embodiment uses a guided filtering algorithm to filter the metauniverse three-dimensional point cloud data. The selection of the data point neighborhood and the similarity calculation method in guided filtering directly affect the filtering result, so this embodiment adaptively obtains the optimal parameters of the guided filtering and then filters the metauniverse three-dimensional point cloud data.
The neighborhood window size of the guided filtering specifies the range of the neighborhood considered when the filtering is performed. A larger window takes more surrounding points into account when smoothing the point cloud, which helps reduce the effect of noise but may also smooth away detail. Selecting an appropriate window size therefore requires considering both the density of the point cloud and the noise level.
When the metauniverse three-dimensional point cloud data are acquired, noise is mixed with the main body data of the point cloud, i.e., small-scale noise, and when this noise is removed the edges and other fine features of the point cloud should be preserved as much as possible. In addition, because the depth map obtained by the stereo matching algorithm generally has a low resolution, the generated metauniverse point cloud data may be sparse. The metauniverse data points obtained in the above step therefore contain some mixed noise and may have some sparsity, and the neighborhood window size of the guided filtering is analyzed on this basis.
For each group of metacosmic three-dimensional point cloud data, a number of sampling line segments are acquired with each metacosmic data point as the starting point, and each acquired sampling line segment must contain more than m metacosmic data points; m is set to 7 in this embodiment. Because the accuracy of the metacosmic three-dimensional point cloud data acquired from the model is low and the corresponding metacosmic data points may therefore be missing, each acquired sampling line segment is required to contain a defined number of metacosmic data points so that its sparsity can be analyzed.
Any metacosmic data point is marked as a metacosmic target data point and its sampling line segments are acquired, n segments in total (n is set to 20 in this embodiment). For each sampling line segment, the m metacosmic data points closest to the metacosmic target data point are selected as metacosmic adjacent data points, the Euclidean distances between the metacosmic data points in the sampling line segment and the metacosmic target data point are calculated, and the sparsity of the sampling line segment is obtained from the data values of the metacosmic data points and their Euclidean distances, with the formula as follows:
In the formula, the quantities involved are: the Euclidean distance between the i-th metacosmic adjacent data point and the metacosmic target data point; the Euclidean distance between the (i+1)-th metacosmic adjacent data point and the metacosmic target data point; the data value of the i-th metacosmic adjacent data point; the data value of the metacosmic target data point; the number of selected metacosmic data points in the sampling line segment; the maximum Euclidean distance between a metacosmic data point in the sampling line segment and the metacosmic target data point; and, as the result, the sparsity of the sampling line segment.
The larger the sparsity of a sampling line segment, the more discrete the distribution of its data points and the larger its data change, so the segment carries a smaller weight in the neighborhood selection. Conversely, the smaller the sparsity, the more concentrated the distribution of its data points and the smaller its data change, so the segment carries a larger weight in the neighborhood selection.
So far, the sparsity of one sampling line segment corresponding to the metauniverse data point is obtained.
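The sparsity formula itself is presented in the original as an image; the sketch below is therefore only a hypothetical reading of the written description, combining the gaps between consecutive neighbor distances and the data-value differences to the target point, normalized by the largest distance on the segment. The function name and the exact combination are assumptions.

```python
# Hypothetical sketch of the sparsity measure described above (not the
# patent's exact formula): larger values mean a more discrete, more strongly
# varying sampling line segment.
import numpy as np

def segment_sparsity(dists, values, target_value):
    """dists: Euclidean distances of the m neighbors on one sampling line
    segment, sorted ascending; values: their data values; target_value: data
    value of the metacosmic target data point."""
    dists = np.asarray(dists, dtype=float)
    values = np.asarray(values, dtype=float)
    d_max = dists.max() if dists.max() > 0 else 1.0
    gaps = np.diff(dists) / d_max                     # how unevenly the points spread out
    value_diffs = np.abs(values[:-1] - target_value)  # how strongly the data changes
    return float(np.mean(gaps * (1.0 + value_diffs))) # larger => sparser, more discrete
```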
And S003, classifying the meta-cosmic data points in all the sampling line segments according to Euclidean distances between the meta-cosmic data points in the sampling line segments and the meta-cosmic target data points, acquiring a neighborhood preference value of each type according to the number of the meta-cosmic data points in each type, the data value and the sparsity of the sampling line segments, and taking the Euclidean distance corresponding to the maximum neighborhood preference value as the initial neighborhood radius of the meta-cosmic target data points.
According to the above steps, the sparsity of all sampling line segments corresponding to each metacosmic target data point can be obtained. At the same time, each of the m selected metacosmic adjacent data points on each sampling line segment has a Euclidean distance to the metacosmic target data point; the Euclidean distances between all metacosmic adjacent data points and the metacosmic target data point are calculated, the metacosmic adjacent data points with the same Euclidean distance are grouped into one class, and each class is marked as a target class.
Obtaining a neighborhood optimal value of each target class according to the number of the metauniverse adjacent data points, the data value and the sparsity of the sampling line segments where the metauniverse adjacent data points are located in each target class, wherein the formula is as follows:
In the formula, the quantities involved are: the data value of the c-th metacosmic adjacent data point; the data value of the metacosmic target data point; the sparsity of the sampling line segment where the c-th metacosmic adjacent data point is located; the number of metacosmic adjacent data points in the target class; an exponential function with the natural constant as its base; and, as the result, the neighborhood preference value of the target class.
The smaller the sparsity of the sampling line segment where a metacosmic adjacent data point is located, the smaller the data change and the larger the neighborhood preference value; conversely, the larger the data value difference, the larger the color fluctuation of the metacosmic adjacent data points in the target class and the smaller the preference value.
A neighborhood preference value is obtained for each target class, and the Euclidean distance of the target class with the largest preference value is taken as the radius of the initial neighborhood of the metacosmic target data point. A neighborhood constructed with the metacosmic target data point as its center and this Euclidean distance as its radius effectively ensures that the fluctuation of the data points in the neighborhood is small, preserving the edges and fine geometric features of the three-dimensional point cloud data as much as possible.
Thus, an initial neighborhood radius for each data point is obtained.
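As an illustration of how such a selection could be computed, the sketch below follows one plausible reading of the description: neighbors from all sampling line segments are grouped by their Euclidean distance to the target point, each class is scored from the data-value differences, the sparsity of each neighbor's segment, and the class size, and the distance of the best-scoring class becomes the initial radius. The exp(-x) term stands in for the "inversely proportional normalized value" and is an assumption.

```python
# Sketch of step S003 under one plausible reading of the text: the distance of
# the target class with the largest neighborhood preference value becomes the
# initial neighborhood radius of the metacosmic target data point.
import numpy as np
from collections import defaultdict

def initial_neighborhood_radius(neighbors, target_value):
    """neighbors: list of (distance, data_value, segment_sparsity) tuples taken
    from all sampling line segments of one metacosmic target data point."""
    classes = defaultdict(list)
    for dist, value, sparsity in neighbors:
        classes[round(dist, 6)].append((value, sparsity))   # same distance -> same class

    best_radius, best_score = None, -np.inf
    for dist, members in classes.items():
        diffs = np.array([abs(v - target_value) * s for v, s in members])
        score = len(members) * np.exp(-diffs).sum()          # neighborhood preference value
        if score > best_score:
            best_radius, best_score = dist, score
    return best_radius
```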
Step S004, obtaining the similarity between the sampling line segments according to the data value differences and distance differences of the data points between the sampling line segments and the sparsity difference of the sampling line segments; constructing an initial neighborhood according to the initial neighborhood radius, and acquiring the maximum similarity of each sampling line segment according to the similarities between the sampling line segments; acquiring an adjusted neighborhood radius according to the sparsity and the maximum similarity of the sampling line segments and the data values of the metauniverse data points in the initial neighborhood; and completing filtering and denoising of each group of metauniverse three-dimensional point cloud data according to the adjusted neighborhood radius.
According to the above steps, the initial neighborhood size of each data point is obtained. The initial neighborhood only considers the distribution of data points around the metauniverse target data point and not the texture detail information of the metauniverse data points, so the neighborhood of each data point needs to be adjusted further: for metauniverse data points containing more texture detail information the neighborhood is reduced to preserve detail, and for metauniverse data points containing more noise information the neighborhood is enlarged to enhance the denoising effect.
First, the difference between two sampling line segments is obtained from the distance differences and data value differences of the metauniverse data points on them; the similarity of the two segments is then obtained from this difference and the difference in their sparsity. For a metauniverse target data point, a description of its texture information is obtained from the sparsity of all its sampling line segments and the maximum of their similarities, and is recorded as the first adjustment factor; the degree to which the metauniverse target data point reflects noise is obtained from the data value differences of the metauniverse data points in its initial neighborhood and is recorded as the degree of confusion. The initial neighborhood size is then updated according to the first adjustment factor and the degree of confusion of the metauniverse target data point. The formula for the similarity of the sampling line segments is as follows:
In the formula, the quantities involved are: the data value of the i-th metacosmic data point in sampling line segment A; the data value of the i-th metacosmic data point in sampling line segment B; the Euclidean distance between the i-th metacosmic data point in sampling line segment A and the metacosmic target data point; the Euclidean distance between the i-th metacosmic data point in sampling line segment B and the metacosmic target data point; the number of selected metacosmic data points in a sampling line segment; the feature difference between sampling line segment A and sampling line segment B; the sparsity of sampling line segment A; the sparsity of sampling line segment B; a very small constant that prevents the denominator from being zero; and, as the result, the similarity between sampling line segment A and sampling line segment B.
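A hypothetical sketch of this similarity measure is given below; since the formula appears only as an image, the combination used here (a feature difference built from point-wise data-value and distance differences, divided together with the sparsity gap, with a small eps keeping the denominator nonzero) is an assumed reading of the text rather than the patent's exact expression.

```python
# Hypothetical segment-similarity sketch: more similar segments have smaller
# point-wise differences and closer sparsity values.
import numpy as np

def segment_similarity(values_a, dists_a, sparsity_a,
                       values_b, dists_b, sparsity_b, eps=1e-6):
    values_a, values_b = np.asarray(values_a, float), np.asarray(values_b, float)
    dists_a, dists_b = np.asarray(dists_a, float), np.asarray(dists_b, float)
    # Feature difference: point-wise data-value and distance differences.
    feature_diff = np.mean(np.abs(values_a - values_b) + np.abs(dists_a - dists_b))
    # Similarity shrinks as the feature difference and the sparsity gap grow.
    return 1.0 / (feature_diff + abs(sparsity_a - sparsity_b) + eps)
```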
The formula of the neighborhood size after the meta-universe target data point adjustment is as follows:
In the formula, the quantities involved are: the sparsity of the v-th sampling line segment; the number of sampling line segments of the metauniverse target data point; the maximum similarity between the v-th sampling line segment and the remaining sampling line segments; the first adjustment factor of the metauniverse target data point; the data value of the r-th metacosmic data point within the initial neighborhood of the metauniverse target data point; the data value of the metauniverse target data point; the number of metacosmic data points in the initial neighborhood of the metauniverse target data point; the degree of confusion of the metauniverse target data point; a linear normalization function; the radius of the initial neighborhood of the metauniverse target data point; and, as the result, the adjusted neighborhood radius of the metauniverse target data point.
The neighborhood is adjusted repeatedly according to the above formula until the stopping condition is met; the adjustment also stops once the number of adjustments exceeds 20, and the neighborhood radius at that moment is taken as the adjusted neighborhood radius.
In this way, the adjusted neighborhood radius corresponding to every data point of the metacosmic three-dimensional point cloud data is obtained. For each metacosmic target data point, the reciprocal of the data value difference between the target data point and each metacosmic data point in its neighborhood is calculated as the similarity between that neighborhood data point and the target data point; the ratio of each neighborhood data point's similarity to the sum of the similarities of all metacosmic data points in the neighborhood is taken as its weight coefficient; and the weighted sum of all metacosmic data points in the neighborhood of the metacosmic target data point gives the filtering result of the metacosmic target data point.
The above weighting of each metacosmic object data point completes the filtering of all metacosmic data points, which both preserves more accurate edge information and shape characteristics and reduces noise interference.
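A minimal sketch of this weighting scheme, assuming a small eps term (not mentioned in the patent) to guard against a zero data-value difference:

```python
# Inverse-difference weighting within the adjusted neighborhood: similarity is
# the reciprocal of the data-value difference to the target, the similarities
# are normalized into weight coefficients, and the filtered value is the
# weighted sum of the neighbors.
import numpy as np

def filter_target_point(neighbor_values, target_value, eps=1e-6):
    neighbor_values = np.asarray(neighbor_values, dtype=float)
    similarity = 1.0 / (np.abs(neighbor_values - target_value) + eps)
    weights = similarity / similarity.sum()          # weight coefficient of each neighbor
    return float(np.dot(weights, neighbor_values))   # filtering result of the target point
```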
Thus, filtering and denoising of the universe three-dimensional point cloud data of each component is completed.
And S005, carrying out three-dimensional point cloud data registration and coordinate conversion after filtering and denoising to finish three-dimensional reconstruction.
Three-dimensional point cloud data registration is performed on the denoised meta-universe three-dimensional point cloud data using the ICP algorithm. After the registration of the multiple groups of three-dimensional point cloud data is completed, the data are converted into the same world coordinate system to complete the three-dimensional reconstruction and generate a three-dimensional model of the real-world scene; an implementation flowchart of the complete three-dimensional reconstruction is shown in fig. 2. The specific process is a known technique and is not repeated here.
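As a rough sketch of this registration step, the snippet below uses the Open3D library (an assumption; the patent names only "the ICP algorithm"): point-to-point ICP aligns each denoised group to a chosen reference group, and the resulting rigid transforms bring all groups into one shared world coordinate system.

```python
# Sketch of step S005 with Open3D: register every denoised group of points to
# a reference group and merge them in one coordinate system.
import numpy as np
import open3d as o3d

def register_groups(reference_xyz, other_groups_xyz, max_corr_dist=0.05):
    """reference_xyz / other_groups_xyz: (N, 3) arrays of denoised points."""
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(reference_xyz))
    merged = [reference_xyz]
    for xyz in other_groups_xyz:
        source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(xyz))
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        source.transform(result.transformation)      # move into the reference frame
        merged.append(np.asarray(source.points))
    return np.vstack(merged)                         # registered cloud in one world coordinate system
```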
The embodiment provides a meta space scene reconstruction system, which comprises a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor executes the computer program to realize the methods of the steps S001 to S005.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. The meta-universe scene reconstruction method is characterized by comprising the following steps of:
acquiring multiple groups of metacosmic three-dimensional point cloud data;
Recording any one metacosmic data point in the three-dimensional point cloud data of each group of metacosmic as a metacosmic target data point, setting a plurality of sampling line segments for each metacosmic target data point, and acquiring sparsity of the sampling line segments according to Euclidean distance between the metacosmic data point and the metacosmic target data point in the sampling line segments and data value difference between the metacosmic data point and the metacosmic target data point;
Classifying the meta-cosmic data points in all the sampling line segments according to the Euclidean distance between the meta-cosmic data points in the sampling line segments and the meta-cosmic target data points, acquiring a neighborhood preference value of each type according to the number of the meta-cosmic data points in each type, the data value and the sparsity of the sampling line segments, and taking the Euclidean distance corresponding to the maximum neighborhood preference value as the initial neighborhood radius of the meta-cosmic target data points;
Obtaining the similarity between the sampling line segments according to the data value differences and distance differences of the data points between the sampling line segments and the sparsity difference of the sampling line segments; constructing an initial neighborhood according to the initial neighborhood radius, and acquiring the maximum similarity of each sampling line segment according to the similarities between the sampling line segments; acquiring an adjusted neighborhood radius according to the sparsity and the maximum similarity of the sampling line segments and the data values of the metacosmic data points in the initial neighborhood; and completing filtering and denoising of each group of metacosmic three-dimensional point cloud data according to the adjusted neighborhood radius;
and (3) carrying out three-dimensional point cloud data registration and coordinate conversion after filtering and denoising to finish three-dimensional reconstruction.
2. The metauniverse scene reconstruction method of claim 1 wherein the method of setting a number of sampling line segments for each metauniverse target data point is:
a preset number of sampling line segments are acquired with each metacosmic object data point as a starting point, wherein the sampling line segments must include the preset number of metacosmic data points.
3. The metacosmic scene reconstruction method according to claim 1, wherein the method for obtaining sparsity of the sampling line segment according to euclidean distance between the metacosmic data point and the metacosmic target data point in the sampling line segment and data value difference between the metacosmic data point and the metacosmic target data point comprises the following steps:
Selecting a preset number of meta-cosmic data points closest to the meta-cosmic target data points in the sampling line segment and recording the meta-cosmic data points as meta-cosmic adjacent data points;
In the formula, the quantities involved are: the Euclidean distance between the i-th metacosmic neighboring data point and the metacosmic target data point; the Euclidean distance between the (i+1)-th metacosmic neighboring data point and the metacosmic target data point; the data value of the i-th metacosmic neighboring data point; the data value of the metacosmic target data point; the number of selected metacosmic data points in the sampling line segment; the maximum Euclidean distance between a metacosmic data point in the sampling line segment and the metacosmic target data point; and, as the result, the sparsity of the sampling line segment.
4. The metacosmic scene reconstruction method of claim 3, wherein the method of classifying the metacosmic data points in all the sampled line segments according to euclidean distances between the metacosmic data points and the metacosmic target data points in the sampled line segments is:
And calculating Euclidean distances between each metacosmic adjacent data point and each metacosmic target data point in all sampling line segments, and classifying the metacosmic adjacent data points with the same Euclidean distances into a class to be marked as a target class.
5. The meta-cosmic scene reconstruction method according to claim 3, wherein the method for obtaining the neighborhood preference value of each class according to the number of meta-cosmic data points in each class, the data value and the sparsity of the sampling line segment is as follows:
Calculating the absolute value of the difference between the data value of each metacosmic adjacent data point in each class and the data value of the metacosmic target data point, and recording it as a first absolute value; recording the product of the first absolute value corresponding to each metacosmic adjacent data point and the sparsity of the sampling line segment where that data point is located as a first product; accumulating the inverse-proportional normalized values of the first products; and taking the product of this accumulated sum and the number of metacosmic adjacent data points as the neighborhood preference value of each class.
6. The meta-universe scene reconstruction method of claim 1 wherein the method for obtaining similarity between sampled line segments based on data value differences, distance differences, and sparsity differences of data points between sampled line segments is as follows:
In the formula, the quantities involved are: the data value of the i-th metacosmic data point in sampling line segment A; the data value of the i-th metacosmic data point in sampling line segment B; the Euclidean distance between the i-th metacosmic data point in sampling line segment A and the metacosmic target data point; the Euclidean distance between the i-th metacosmic data point in sampling line segment B and the metacosmic target data point; the number of selected metacosmic data points in a sampling line segment; the feature difference between sampling line segment A and sampling line segment B; the sparsity of sampling line segment A; the sparsity of sampling line segment B; a very small constant; and, as the result, the similarity between sampling line segment A and sampling line segment B.
7. The meta-universe scene reconstruction method of claim 1 wherein the method for constructing an initial neighborhood according to an initial neighborhood radius and obtaining the maximum similarity of each sampling line segment according to the similarity between sampling line segments is as follows:
Constructing a circular neighborhood by taking a meta space target data point as a circle center and taking an initial neighborhood radius as a radius, and marking the circular neighborhood as an initial neighborhood;
for any one sampling line segment of the metauniverse target data point is marked as a target sampling line segment, the similarity between the target sampling line segment and all the rest sampling line segments is calculated, and the maximum similarity is marked as the maximum similarity of the target sampling line segment.
8. The meta-cosmic scene reconstruction method according to claim 1, wherein the method for acquiring the adjusted neighborhood radius according to the sparsity of the sampled line segments, the maximum similarity and the data value of the meta-cosmic data point in the initial neighborhood is as follows:
In the formula, the quantities involved are: the sparsity of the v-th sampling line segment; the number of sampling line segments of the metacosmic target data point; the maximum similarity between the v-th sampling line segment and the remaining sampling line segments; the first adjustment factor of the metacosmic target data point; the data value of the r-th metacosmic data point within the initial neighborhood of the metacosmic target data point; the data value of the metacosmic target data point; the number of metacosmic data points in the initial neighborhood of the metacosmic target data point; the degree of confusion of the metacosmic target data point; a linear normalization function; the radius of the initial neighborhood of the metacosmic target data point; and, as the result, the adjusted neighborhood radius of the metacosmic target data point.
9. The meta-universe scene reconstruction method as claimed in claim 1, wherein the method for completing filtering denoising of three-dimensional point cloud data of each component according to the adjusted neighborhood radius is as follows:
For each metacosmic object data point, calculating the reciprocal of the data value difference between the metacosmic object data point and each metacosmic data point in the neighborhood as the similarity between each metacosmic data point and the metacosmic object data point in the neighborhood, taking the ratio of the similarity between each metacosmic data point in the neighborhood and the sum of the similarity between all metacosmic data points in the neighborhood as the weight coefficient of each metacosmic data point in the neighborhood, carrying out weighted summation on all metacosmic data points in the neighborhood of the metacosmic object data point to obtain the filtering result of the metacosmic object data point, and carrying out filtering on each metacosmic object data point to finish the filtering denoising of the three-dimensional point cloud data of each component.
10. A meta-cosmic scene reconstruction system comprising a memory, a processor and a computer program stored in said memory and running on said processor, characterized in that said processor implements the steps of a meta-cosmic scene reconstruction method according to any of claims 1-9 when said computer program is executed.
CN202410516528.8A 2024-04-28 2024-04-28 Meta-universe scene reconstruction method and system Pending CN118097037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410516528.8A CN118097037A (en) 2024-04-28 2024-04-28 Meta-universe scene reconstruction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410516528.8A CN118097037A (en) 2024-04-28 2024-04-28 Meta-universe scene reconstruction method and system

Publications (1)

Publication Number Publication Date
CN118097037A true CN118097037A (en) 2024-05-28

Family

ID=91142594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410516528.8A Pending CN118097037A (en) 2024-04-28 2024-04-28 Meta-universe scene reconstruction method and system

Country Status (1)

Country Link
CN (1) CN118097037A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913396A (en) * 2016-04-11 2016-08-31 湖南源信光电科技有限公司 Noise estimation-based image edge preservation mixed de-noising method
CN115116050A (en) * 2022-08-30 2022-09-27 相国新材料科技江苏有限公司 Manufacturing part appearance online identification method for additive manufacturing equipment
WO2023025030A1 (en) * 2021-08-26 2023-03-02 上海交通大学 Three-dimensional point cloud up-sampling method and system, device, and medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913396A (en) * 2016-04-11 2016-08-31 湖南源信光电科技有限公司 Noise estimation-based image edge preservation mixed de-noising method
WO2023025030A1 (en) * 2021-08-26 2023-03-02 上海交通大学 Three-dimensional point cloud up-sampling method and system, device, and medium
CN115116050A (en) * 2022-08-30 2022-09-27 相国新材料科技江苏有限公司 Manufacturing part appearance online identification method for additive manufacturing equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张彤: "Research on Neighborhood Search and Filtering Algorithms for 3D Reconstruction Point Clouds", China Master's Theses Full-text Database, Information Science and Technology, 15 January 2017 (2017-01-15), pages 1 - 52 *
李广金等: "A Point Cloud Guided Filtering Algorithm Combining Gaussian Statistics", Manufacturing Automation, 30 April 2019 (2019-04-30), pages 80 - 84 *

Similar Documents

Publication Publication Date Title
CN107038717B (en) A method of 3D point cloud registration error is automatically analyzed based on three-dimensional grid
CN110223324B (en) Target tracking method of twin matching network based on robust feature representation
CN110322453B (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN106157330B (en) Visual tracking method based on target joint appearance model
Lange et al. Dld: A deep learning based line descriptor for line feature matching
CN113808277B (en) Image processing method and related device
CN112163990B (en) Significance prediction method and system for 360-degree image
CN111223063A (en) Finger vein image NLM denoising method based on texture features and binuclear function
CN111862278A (en) Animation obtaining method and device, electronic equipment and storage medium
CN116664892A (en) Multi-temporal remote sensing image registration method based on cross attention and deformable convolution
CN106934398B (en) Image de-noising method based on super-pixel cluster and rarefaction representation
CN117456078A (en) Neural radiation field rendering method, system and equipment based on various sampling strategies
Bors et al. Object classification in 3-D images using alpha-trimmed mean radial basis function network
Nousias et al. A saliency aware CNN-based 3D model simplification and compression framework for remote inspection of heritage sites
CN116958420A (en) High-precision modeling method for three-dimensional face of digital human teacher
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN116993947B (en) Visual display method and system for three-dimensional scene
CN116805353B (en) Cross-industry universal intelligent machine vision perception method
CN113344941A (en) Depth estimation method based on focused image and image processing device
CN118097037A (en) Meta-universe scene reconstruction method and system
CN116543259A (en) Deep classification network noise label modeling and correcting method, system and storage medium
CN107464273B (en) Method and device for realizing image style brush
CN116363175A (en) Polarized SAR image registration method based on attention mechanism
CN114140581A (en) Automatic modeling method and device, computer equipment and storage medium
CN106652048B (en) Three-dimensional model interest point extraction method based on 3D-SUSAN operator

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240622

Address after: Room 307, floor 3, block C, No. 8, malianwa North Road, Haidian District, Beijing 100089

Applicant after: Beijing longyifeng Technology Co.,Ltd.

Country or region after: China

Address before: No. 6, 15th Floor, Unit 1, 294 Huale Street, Zhongshan District, Dalian City, Liaoning Province, 116000

Applicant before: Dalian Huiyue High tech Development Co.,Ltd.

Country or region before: China