CN118470226B - Point cloud image fusion modeling method, system and equipment based on deep learning


Info

Publication number
CN118470226B
Authority
China
Prior art keywords
data
point cloud
cad
modeling
model
Prior art date
Legal status
Active
Application number
CN202410939624.3A
Other languages
Chinese (zh)
Other versions
CN118470226A (en)
Inventor
苏新新
刘敏
倪见素
高新燕
牟敬芳
田川
Current Assignee
Shandong Huayun 3d Technology Co ltd
Original Assignee
Shandong Huayun 3d Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Huayun 3d Technology Co ltd filed Critical Shandong Huayun 3d Technology Co ltd
Priority to CN202410939624.3A
Publication of CN118470226A
Application granted
Publication of CN118470226B


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a point cloud image fusion modeling method, system, and equipment based on deep learning, belongs to the technical field of point cloud data reverse modeling, and solves the technical problem that existing CAD modeling methods compute from three-dimensional point cloud data alone and therefore achieve poor precision in mechanical CAD modeling. The method comprises: collecting point cloud data and image data of universal mechanical devices; performing data mapping between the image data and the point cloud data to construct point cloud image fusion data; preprocessing the point cloud image fusion data to construct a point cloud image fusion data set; constructing a CAD reverse modeling model and training it on the point cloud image fusion data set; receiving target point cloud image fusion data through the trained CAD reverse modeling model and performing cloud modeling of the target mechanical device; and, if the resulting CAD model instance does not meet preset conditions, performing parameterization adjustment on the CAD model instance or modifying the target point cloud image fusion data and then performing cloud modeling again.

Description

Point cloud image fusion modeling method, system and equipment based on deep learning
Technical Field
The invention relates to the technical field of point cloud data reverse modeling, in particular to a point cloud image fusion modeling method, system and equipment based on deep learning.
Background
In the field of general-purpose machinery manufacturing, CAD modeling of manufactured machines from point cloud data is an important capability. By processing the three-dimensional point cloud data of a universal machine, the actual CAD reverse modeling model can be quickly reconstructed and compared with the standard CAD reverse modeling model; analyzing the resulting error patterns then guides improvements to the production process, enabling fast and accurate manufacturing of universal machinery.
However, current construction of the actual CAD reverse modeling model relies on three-dimensional point cloud data alone. Precision is easily lost while collecting and processing the three-dimensional data, causing large deviations in the identification of part assembly relationships and in the modeling results, which reduces the accuracy of the actual CAD modeling and distorts the analysis of error patterns.
Disclosure of Invention
The embodiment of the invention provides a point cloud image fusion modeling method, system, and equipment based on deep learning, which are used to solve the following technical problem: the existing CAD modeling method relies only on three-dimensional point cloud data for its calculation, so its data basis is single and the precision of mechanical CAD modeling is poor.
The embodiment of the invention adopts the following technical scheme:
in one aspect, an embodiment of the present invention provides a point cloud image fusion modeling method based on deep learning, where the method includes: acquiring point cloud data and image data of a plurality of universal mechanical devices through a data acquisition device; the data acquisition device comprises a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device and a data transmission device;
Performing data mapping on the image data and the point cloud data to construct point cloud image fusion data;
Carrying out data preprocessing on the point cloud image fusion data to construct a point cloud image fusion data set;
Constructing a CAD reverse modeling model, and training the CAD reverse modeling model through the point cloud image fusion data set;
Receiving target point cloud image fusion data through a trained CAD reverse modeling model and carrying out cloud modeling on a target mechanical device;
If the CAD model instance obtained by cloud modeling does not meet the preset condition, carrying out parameterization adjustment on the CAD model instance or modifying the target point cloud image fusion data, and then carrying out cloud modeling again.
In a possible implementation manner, the data acquisition device acquires point cloud data and image data of a plurality of universal mechanical devices, and specifically includes:
Determining an origin of a coordinate system through the origin positioning device, and communicating the point cloud acquisition device and the image acquisition device with the origin positioning device respectively in a wireless mode;
determining the coordinate system position of the point cloud acquisition device and the image acquisition device through the origin positioning device;
Determining Euler angles of local coordinate systems of the point cloud acquisition device and the image acquisition device through the inclination angle measurement device, and adjusting the Euler angles of the local coordinate systems of the point cloud acquisition device and the image acquisition device to be consistent;
respectively acquiring point cloud data and image data through the point cloud acquisition device and the image acquisition device;
and transmitting the point cloud data and the image data to a cloud server through the data transmission device in real time through a 5G wireless network.
In a possible implementation manner, the data mapping is performed on the image data and the point cloud data, so as to construct point cloud image fusion data, which specifically includes:
determining coordinate offset [ dx, dy, dz ] of the image acquisition device and the point cloud acquisition device according to the coordinate system positions of the point cloud acquisition device and the image acquisition device;
According to the coordinate offset, carrying out data conversion on the point cloud data: [ x2, y2, z2] = [ x1, y1, z1] - [ dx, dy, dz ]; wherein, [ x1, y1, z1] is the original point cloud data coordinates; [ x2, y2, z2] is the mapped point cloud data coordinates;
and determining pixel data in the image data corresponding to each point cloud data according to the coordinate offset, and coloring the point cloud data according to the color of the corresponding pixel data to obtain the point cloud image fusion data.
In a possible implementation manner, the data preprocessing is performed on the point cloud image fusion data to construct a point cloud image fusion data set, which specifically includes:
Calculating a rotation matrix and a translation matrix between the world coordinate system and the local coordinate system according to the coordinate system position of the point cloud acquisition device and the Euler angle of the local coordinate system; carrying out coordinate system transformation on point cloud image fusion data according to the rotation matrix and the translation matrix;
carrying out data preprocessing on the point cloud image fusion data after the coordinate system transformation; the data preprocessing at least comprises data denoising;
And carrying out data registration on the preprocessed point cloud image fusion data to construct the point cloud image fusion data set.
In a possible implementation manner, the data registration is performed on the preprocessed point cloud image fusion data, and the point cloud image fusion data set is constructed, which specifically includes:
separating the ground points and the non-ground points in the preprocessed point cloud image fusion data by a morphological filtering method;
Performing interpolation calculation on the ground points;
Extracting feature points after thinning the non-ground points, searching for corresponding matching points among the feature points of the point cloud image fusion data of two adjacent frames, and performing data registration according to the corresponding matching points;
integrating the interpolated ground points with the non-ground points after registration, performing scene rendering to form effective point cloud image fusion data, and summarizing the effective point cloud image fusion data into a point cloud image fusion data set.
In a possible implementation manner, a CAD reverse modeling model is constructed, and the CAD reverse modeling model is trained through the point cloud image fusion data set, specifically comprising:
Constructing a CAD reverse modeling model based on a CAD frame of a cloud architecture;
Training the CAD reverse modeling model by fusing the point cloud image with the data set, wherein the training method specifically comprises the following steps of:
Performing feature recognition training on the CAD reverse modeling model so that the CAD reverse modeling model can distinguish part boundaries and recognize parts;
Performing assembly relation training on the CAD reverse modeling model so that the CAD reverse modeling model can identify the assembly constraint relation of the part; wherein the assembly constraint relationship includes at least: concentric, tangential, ball-jointed, hinged, and parallel;
and carrying out CAD brep structural training on the CAD reverse modeling model so that the CAD reverse modeling model can fit an accurate CAD brep topological structure through point cloud image fusion data.
In a possible implementation manner, receiving the target point cloud image fusion data and performing cloud modeling of the target mechanical device through the trained CAD reverse modeling model specifically includes:
Acquiring point cloud image fusion data of a target mechanical device, inputting the point cloud image fusion data into a trained CAD reverse modeling model, and identifying a CAD brep topological structure corresponding to the target mechanical device; wherein, the CAD brep topological structure comprises part modeling parameters and part assembly constraint relations;
and calling a CAD-API interface in a cloud CAD platform according to the CAD brep topological structure to perform parameterized modeling, generating a CAD model instance of the target mechanical device, and realizing cloud modeling.
In a possible implementation manner, if a CAD model instance obtained by cloud modeling does not meet a preset condition, performing parameterization adjustment on the CAD model instance or modifying the target point cloud image fusion data, and then re-performing cloud modeling, which specifically includes:
comparing the CAD model instance obtained by cloud modeling with image data corresponding to the target point cloud image fusion data to obtain modeling accuracy of the CAD model instance;
if the modeling accuracy is lower than a preset threshold, adjusting the CAD model instance by modifying modeling parameters, or re-acquiring the target point cloud image data and re-modeling to obtain a more accurate CAD model instance;
performing supplementary acquisition or adjustment of the point cloud data and image data of the device to be modeled, so that the modeling accuracy of the CAD reverse modeling model reaches the preset threshold;
and linking the CAD model instance meeting the preset condition into a browser of the client side so as to enable a user to browse the CAD model instance in real time.
On the other hand, the embodiment of the invention also provides a point cloud image fusion modeling system based on deep learning, which comprises:
The data acquisition module is used for acquiring point cloud data and image data of a plurality of universal mechanical devices through the data acquisition device; the data acquisition device comprises a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device and a data transmission device;
The data fusion processing module is used for carrying out data mapping on the image data and the point cloud data to construct point cloud image fusion data; carrying out data preprocessing on the point cloud image fusion data to construct a point cloud image fusion data set;
The cloud CAD modeling module is used for constructing a CAD reverse modeling model and training the CAD reverse modeling model through the point cloud image fusion data set; receiving target point cloud image fusion data through the trained CAD reverse modeling model and carrying out cloud modeling on a target mechanical device; and carrying out parameterization adjustment on the CAD model obtained by cloud modeling and adjusting the point cloud data and image data of the device to be modeled.
Finally, the embodiment of the invention also provides a point cloud image fusion modeling device based on deep learning, which is characterized by comprising the following components:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above point cloud image fusion modeling method based on deep learning.
Compared with the prior art, the point cloud image fusion modeling method, system and equipment based on deep learning provided by the embodiment of the invention have the following beneficial effects:
The invention designs an omnidirectional data acquisition device, maps and fuses the acquired point cloud data and image data by means of origin positioning, inclination angle measurement, and the like, and constructs the CAD reverse modeling model from the fused data. Compared with reverse modeling that relies on point cloud data alone, this expands the data basis of reverse modeling and can greatly improve its modeling accuracy. During model training, the invention separately trains the model's abilities in boundary feature recognition, part assembly relationship recognition, and CAD brep topological structure recognition, so that the model can accurately fit the topological structure of a mechanical device and thereby achieve accurate cloud modeling.
Finally, the invention also provides a browsing interface for viewing model instances and modifying parameters. A user can supplement data acquisition in real time or manually modify the modeling parameters of a model instance, exercising final adjustment and control over the CAD model and further improving the modeling result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained from these drawings by those skilled in the art without inventive effort. In the drawings:
FIG. 1 is a flow chart of a point cloud image fusion modeling method based on deep learning provided by an embodiment of the invention;
fig. 2 is a schematic diagram of a data acquisition device according to an embodiment of the present invention;
FIG. 3 is a training flow chart of a CAD reverse modeling model provided by the embodiment of the invention;
FIG. 4 is a flowchart of a cloud modeling method according to an embodiment of the present invention;
FIG. 5 is a flowchart of the whole process of the point cloud image fusion modeling provided by the embodiment of the invention;
Fig. 6 is a schematic structural diagram of a point cloud image fusion modeling system based on deep learning according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a point cloud image fusion modeling device based on deep learning according to an embodiment of the present invention.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present invention.
The embodiment of the invention provides a point cloud image fusion modeling method based on deep learning, which specifically comprises the following steps S101-S106 as shown in FIG. 1:
S101, acquiring point cloud data and image data of a plurality of universal mechanical devices through a data acquisition device.
Specifically, the data acquisition device comprises a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device and a data transmission device.
And determining an origin of the coordinate system through the origin positioning device, and communicating the point cloud acquisition device and the image acquisition device with the origin positioning device respectively in a wireless mode. And determining the coordinate system position of the point cloud acquisition device and the image acquisition device through the origin positioning device. And determining Euler angles of local coordinate systems of the point cloud acquisition device and the image acquisition device through the inclination angle measurement device, and adjusting the Euler angles of the local coordinate systems of the point cloud acquisition device and the image acquisition device to be consistent.
Further, point cloud data and image data are acquired through the point cloud acquisition device and the image acquisition device, respectively. The data transmission device then transmits the point cloud data and the image data to a cloud server in real time over a 5G wireless network.
As a possible implementation manner, fig. 2 is a schematic diagram of the data acquisition device architecture provided by an embodiment of the present invention. As shown in fig. 2, the data acquisition device includes a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device, and a data transmission device. The origin of the coordinate system is determined by placing the origin positioning device, and the point cloud acquisition device and the image acquisition device communicate with the origin positioning device wirelessly to determine their positions in the coordinate system. The Euler angles of the local coordinate systems of the point cloud acquisition device and the image acquisition device (camera) are then determined through the inclination angle measurement device and adjusted to be consistent, so that only a translational (distance) offset remains between the two devices. Once the world coordinate system is initially determined, point cloud data acquisition and image data acquisition of the device can begin. The collected data is transmitted to the cloud in real time through the 5G wireless network, and the cloud server stores and processes the data.
S102, performing data mapping on the image data and the point cloud data to construct point cloud image fusion data.
Specifically, the coordinate offset [ dx, dy, dz ] of the image acquisition device and the point cloud acquisition device is determined according to the coordinate system positions of the point cloud acquisition device and the image acquisition device. And then, carrying out data conversion on the point cloud data according to the coordinate offset: [ x2, y2, z2] = [ x1, y1, z1] - [ dx, dy, dz ]; wherein, [ x1, y1, z1] is the original point cloud data coordinates; [ x2, y2, z2] is the mapped point cloud data coordinates.
Further, according to the coordinate offset, determining pixel data in the image data corresponding to each point cloud data, and coloring the point cloud data according to the color of the corresponding pixel data to obtain point cloud image fusion data.
As a possible implementation manner, a color point cloud image (i.e., point cloud image fusion data) is constructed through conversion mapping of image data, and the specific process is as follows:
And calculating coordinate offset of the image acquisition device and the point cloud acquisition device as [ dx, dy, dz ].
Assuming that the original point cloud coordinates are [ x1, y1, z1], the point cloud data is converted into [ x2, y2, z2] = [ x1, y1, z1] - [ dx, dy, dz ].
And then converting the converted point cloud data from a Cartesian coordinate system to a polar coordinate system.
And finally, solving pixel plane coordinates in the color image data, performing one-to-one correspondence with the acquired point cloud data according to the pixel plane coordinates, and coloring the point cloud data according to color information of the pixel data corresponding to the point cloud data to obtain a color point cloud image.
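A minimal sketch of this mapping step is given below in Python. The offset subtraction follows the formula above; the pinhole projection with intrinsics fx, fy, cx, cy used to locate each point's pixel is an assumption, since the patent specifies the offset correction and the pixel correspondence but not the camera model:

```python
import numpy as np

def fuse_point_cloud_with_image(points, image, offset, fx, fy, cx, cy):
    """Shift the point cloud into the camera frame by the device offset,
    project each point through an assumed pinhole model, and colour it
    with the RGB value of the pixel it lands on."""
    shifted = points - np.asarray(offset, dtype=float)  # [x2,y2,z2]=[x1,y1,z1]-[dx,dy,dz]
    x, y, z = shifted[:, 0], shifted[:, 1], shifted[:, 2]
    valid = z > 1e-6                                    # only points in front of the camera
    u = np.zeros(len(points), dtype=int)                # pixel column per point
    v = np.zeros(len(points), dtype=int)                # pixel row per point
    u[valid] = (fx * x[valid] / z[valid] + cx).astype(int)
    v[valid] = (fy * y[valid] / z[valid] + cy).astype(int)
    h, w = image.shape[:2]
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)    # discard points outside the image
    colors = np.zeros((len(points), 3))
    colors[valid] = image[v[valid], u[valid]]           # colour from the matching pixel
    return np.hstack([shifted, colors])                 # (N, 6): xyz + rgb fused data
```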
S103, carrying out data preprocessing on the point cloud image fusion data to construct a point cloud image fusion data set.
Specifically, calculating a rotation matrix and a translation matrix between a world coordinate system and a local coordinate system according to the position of the coordinate system of the point cloud acquisition device and the Euler angle of the local coordinate system; and carrying out coordinate system transformation on the point cloud image fusion data according to the rotation matrix and the translation matrix.
Further, carrying out data preprocessing on the point cloud image fusion data after the coordinate system transformation; the data preprocessing includes at least data denoising.
As a possible implementation, the point cloud coordinate system is converted as follows. The point cloud data is acquired in the local coordinate system of the data acquisition device, so it must be converted from the local coordinate system to the world coordinate system. The method calculates the rotation and translation matrices between the world coordinate system and the local coordinate system by way of rigid-body transformation, and then substitutes the two matrices into the transformation formula between the local and world coordinate systems to transform the point cloud data coordinate system. The rotation matrix is determined by the Euler angles acquired by the point cloud acquisition device, namely the nutation angle, the precession angle, and the spin (rotation) angle; the translation matrix is determined by the position of the point cloud data.
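The sketch below assembles that rigid-body transform, assuming the classical Z-X-Z Euler convention, which matches the precession/nutation/spin naming; the patent does not fix the convention, so this is an illustrative choice:

```python
import numpy as np

def rotation_from_euler(precession, nutation, spin):
    """Z-X-Z rotation matrix built from the precession, nutation, and spin
    angles (in radians) reported by the inclination measurement device."""
    cp, sp = np.cos(precession), np.sin(precession)
    cn, sn = np.cos(nutation), np.sin(nutation)
    cs, ss = np.cos(spin), np.sin(spin)
    Rz1 = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])  # rotate about z by precession
    Rx  = np.array([[1, 0, 0], [0, cn, -sn], [0, sn, cn]])  # rotate about x by nutation
    Rz2 = np.array([[cs, -ss, 0], [ss, cs, 0], [0, 0, 1]])  # rotate about z by spin
    return Rz1 @ Rx @ Rz2

def local_to_world(points, R, t):
    """Apply the rigid-body transform p_world = R @ p_local + t to an
    (N, 3) array; t is the device position in world coordinates."""
    return points @ R.T + t
```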
Further, carrying out data registration on the preprocessed point cloud image fusion data to construct a point cloud image fusion data set, which specifically comprises the following steps:
Separating the ground points and the non-ground points in the preprocessed point cloud image fusion data by a morphological filtering method; performing interpolation calculation on the ground points; extracting feature points after thinning the non-ground points, searching for corresponding matching points among the feature points of the point cloud image fusion data of two adjacent frames, and performing data registration according to the corresponding matching points;
integrating the interpolated ground points with the non-ground points after registration, performing scene rendering to form effective point cloud image fusion data, and summarizing the effective point cloud image fusion data into a point cloud image fusion data set.
As a possible implementation, after converting the coordinate system, a series of preprocessing needs to be performed on the data, so as to reduce noise points that may affect the subsequent modeling effect as much as possible. For the data after preprocessing, morphological filtering is used to separate ground points and non-ground points. Interpolation is performed for the ground points. And for the non-ground points, extracting the characteristic points after thinning, searching the corresponding matching points in the characteristic points of the two adjacent frames of data, and registering according to the corresponding matching points. And integrating the interpolated ground points with the non-ground points after registration, and rendering the scene to form effective point cloud image fusion data.
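As an illustrative sketch of this stage (not the patent's exact algorithm), the ground/non-ground split can be approximated with a grid-based morphological filter, and the frame-to-frame registration with ICP from the Open3D library standing in for the feature-point matching described above:

```python
import numpy as np
import open3d as o3d  # assumed third-party library; the patent does not name one

def split_ground(points, cell=0.5, height_thresh=0.2):
    """Crude morphological-style ground filter: rasterise the cloud into a
    2-D grid, take the minimum height per cell (an erosion of the height
    map), and label points close to that minimum as ground."""
    keys = [tuple(k) for k in np.floor(points[:, :2] / cell).astype(int)]
    min_z = {}
    for k, z in zip(keys, points[:, 2]):
        min_z[k] = min(min_z.get(k, np.inf), z)
    is_ground = np.array([p[2] - min_z[k] < height_thresh
                          for p, k in zip(points, keys)])
    return points[is_ground], points[~is_ground]

def register_frames(src_pts, dst_pts, voxel=0.05):
    """Thin (voxel-downsample) two frames of non-ground points and align
    them with point-to-point ICP as a stand-in for feature matching."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_pts))
    src, dst = src.voxel_down_sample(voxel), dst.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, 0.2,  # max correspondence distance in scene units
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 matrix mapping src into dst
```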
S104, constructing a CAD reverse modeling model, and training the CAD reverse modeling model through the point cloud image fusion data set.
Specifically, a CAD reverse modeling model is built based on a CAD framework of a cloud architecture.
Further, training a CAD reverse modeling model by fusing the point cloud image with the data set, specifically comprising:
And performing feature recognition training on the CAD reverse modeling model so that the CAD reverse modeling model can distinguish part boundaries and recognize parts. Then, carrying out assembly relation training on the CAD reverse modeling model so that the CAD reverse modeling model can identify the assembly constraint relation of the part; wherein, the assembly constraint relationship at least comprises: concentric, tangential, ball-jointed, hinged, and parallel; and carrying out CAD brep structural training on the CAD reverse modeling model so that the CAD reverse modeling model can fit an accurate CAD brep topological structure through the point cloud image fusion data.
As a possible implementation, the deep learning model is trained on point cloud image fusion data paired with the corresponding CAD models, so that it can identify and distinguish different parts and the assembly relationships among them, and can infer an accurate CAD brep structure and model from point cloud image fusion data. Fig. 3 is a training flow chart of the CAD reverse modeling model provided by the embodiment of the invention; as shown in fig. 3, the training process is as follows:
301. Construct the point cloud image fusion data set: acquire a large amount of general machinery surveying and mapping data, build accurate CAD models based on the cloud-architecture CAD framework, and meanwhile use the data acquisition device to acquire sufficient point cloud data and image data to construct the point cloud image fusion data set.
302. Perform feature recognition training on the CAD reverse modeling model: this includes training on features such as boundaries, so that the trained model can distinguish part boundaries and identify parts.
303. Perform assembly relationship training on the CAD reverse modeling model: so that the trained model can identify common assembly constraint relationships such as concentricity, tangency, ball joint, hinge, and parallelism.
304. Perform CAD brep structural training on the CAD reverse modeling model: so that the trained model can fit an accurate CAD model from the point cloud image fusion data.
305. Perform cloud CAD modeling training on the CAD reverse modeling model: so that the trained model can perform parameterized modeling of the cloud CAD model according to the CAD brep topological structure, creating the model by calling the API, including modeling history data such as sketch generation, instance generation, and assembly.
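The patent fixes these training stages (302-305) but not the network or the loss functions, so the skeleton below is hypothetical: a shared encoder with one head per stage, each trained in turn on the fused data set (PyTorch):

```python
import torch
import torch.nn as nn

class ReverseModelingNet(nn.Module):
    """Hypothetical multi-head model: one shared encoder over fused
    (xyz + rgb) points plus a head per training stage."""
    def __init__(self, encoder, seg_head, assembly_head, brep_head):
        super().__init__()
        self.encoder = encoder              # shared feature extractor
        self.seg_head = seg_head            # stage 302: part boundaries / labels
        self.assembly_head = assembly_head  # stage 303: assembly constraints
        self.brep_head = brep_head          # stage 304: CAD brep parameters

def train_stage(model, head, loader, loss_fn, epochs=10, lr=1e-4):
    """Train one stage: optimise the shared encoder plus a single head."""
    opt = torch.optim.Adam(list(model.encoder.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        for fused_points, target in loader:  # batches of (N, 6) fused data
            loss = loss_fn(head(model.encoder(fused_points)), target)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stages run in order on the same fused data set, e.g.:
# train_stage(net, net.seg_head, loader, seg_loss)       # 302
# train_stage(net, net.assembly_head, loader, asm_loss)  # 303
# train_stage(net, net.brep_head, loader, brep_loss)     # 304
```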
S105, receiving target point cloud image fusion data through the trained CAD reverse modeling model and carrying out cloud modeling on the target mechanical device.
Specifically, acquiring point cloud image fusion data of a target mechanical device, inputting the point cloud image fusion data into a trained CAD reverse modeling model, and identifying CAD brep topological structures corresponding to the target mechanical device; the CAD brep topological structure comprises part modeling parameters and part assembly constraint relations.
Further, according to CAD brep topological structure, calling CAD-API interface in the cloud CAD platform to carry out parameterized modeling, generating CAD model instance of the target mechanical device, and realizing cloud modeling.
As a possible implementation, a large model fits naturally with a three-dimensional CAD platform based on the B/S (browser/server) architecture. Fig. 4 is a flowchart of the cloud modeling method provided by the embodiment of the invention. As shown in fig. 4, after training of the large model is completed, point cloud image fusion data is received; the cloud CAD platform based on the B/S architecture then calls the CAD-API interface to create a project, then create parts, and then create an assembly. Sketch generation and parameterized feature extraction are performed for the created parts, and assembly constraint relationship identification is performed for the created assembly.
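The CAD-API itself is not published in the patent, so the sketch below invents endpoint names purely to illustrate the project → parts → assembly call order of fig. 4; every URL and payload field here is an assumption:

```python
import requests

BASE = "https://cad-platform.example.com/api"  # placeholder URL, not a real service

def build_cad_instance(brep, token):
    """Walk the recognised CAD brep structure through a hypothetical REST
    CAD-API: create a project, then its parts, then the assembly."""
    headers = {"Authorization": f"Bearer {token}"}
    project = requests.post(f"{BASE}/projects",
                            json={"name": brep["name"]}, headers=headers).json()
    part_ids = []
    for part in brep["parts"]:  # part modeling parameters from the brep structure
        resp = requests.post(f"{BASE}/projects/{project['id']}/parts",
                             json={"sketch": part["sketch"],
                                   "features": part["features"]},
                             headers=headers)
        part_ids.append(resp.json()["id"])
    # apply the recognised assembly constraints (concentric, tangent, hinged, ...)
    assembly = requests.post(f"{BASE}/projects/{project['id']}/assembly",
                             json={"parts": part_ids,
                                   "constraints": brep["constraints"]},
                             headers=headers)
    return assembly.json()  # metadata of the generated CAD model instance
```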
S106, if the CAD model instance obtained by cloud modeling does not meet the preset conditions, performing parameterization adjustment on the CAD model instance or modifying the target point cloud image fusion data, and then performing cloud modeling again.
Specifically, the CAD model instance obtained by cloud modeling is compared with the image data corresponding to the target point cloud image fusion data to obtain the modeling accuracy of the CAD model instance. The preset condition is that the modeling accuracy reaches a preset threshold.
Further, if the modeling accuracy is lower than the preset threshold, the CAD model instance is adjusted by modifying the modeling parameters, or the target point cloud image data is re-acquired and the modeling redone to obtain a more accurate CAD model instance.
Supplementary acquisition or adjustment of the point cloud data and image data of the device to be modeled is performed so that the modeling accuracy of the CAD reverse modeling model reaches the preset threshold.
Further, the CAD model instance meeting the preset condition is linked into a browser on the client side, so that the user can browse and adjust the CAD model instance in real time.
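Putting S106 together, the accuracy-feedback loop might look like the sketch below, where reverse_model, render, compare, and reacquire_or_adjust are assumed helpers that the patent describes only functionally:

```python
def modeling_loop(fused_data, model, render, compare, threshold):
    """Iterate cloud modeling until the accuracy check passes: build an
    instance, score it against the captured images, and either accept it
    or adjust the inputs and retry."""
    while True:
        instance = model.reverse_model(fused_data)        # fit brep, build instance
        accuracy = compare(render(instance), fused_data.images)
        if accuracy >= threshold:
            return instance                               # meets the preset condition
        # below threshold: modify parameters or re-acquire data, then remodel
        fused_data = fused_data.reacquire_or_adjust()
```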
As a possible implementation manner, fig. 5 is a flowchart of the overall point cloud image fusion modeling process provided by the embodiment of the present invention. As shown in fig. 5, data is first acquired by the data acquisition device and then transmitted by the data transmission device. Point cloud image fusion and data processing are then performed for training the model. After the model is trained, it fits an accurate CAD brep structure from the point cloud image fusion data of the device to be modeled, and cloud CAD modeling is then performed through the cloud platform. The client browses the constructed model in real time through a browser and can supplement or correct data, or manually correct parameters, against the model instance to complete the modeling.
In addition, the embodiment of the invention also provides a point cloud image fusion modeling system based on deep learning, as shown in fig. 6, the point cloud image fusion modeling system 600 based on deep learning specifically includes:
The data acquisition module 610 is configured to acquire, by using a data acquisition device, point cloud data and image data of a plurality of universal mechanical devices; the data acquisition device comprises a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device and a data transmission device;
The data fusion processing module 620 is configured to perform data mapping on the image data and the point cloud data, and construct point cloud image fusion data; carrying out data preprocessing on the point cloud image fusion data to construct a point cloud image fusion data set;
the cloud CAD modeling module 630 is used for constructing a CAD reverse modeling model and training the CAD reverse modeling model through the point cloud image fusion data set; and receiving the cloud image fusion data of the target point through the trained CAD reverse modeling model and carrying out cloud modeling on the target mechanical device.
Finally, the embodiment of the invention also provides a point cloud image fusion modeling device based on deep learning, as shown in fig. 7, which specifically comprises:
At least one processor; and a memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform:
Acquiring point cloud data and image data of a plurality of universal mechanical devices through a data acquisition device; the data acquisition device comprises a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device and a data transmission device;
Performing data mapping on the image data and the point cloud data to construct point cloud image fusion data;
Carrying out data preprocessing on the point cloud image fusion data to construct a point cloud image fusion data set;
Constructing a CAD reverse modeling model, and training the CAD reverse modeling model through the point cloud image fusion data set;
and receiving the cloud image fusion data of the target point through the trained CAD reverse modeling model and carrying out cloud modeling on the target mechanical device.
The embodiments of the present invention are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the descriptions of the apparatus, device, and non-volatile computer storage medium embodiments are relatively brief, since they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the method embodiments.
The foregoing describes certain embodiments of the present invention. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely exemplary of the present invention and is not intended to limit the present invention. Various modifications and changes may be made to the embodiments of the invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the embodiments of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A point cloud image fusion modeling method based on deep learning, characterized by comprising the following steps:
Acquiring point cloud data and image data of a plurality of universal mechanical devices through a data acquisition device; the data acquisition device comprises a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device and a data transmission device, and specifically comprises:
Determining an origin of a coordinate system through the origin positioning device, and communicating the point cloud acquisition device and the image acquisition device with the origin positioning device respectively in a wireless mode;
determining the coordinate system position of the point cloud acquisition device and the image acquisition device through the origin positioning device;
Determining Euler angles of local coordinate systems of the point cloud acquisition device and the image acquisition device through the inclination angle measurement device, and adjusting the Euler angles of the local coordinate systems of the point cloud acquisition device and the image acquisition device to be consistent;
respectively acquiring point cloud data and image data through the point cloud acquisition device and the image acquisition device;
Transmitting the point cloud data and the image data to a cloud server through a 5G wireless network in real time by the data transmission device;
Performing data mapping on the image data and the point cloud data to construct point cloud image fusion data, wherein the method specifically comprises the following steps of:
determining coordinate offset [ dx, dy, dz ] of the image acquisition device and the point cloud acquisition device according to the coordinate system positions of the point cloud acquisition device and the image acquisition device;
According to the coordinate offset, carrying out data conversion on the point cloud data: [ x2, y2, z2] = [ x1, y1, z1] - [ dx, dy, dz ]; wherein, [ x1, y1, z1] is the original point cloud data coordinates; [ x2, y2, z2] is the mapped point cloud data coordinates;
determining pixel data in image data corresponding to each point cloud data according to the coordinate offset, and coloring the point cloud data according to the color of the corresponding pixel data to obtain the point cloud image fusion data;
Carrying out data preprocessing on the point cloud image fusion data to construct a point cloud image fusion data set;
constructing a CAD reverse modeling model, and training the CAD reverse modeling model through the point cloud image fusion data set, wherein the CAD reverse modeling model specifically comprises the following steps:
Constructing a CAD reverse modeling model based on a CAD frame of a cloud architecture;
Training the CAD reverse modeling model by fusing the point cloud image with the data set, wherein the training method specifically comprises the following steps of:
Performing feature recognition training on the CAD reverse modeling model so that the CAD reverse modeling model can distinguish part boundaries and recognize parts;
Performing assembly relation training on the CAD reverse modeling model so that the CAD reverse modeling model can identify the assembly constraint relation of the part; wherein the assembly constraint relationship includes at least: concentric, tangential, ball-jointed, hinged, and parallel;
Carrying out CAD brep structural training on the CAD reverse modeling model so that the CAD reverse modeling model can fit an accurate CAD brep topological structure through point cloud image fusion data;
Carrying out cloud CAD modeling training on the CAD reverse modeling model, so that the CAD reverse modeling model can perform parameterized modeling of the cloud CAD model according to the CAD brep topological structure, creating the model by calling an API (application programming interface);
Receiving target point cloud image fusion data through a trained CAD reverse modeling model and carrying out cloud modeling on a target mechanical device;
If the CAD model instance obtained by cloud modeling does not meet the preset condition, carrying out parameterization adjustment on the CAD model instance or modifying the target point cloud image fusion data, and then carrying out cloud modeling again.
2. The deep learning-based point cloud image fusion modeling method of claim 1, wherein the data preprocessing is performed on the point cloud image fusion data to construct a point cloud image fusion dataset, and the method specifically comprises the following steps:
Calculating a rotation matrix and a translation matrix between the world coordinate system and the local coordinate system according to the coordinate system position of the point cloud acquisition device and the Euler angle of the local coordinate system; carrying out coordinate system transformation on point cloud image fusion data according to the rotation matrix and the translation matrix;
carrying out data preprocessing on the point cloud image fusion data after the coordinate system transformation; the data preprocessing at least comprises data denoising;
And carrying out data registration on the preprocessed point cloud image fusion data to construct the point cloud image fusion data set.
3. The deep learning-based point cloud image fusion modeling method of claim 2, wherein the data registration is performed on the preprocessed point cloud image fusion data, and the point cloud image fusion dataset is constructed, specifically comprising:
separating the ground points and the non-ground points in the preprocessed point cloud image fusion data by a morphological filtering method;
Performing interpolation calculation on the ground points;
Extracting feature points after thinning the non-ground points, searching for corresponding matching points among the feature points of the point cloud image fusion data of two adjacent frames, and performing data registration according to the corresponding matching points;
integrating the interpolated ground points with the non-ground points after registration, performing scene rendering to form effective point cloud image fusion data, and summarizing the effective point cloud image fusion data into a point cloud image fusion data set.
4. The deep learning-based point cloud image fusion modeling method of claim 1, wherein the method is characterized by receiving target point cloud image fusion data and performing cloud modeling of a target mechanical device through a trained CAD reverse modeling model, and specifically comprises the following steps:
Acquiring point cloud image fusion data of a target mechanical device, inputting the point cloud image fusion data into a trained CAD reverse modeling model, and identifying a CAD brep topological structure corresponding to the target mechanical device; wherein, the CAD brep topological structure comprises part modeling parameters and part assembly constraint relations;
and calling a CAD-API interface in a cloud CAD platform according to the CAD brep topological structure to perform parameterized modeling, generating a CAD model instance of the target mechanical device, and realizing cloud modeling.
5. The deep learning-based point cloud image fusion modeling method according to claim 1, wherein if a CAD model instance obtained by cloud modeling does not meet a preset condition, performing parameterization adjustment on the CAD model instance or modifying the target point cloud image fusion data, and then re-performing cloud modeling, specifically comprising:
comparing the CAD model instance obtained by cloud modeling with image data corresponding to the target point cloud image fusion data to obtain modeling accuracy of the CAD model instance;
if the modeling accuracy is lower than a preset threshold, adjusting the CAD model instance by modifying modeling parameters, or re-acquiring the target point cloud image data and re-modeling to obtain a more accurate CAD model instance;
performing supplementary acquisition or adjustment of the point cloud data and image data of the device to be modeled, so that the modeling accuracy of the CAD reverse modeling model reaches the preset threshold;
and linking the CAD model instance meeting the preset condition into a browser of the client side so as to enable a user to browse the CAD model instance in real time.
6. A point cloud image fusion modeling system based on deep learning, the system comprising:
The data acquisition module is used for acquiring point cloud data and image data of a plurality of universal mechanical devices through the data acquisition device; the data acquisition device comprises a point cloud acquisition device, an image acquisition device, an origin positioning device, an inclination angle measurement device and a data transmission device, and specifically comprises:
Determining an origin of a coordinate system through the origin positioning device, and communicating the point cloud acquisition device and the image acquisition device with the origin positioning device respectively in a wireless mode;
determining the coordinate system position of the point cloud acquisition device and the image acquisition device through the origin positioning device;
Determining Euler angles of local coordinate systems of the point cloud acquisition device and the image acquisition device through the inclination angle measurement device, and adjusting the Euler angles of the local coordinate systems of the point cloud acquisition device and the image acquisition device to be consistent;
respectively acquiring point cloud data and image data through the point cloud acquisition device and the image acquisition device;
Transmitting the point cloud data and the image data to a cloud server through a 5G wireless network in real time by the data transmission device;
the data fusion processing module is used for carrying out data mapping on the image data and the point cloud data to construct point cloud image fusion data; performing data preprocessing on the point cloud image fusion data to construct a point cloud image fusion data set, wherein the method specifically comprises the following steps of:
determining coordinate offset [ dx, dy, dz ] of the image acquisition device and the point cloud acquisition device according to the coordinate system positions of the point cloud acquisition device and the image acquisition device;
According to the coordinate offset, carrying out data conversion on the point cloud data: [ x2, y2, z2] = [ x1, y1, z1] - [ dx, dy, dz ]; wherein, [ x1, y1, z1] is the original point cloud data coordinates; [ x2, y2, z2] is the mapped point cloud data coordinates;
determining pixel data in image data corresponding to each point cloud data according to the coordinate offset, and coloring the point cloud data according to the color of the corresponding pixel data to obtain the point cloud image fusion data;
the cloud CAD modeling module is used for constructing a CAD reverse modeling model and training the CAD reverse modeling model through the point cloud image fusion data set, and specifically comprises the following steps:
Constructing a CAD reverse modeling model based on a CAD frame of a cloud architecture;
Training the CAD reverse modeling model by fusing the point cloud image with the data set, wherein the training method specifically comprises the following steps of:
Performing feature recognition training on the CAD reverse modeling model so that the CAD reverse modeling model can distinguish part boundaries and recognize parts;
Performing assembly relation training on the CAD reverse modeling model so that the CAD reverse modeling model can identify the assembly constraint relation of the part; wherein the assembly constraint relationship includes at least: concentric, tangential, ball-jointed, hinged, and parallel;
Carrying out CAD brep structural training on the CAD reverse modeling model so that the CAD reverse modeling model can fit an accurate CAD brep topological structure through point cloud image fusion data;
Carrying out cloud CAD modeling training on the CAD reverse modeling model, so that the CAD reverse modeling model can perform parameterized modeling of the cloud CAD model according to the CAD brep topological structure, creating the model by calling an API (application programming interface);
Receiving target point cloud image fusion data through the trained CAD reverse modeling model and carrying out cloud modeling on a target mechanical device; and carrying out parameterization adjustment on the CAD model obtained by cloud modeling and adjusting the point cloud data and image data of the device to be modeled.
7. A point cloud image fusion modeling apparatus based on deep learning, the apparatus comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform a deep learning based point cloud image fusion modeling method according to any of claims 1-5.
CN202410939624.3A 2024-07-15 2024-07-15 Point cloud image fusion modeling method, system and equipment based on deep learning Active CN118470226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410939624.3A CN118470226B (en) 2024-07-15 2024-07-15 Point cloud image fusion modeling method, system and equipment based on deep learning


Publications (2)

Publication Number Publication Date
CN118470226A (en) 2024-08-09
CN118470226B (en) 2024-09-24

Family

ID=92165460


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110842940A (en) * 2019-11-19 2020-02-28 广东博智林机器人有限公司 Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN115203935A (en) * 2022-07-12 2022-10-18 厦门大学 Frequency selection surface structure topology inverse prediction method and device based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223091B (en) * 2021-04-29 2023-01-24 达闼机器人股份有限公司 Three-dimensional target detection method, three-dimensional target capture device and electronic equipment
US20230267659A1 (en) * 2022-02-24 2023-08-24 Nvidia Corporation Machine-learning techniques for sparse-to-dense spectral reconstruction
CN114998545A (en) * 2022-07-12 2022-09-02 深圳市水务工程检测有限公司 Three-dimensional modeling shadow recognition system based on deep learning
CN117235929B (en) * 2023-09-26 2024-06-04 中国科学院沈阳自动化研究所 Three-dimensional CAD (computer aided design) generation type design method based on knowledge graph and machine learning
CN117522830A (en) * 2023-11-21 2024-02-06 江苏泰秦新材料有限公司 Point cloud scanning system for detecting boiler corrosion


Also Published As

Publication number Publication date
CN118470226A (en) 2024-08-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant