CN116681895A - Method, system, equipment and medium for segmenting airplane grid model component - Google Patents


Info

Publication number
CN116681895A
Authority
CN
China
Prior art keywords
triangular
segmentation
aircraft
feature
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310708950.9A
Other languages
Chinese (zh)
Inventor
魏明强
陈赵威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute Of Nanjing University Of Aeronautics And Astronautics
Nanjing University of Aeronautics and Astronautics
Original Assignee
Shenzhen Research Institute Of Nanjing University Of Aeronautics And Astronautics
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute Of Nanjing University Of Aeronautics And Astronautics, Nanjing University of Aeronautics and Astronautics filed Critical Shenzhen Research Institute Of Nanjing University Of Aeronautics And Astronautics
Priority claimed from CN202310708950.9A
Publication of CN116681895A
Legal status: Pending


Classifications

    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06N 3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V 10/40: Extraction of image or video features
    • G06V 10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06T 2219/20: Indexing scheme for editing of 3D models


Abstract

The invention discloses a method, a system, equipment and a medium for segmenting aircraft mesh model components, belonging to the field of triangular mesh segmentation. The method comprises the following steps: acquiring the aircraft with an instrument, or modeling it in software, to obtain triangular mesh data P_i; extracting the position information and structure information of the triangular patches into preliminary feature vectors; feeding the feature vectors and adjacency relations into CNN and Transformer feature-extraction deep networks to generate model characterizations, and fusing these with a Fused-Attention module to obtain a feature representation f for each triangular patch; and inputting the feature representation f of each triangular patch into a segmentation module to carry out the segmentation task, obtaining segmentation results as triangular-patch category labels. By complementing the advantages of the CNN and the Transformer, the invention better extracts the local and global characteristics of the mesh, improves the accuracy of triangular mesh segmentation, promotes the robustness and generalization of the segmentation task, and applies a segmentation-task-driven strategy to provide stable and reliable technical support for the aircraft mesh segmentation task.

Description

Method, system, equipment and medium for segmenting airplane grid model component
Technical Field
The invention relates to the field of triangular mesh segmentation, in particular to a method, a system, equipment and a medium for segmenting an aircraft mesh model component.
Background
Aircraft mesh model component segmentation divides an entire aircraft model into individual components so that the shape, size and location of each component can be accurately determined and described. Such segmentation helps engineers better understand the structure of the whole aircraft and optimize the design of each component differently; for example, modifying a component's shape and size can improve its performance, such as reducing drag or increasing the lift-to-drag ratio. In addition, component segmentation facilitates assembly and maintenance: each component can be independently disassembled and replaced, reducing maintenance cost and time.
In recent years, CNN-based triangular mesh segmentation methods have achieved good results, but CNNs focus on local features and are weaker than Transformer-based methods at global feature extraction, while Transformers generalize better than CNNs on large datasets and can accelerate model convergence. Based on this state of research, the invention therefore provides a method, a system, equipment and a medium for segmenting aircraft mesh model components to solve the above problems.
Disclosure of Invention
In order to solve the above problems, a method, a system, equipment and a medium for segmenting aircraft mesh model components are provided, addressing the inaccurate segmentation and low robustness of the aircraft mesh model component segmentation methods in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions: the invention provides a method for segmenting an aircraft grid model component, which comprises the following steps:
S1, acquiring the aircraft with an instrument or modeling the aircraft with software to obtain triangular mesh data P_i;
S2, preprocessing the vertexes of the triangular meshes and taking triangular patches as units to acquire position information and structure information of the triangular patches;
S3, extracting the position information and the structure information of the triangular patches into preliminary feature vectors through a multi-layer perceptron;
S4, inputting the feature vectors and the adjacency relations into the CNN and Transformer feature-extraction deep networks to generate model characterizations, and performing feature fusion with the Fused-Attention module to obtain the feature representation f of each triangular patch;
S5, inputting the feature representation f of each triangular patch into the segmentation module to carry out the segmentation task, obtaining segmentation results as triangular-patch category labels.
Preferably, the instrument in step S1 is a three-dimensional scanner; the scanner collects the object as point cloud data, and the point cloud data are reconstructed into the triangular mesh data P_i by the classical Marching Cubes three-dimensional reconstruction algorithm.
Preferably, the software in step S1 is three-dimensional modeling software; the aircraft is modeled with the three-dimensional modeling software Blender to obtain P_i = {p_1, p_2, …, p_m, e_1, e_2, …, e_n}, where m and n are respectively the numbers of vertices and edges of the i-th triangular mesh data P_i.
Preferably, S2 comprises the sub-steps of:
S201, normalizing the vertex coordinates of the triangular mesh data P_i;
S202, taking the triangular patch as the unit, acquiring the coordinates of the patch's central point as its position information, and taking the vectors from the central point to the three vertexes, the three interior angles, and the normal vector of the patch as its structure information.
Preferably, S3 comprises the sub-steps of:
S301, the position encoder position-encodes the central-point coordinates of the triangular patches and the adjacency relations between patches to extract position features: a Laplacian eigenvector matrix U is pre-computed from the graph structure and used as the position feature of the triangular patches, U being defined by the factorization of the graph Laplacian matrix:

Δ = I − D^(−1/2) A D^(−1/2) = U^T Λ U,

where A is the n×n adjacency matrix, D is the degree matrix, and Λ and U hold the eigenvalues and eigenvectors, respectively; the k smallest non-trivial eigenvectors of a node are used as its position code, denoted λ_i for node i.
S302, the vectors from the central point of a triangular patch to its three vertexes and the three interior angles are input to the local structure encoder as local structure information, and the local structure feature is extracted as

f_local = h(v_1, v_2, v_3, θ_1, θ_2, θ_3),

where v_1, v_2 and v_3 are the vectors from the central point to the three vertexes, θ_1, θ_2 and θ_3 are the three interior angles, and h is a parameter-sharing multi-layer perceptron;
S303, inputting the central-point coordinates, normal vectors and adjacency relations of the triangular patches into the neighborhood structure encoder, aggregating the features of each triangular patch with each of its three adjacent patches, extracting the three neighborhood features with a shared multi-layer perceptron, and obtaining the neighborhood structure feature through max pooling.
Preferably, S4 comprises the sub-steps of:
S401, concatenating the extracted position features with the local structure features and inputting them into the Transformer feature-extraction deep network; the Transformer module captures the global dependencies of the triangular mesh data P_i with a self-attention mechanism and integrates the different local structure features with a multi-head attention mechanism, generating the global model characterization f_t extracted by the Transformer;
S402, inputting the extracted neighborhood structure features, position features and triangular-patch adjacency relations into the CNN feature-extraction deep network; each CNN module layer aggregates the features of a triangular patch and its three adjacent patches, expanding the receptive field fourfold, so that stacking four CNN module layers expands the receptive field to 256 triangular patches; the four layers are connected through residual connections, generating the model characterization f_c extracted by the CNN;
S403, the Fused-Attention feature fusion module assigns a learnable weight to each of the model characterizations extracted by the Transformer and the CNN and sums them according to the weight coefficients to obtain the feature representation f of each triangular patch; the formulas of the feature fusion module are:

(w_t, w_c) = enc_fus(f_t, f_c);

f = w_t·f_t + w_c·f_c;

where w_t and w_c are learnable weights with w_t + w_c = 1, f_t is the global model characterization extracted by the Transformer, and f_c is the model characterization extracted by the CNN.
Preferably, S5 comprises the sub-steps of:
S501, inputting the feature representation f of each triangular patch into the segmentation module for segmentation; three fully connected layers, two BatchNorm layers with ReLU activation functions, and a dropout layer with coefficient 0.5 are applied as the classifier to predict the category to which each triangular patch belongs;
S502, calculating the cross-entropy loss between the predicted categories and the true categories to supervise the training of the network; the cross-entropy loss is calculated as

L = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic · log(p_ic),

where N is the number of samples, M is the number of categories, y_ic is an indicator function equal to 1 if the true category of sample i is c and 0 otherwise, and p_ic is the observed probability that sample i belongs to category c.
Preferably, an aircraft mesh model component segmentation system comprises:
The data set construction module is used for acquiring the aircraft with an instrument or modeling the aircraft with software to obtain the triangular mesh data P_i;
The data set processing module is used for preprocessing the vertexes of the triangular meshes and taking the triangular patches as units to acquire the position information and the structure information of the triangular patches;
the feature extraction module is used for extracting the position information and the structure information of the triangular patches into preliminary feature vectors through the multi-layer perceptron;
the feature fusion module is used for inputting the feature vector and the adjacent relation into the CNN and the Transformer feature extraction depth network to generate model representation, and carrying out feature fusion by utilizing the Fused-Attention module to obtain the feature representation f of each triangular patch;
the segmentation task module is used for inputting the characteristic representation f of each triangular patch into the segmentation module to carry out segmentation tasks, and obtaining segmentation results of the triangular patch category labels.
Preferably, an electronic device includes: a memory for storing a computer program, and a processor that runs the computer program to cause the electronic device to perform the aircraft mesh model component segmentation method of any one of claims 1-7.
Preferably, a computer readable storage medium stores a computer program which, when executed by a processor, implements the aircraft mesh model component segmentation method of any one of claims 1-7.
By the technical scheme, the invention provides a method, a system, equipment and a medium for segmenting an aircraft grid model component. The method has at least the following beneficial effects:
According to the method, system, equipment and medium for segmenting aircraft mesh model components, the CNN+Transformer framework complements the advantages of the CNN and the Transformer, capturing the spatial relationships and semantic information in the triangular mesh; feature extraction modules are designed from both the local and the neighborhood perspective, promoting the network's learning of local and global features, thereby improving the accuracy of triangular mesh segmentation as well as the robustness, generalization and accuracy of feature extraction. In addition, a segmentation-task-driven strategy is applied, so that the learned feature representations serve the downstream segmentation task, providing a novel and practical method for mesh surface segmentation and stable, reliable technical support for the aircraft mesh model component segmentation task.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of an implementation of a CNN+Transformer-based aircraft mesh model component segmentation method of the present invention;
FIG. 2 is a network architecture diagram of a CNN+Transformer based aircraft mesh model component segmentation method of the present invention;
FIG. 3 is a visual effect diagram of the CNN+Transformer-based aircraft mesh model component segmentation method of the present invention;
fig. 4 is a block diagram of an aircraft mesh model component segmentation system provided by the present invention.
In the figure: 601. a data set construction module; 602. a data set processing module; 603. a feature extraction module; 604. a feature fusion module; 605. and (5) dividing the task module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a method, a system, equipment and a medium for segmenting an aircraft grid model part, which are used for solving the problems of inaccurate segmentation precision and low robustness of the aircraft grid model part by the aircraft grid model part segmentation method in the prior art.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of the implementation of the CNN+Transformer-based aircraft mesh model component segmentation method in this embodiment of the invention, showing a specific implementation of this embodiment. The invention complements the advantages of the CNN and the Transformer through the CNN+Transformer framework and facilitates the network's learning of local and global features, thereby improving the accuracy of triangular mesh segmentation and the robustness, generalization and accuracy of feature extraction.
As shown in fig. 1, the aircraft mesh model component segmentation method includes the following steps:
S1, acquiring the aircraft with an instrument or modeling the aircraft with software to obtain triangular mesh data P_i;
Specifically, the instrument in step S1 is a three-dimensional scanner; the scanner collects the object as point cloud data, and the point cloud data are reconstructed into the triangular mesh data P_i by the classical Marching Cubes three-dimensional reconstruction algorithm.
Specifically, the software in step S1 is three-dimensional modeling software; the aircraft is modeled with the three-dimensional modeling software Blender to obtain P_i = {p_1, p_2, …, p_m, e_1, e_2, …, e_n}, where m and n are respectively the numbers of vertices and edges of the i-th triangular mesh data P_i.
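The mesh representation P_i = {p_1…p_m, e_1…e_n} above can be sketched as follows; this is an illustrative Python stand-in (the class name and face-based construction are assumptions, not from the patent) showing how the m vertices and n edges are counted from a triangle list:

```python
# Minimal sketch of the mesh data P_i; names are illustrative assumptions.

class TriMesh:
    def __init__(self, vertices, faces):
        self.vertices = vertices          # m vertices, each an (x, y, z) tuple
        self.faces = faces                # triangles as vertex-index triples
        self.edges = set()                # undirected edges collected from faces
        for a, b, c in faces:
            for u, v in ((a, b), (b, c), (c, a)):
                self.edges.add((min(u, v), max(u, v)))

    @property
    def m(self):                          # number of vertices
        return len(self.vertices)

    @property
    def n(self):                          # number of edges
        return len(self.edges)

# a single triangular patch: 3 vertices and 3 edges
mesh = TriMesh([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(mesh.m, mesh.n)
```

Deduplicating edges through sorted index pairs keeps n correct for closed meshes, where every interior edge is shared by two faces.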
S2, preprocessing the vertexes of the triangular meshes and taking triangular patches as units to acquire position information and structure information of the triangular patches;
specifically, S2 includes the following sub-steps:
S201, normalizing the vertex coordinates of the triangular mesh data P_i;
S202, taking the triangular patch as the unit, acquiring the coordinates of the patch's central point as its position information, and taking the vectors from the central point to the three vertexes, the three interior angles, and the normal vector of the patch as its structure information.
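The per-patch quantities of step S202 can be sketched numerically; the function below is an illustrative numpy reading of that step (the function name and return layout are assumptions), computing the centroid, the centre-to-vertex vectors, the three interior angles and the unit normal of one triangular patch:

```python
import numpy as np

def patch_features(a, b, c):
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    center = (a + b + c) / 3.0                     # position information
    vecs = [v - center for v in (a, b, c)]         # centre -> vertex vectors

    def angle(p, q, r):                            # interior angle at vertex p
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    angles = [angle(a, b, c), angle(b, c, a), angle(c, a, b)]
    normal = np.cross(b - a, c - a)
    normal = normal / np.linalg.norm(normal)       # unit face normal
    return center, vecs, angles, normal

center, vecs, angles, normal = patch_features((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(np.degrees(angles))   # right triangle: 90, 45, 45 degrees
```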
S3, extracting the position information and the structure information of the triangular patches into preliminary feature vectors through a multi-layer perceptron;
Specifically, S3 includes the following sub-steps:
S301, referring to fig. 2, the position encoder position-encodes the central-point coordinates of the triangular patches and the adjacency relations between patches to extract position features: a Laplacian eigenvector matrix U is pre-computed from the graph structure and used as the position feature of the triangular patches, U being defined by the factorization of the graph Laplacian matrix:

Δ = I − D^(−1/2) A D^(−1/2) = U^T Λ U,

where A is the n×n adjacency matrix, D is the degree matrix, and Λ and U hold the eigenvalues and eigenvectors, respectively; the k smallest non-trivial eigenvectors of a node are used as its position code, denoted λ_i for node i;
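The Laplacian positional encoding of S301 can be sketched with dense numpy linear algebra. This is a simplification under stated assumptions: the symmetric normalised Laplacian is assumed (the patent's exact variant is not spelled out here), and real meshes would need sparse eigensolvers.

```python
import numpy as np

def laplacian_pe(A, k):
    """Return the k smallest non-trivial Laplacian eigenvectors as position codes."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)                                  # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalised Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)               # ascending eigenvalues
    return eigvecs[:, 1:k + 1]                         # skip the trivial eigenvector

# 4-cycle adjacency graph: each node receives a 2-dimensional position code
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
pe = laplacian_pe(A, k=2)
print(pe.shape)
```

`numpy.linalg.eigh` is used because the normalised Laplacian is symmetric, which also guarantees real eigenvalues sorted in ascending order.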
S302, the vectors from the central point of a triangular patch to its three vertexes and the three interior angles are input to the local structure encoder as local structure information, and the local structure feature is extracted as

f_local = h(v_1, v_2, v_3, θ_1, θ_2, θ_3),

where v_1, v_2 and v_3 are the vectors from the central point to the three vertexes, θ_1, θ_2 and θ_3 are the three interior angles, and h is a parameter-sharing multi-layer perceptron;
S303, inputting the central-point coordinates, normal vectors and adjacency relations of the triangular patches into the neighborhood structure encoder, aggregating the features of each triangular patch with each of its three adjacent patches, extracting the three neighborhood features with a shared multi-layer perceptron, and obtaining the neighborhood structure feature through max pooling.
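A hedged sketch of the neighborhood aggregation in S303: each patch feature is paired with each of its three neighbours, pushed through a shared map standing in for the multi-layer perceptron, and max-pooled. The feature dimensions and the fixed random weights are illustrative assumptions, not the patented network.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((14, 8))        # stand-in shared "MLP": 2x7 -> 8 dims

def neighborhood_feature(patch_feat, neighbor_feats):
    outs = []
    for nb in neighbor_feats:                    # one aggregation per adjacent patch
        x = np.concatenate([patch_feat, nb])     # pair the patch with a neighbour
        outs.append(np.maximum(x @ W, 0.0))      # shared weights + ReLU
    return np.max(np.stack(outs), axis=0)        # max pooling over the three

patch = rng.standard_normal(7)          # e.g. centroid (3) + normal (3) + one extra
neighbors = [rng.standard_normal(7) for _ in range(3)]
f_nbh = neighborhood_feature(patch, neighbors)
print(f_nbh.shape)
```

Max pooling makes the result order-invariant in the three neighbours, which matters because a mesh face has no canonical neighbour ordering.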
S4, inputting the feature vectors and the adjacency relations into the CNN and Transformer feature-extraction deep networks to generate model characterizations, and performing feature fusion with the Fused-Attention module to obtain the feature representation f of each triangular patch; fig. 2 shows the network structure of the CNN+Transformer-based aircraft mesh model component segmentation method in this embodiment of the invention.
Specifically, S4 includes the following substeps:
S401, concatenating the extracted position features with the local structure features and inputting them into the Transformer feature-extraction deep network; the Transformer module captures the global dependencies of the triangular mesh data P_i with a self-attention mechanism and integrates the different local structure features with a multi-head attention mechanism, generating the global model characterization f_t extracted by the Transformer;
S402, inputting the extracted neighborhood structure features, position features and triangular-patch adjacency relations into the CNN feature-extraction deep network; each CNN module layer aggregates the features of a triangular patch and its three adjacent patches, expanding the receptive field fourfold, so that stacking four CNN module layers expands the receptive field to 256 triangular patches; the four layers are connected through residual connections, generating the model characterization f_c extracted by the CNN;
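One CNN module layer as described in S402 can be sketched as follows: the aggregation of a face with its three edge-neighbours, the shared weights and the residual connection are modelled, while the feature dimension and the toy tetrahedron adjacency are illustrative assumptions.

```python
import numpy as np

def cnn_layer(F, neighbors, W):
    """F: (faces, d) features; neighbors[i]: the 3 faces adjacent to face i."""
    out = np.empty_like(F)
    for i, nbrs in enumerate(neighbors):
        agg = np.concatenate([F[i]] + [F[j] for j in nbrs])   # face + 3 neighbours
        out[i] = np.maximum(agg @ W, 0.0) + F[i]              # ReLU + residual
    return out

d = 4
rng = np.random.default_rng(1)
W = rng.standard_normal((4 * d, d)) * 0.1    # shared weights, 4d -> d
F = rng.standard_normal((4, d))
# toy closed mesh (a tetrahedron): every face borders the other three
neighbors = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
F1 = cnn_layer(F, neighbors, W)
print(F1.shape)
```

Because each layer mixes in features from one ring of neighbours, the fourfold growth per layer compounds to 4^4 = 256 faces after four stacked layers, matching the receptive-field arithmetic in the text.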
S403, the Fused-Attention feature fusion module assigns a learnable weight to each of the model characterizations extracted by the Transformer and the CNN and sums them according to the weight coefficients to obtain the feature representation f of each triangular patch; the formulas of the feature fusion module are:

(w_t, w_c) = enc_fus(f_t, f_c);

f = w_t·f_t + w_c·f_c;

where w_t and w_c are learnable weights with w_t + w_c = 1, f_t is the global model characterization extracted by the Transformer, and f_c is the model characterization extracted by the CNN.
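A minimal numpy sketch of the fusion step S403: a small scoring map stands in for enc_fus, a softmax produces weights satisfying w_t + w_c = 1, and the fused feature is the weighted sum. The scoring mechanism here is an assumption, not the patented Fused-Attention architecture.

```python
import numpy as np

def fused_attention(f_t, f_c, w_score):
    scores = np.array([f_t @ w_score, f_c @ w_score])   # stand-in for enc_fus
    e = np.exp(scores - scores.max())
    w_t, w_c = e / e.sum()                  # softmax: weights sum to 1
    return w_t * f_t + w_c * f_c, (w_t, w_c)

rng = np.random.default_rng(2)
f_t = rng.standard_normal(8)                # Transformer characterization
f_c = rng.standard_normal(8)                # CNN characterization
f, (w_t, w_c) = fused_attention(f_t, f_c, rng.standard_normal(8))
print(round(float(w_t + w_c), 6))
```

The softmax enforces the constraint w_t + w_c = 1 by construction, so the fused feature stays a convex combination of the two branch characterizations.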
S5, inputting the feature representation f of each triangular patch into the segmentation module to carry out the segmentation task, obtaining segmentation results as triangular-patch category labels and thereby segmenting the various parts of the airplane, including the fuselage, wings, tail, engines and landing gear; fig. 3 shows the visual effect of the CNN+Transformer-based aircraft mesh model component segmentation method of this embodiment of the invention.
Specifically, S5 includes the following substeps:
S501, inputting the feature representation f of each triangular patch into the segmentation module for segmentation; three fully connected layers, two BatchNorm layers with ReLU activation functions, and a dropout layer with coefficient 0.5 are applied as the classifier to predict the category to which each triangular patch belongs;
S502, calculating the cross-entropy loss between the predicted categories and the true categories to supervise the training of the network; the cross-entropy loss is calculated as

L = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{M} y_ic · log(p_ic),

where N is the number of samples, M is the number of categories, y_ic is an indicator function equal to 1 if the true category of sample i is c and 0 otherwise, and p_ic is the observed probability that sample i belongs to category c.
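The supervision step S502 can be sketched as a numerically stable softmax followed by the averaged cross-entropy; the logits and labels below are toy values, not data from the patent.

```python
import numpy as np

def cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)        # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # p_ic per sample/class
    n = len(labels)
    return -np.log(p[np.arange(n), labels]).mean()        # -(1/N) sum of log p_iy

logits = np.array([[4.0, 0.0, 0.0],     # face 1: confident in class 0
                   [0.0, 4.0, 0.0]])    # face 2: confident in class 1
labels = np.array([0, 1])               # ground-truth categories (both correct)
loss = cross_entropy(logits, labels)
print(loss)
```

Subtracting the row maximum before exponentiating leaves the softmax unchanged but avoids overflow for large logits.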
Example two
In order to execute the method of the above embodiment and achieve the corresponding functions and technical effects, an aircraft mesh model component segmentation system is provided, as shown in fig. 4, comprising: a data set construction module 601 for acquiring the aircraft with an instrument or modeling the aircraft with software to obtain the triangular mesh data P_i; a data set processing module 602 for preprocessing the vertices of the triangular mesh and obtaining the position and structure information of the triangular patches, taking patches as units; a feature extraction module 603 for extracting the position and structure information of the triangular patches into preliminary feature vectors through a multi-layer perceptron; a feature fusion module 604 for inputting the feature vectors and adjacency relations into the CNN and Transformer feature-extraction deep networks to generate model characterizations, and fusing these with the Fused-Attention module to obtain the feature representation f of each triangular patch; and a segmentation task module 605 for inputting the feature representation f of each triangular patch into the segmentation module to carry out the segmentation task, obtaining segmentation results as triangular-patch category labels.
Example III
Specifically, an electronic device comprises: a memory for storing a computer program, and a processor that runs the computer program to cause the electronic device to perform the aircraft mesh model component segmentation method of any one of claims 1-7.
Example IV
Specifically, a computer readable storage medium stores a computer program which, when executed by a processor, implements the aircraft mesh model component segmentation method of any one of claims 1-7.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts among the embodiments may refer to each other. The system disclosed in an embodiment corresponds to the method disclosed in that embodiment, so its description is relatively brief; for relevant details, refer to the description of the method.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to assist in understanding the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of these ideas also fall within the scope of the invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (10)

1. A method of aircraft mesh model component segmentation comprising the steps of:
S1, collecting the aircraft with an instrument or modeling the aircraft with software to acquire triangular mesh data P_i;
S2, preprocessing the vertexes of the triangular meshes and, taking triangular patches as units, acquiring position information and structure information of the triangular patches;
S3, extracting the position information and the structure information of the triangular patches into preliminary feature vectors through a multi-layer perceptron;
S4, inputting the feature vectors and adjacency relations into the CNN and Transformer feature extraction depth networks to generate model characterizations, and performing feature fusion with the Fused-Attention module to obtain the feature representation f of each triangular patch;
S5, inputting the feature representation f of each triangular patch into the segmentation module to perform the segmentation task, obtaining a segmentation result of triangular patch category labels.
2. The method for segmenting an aircraft mesh model component according to claim 1, wherein the instrument in step S1 is a three-dimensional scanner, the three-dimensional scanner collects the object as point cloud data, and the point cloud data is reconstructed into triangular mesh data P_i by the conventional Marching Cubes three-dimensional reconstruction algorithm.
3. The method for segmenting an aircraft mesh model component according to claim 1, wherein the software in step S1 is three-dimensional modeling software; the aircraft is modeled by the three-dimensional modeling software Blender to obtain P_i = {p_1, p_2, …, p_m, e_1, e_2, …, e_n}, where m and n are respectively the numbers of vertices and edges of the i-th triangular mesh data P_i.
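As an illustrative (non-claimed) sketch of the data structure in claim 3, the following assumes a minimal container holding the m vertices and deriving the n unique edges from the triangular faces; the class and method names are hypothetical and chosen only for this example.

```python
import numpy as np

class TriMesh:
    """Minimal stand-in for P_i = {p_1..p_m, e_1..e_n}: m vertex
    positions plus the n undirected edges implied by the faces."""
    def __init__(self, vertices, faces):
        self.vertices = np.asarray(vertices, dtype=float)  # (m, 3) positions p_k
        self.faces = np.asarray(faces, dtype=int)          # (F, 3) vertex indices

    def edges(self):
        """Collect the unique undirected edges e_k from the face list."""
        e = set()
        for a, b, c in self.faces:
            for u, v in ((a, b), (b, c), (c, a)):
                e.add((int(min(u, v)), int(max(u, v))))
        return sorted(e)

# A single triangle: m = 3 vertices, n = 3 edges.
mesh = TriMesh([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [[0, 1, 2]])
```

For a scanned model the same container would be filled from the reconstructed Marching Cubes output instead of hand-written coordinates.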
4. An aircraft mesh model component segmentation method according to claim 1, wherein S2 comprises the sub-steps of:
S201, normalizing the vertex coordinates of the triangular mesh data P_i;
S202, taking triangular patches as units, acquiring the coordinates of the center point of each triangular patch as position information, and taking the vectors from the center point to the three vertexes, the angles of the three interior angles, and the normal vector of the triangular patch as structure information.
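The per-patch quantities named in claim 4 (S202) can be computed directly from the three vertex coordinates. The sketch below is illustrative only, assuming a `(3, 3)` array of one triangle's vertices; the function name is hypothetical.

```python
import numpy as np

def face_features(verts):
    """For one triangle: center (position info), plus the structure info of
    claim 4 (S202): center-to-vertex vectors, three interior angles, and
    the unit face normal."""
    verts = np.asarray(verts, dtype=float)
    center = verts.mean(axis=0)
    corner_vecs = verts - center              # vectors from center to vertices
    angles = []
    for i in range(3):
        u = verts[(i + 1) % 3] - verts[i]
        v = verts[(i + 2) % 3] - verts[i]
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    normal = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    normal /= np.linalg.norm(normal)
    return center, corner_vecs, np.array(angles), normal

center, vecs, angles, normal = face_features([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
```

The three interior angles always sum to pi, which gives a quick sanity check on the extraction.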
5. An aircraft mesh model component segmentation method according to claim 1, wherein S3 comprises the sub-steps of:
S301, extracting position features: the coordinates of the center points of the triangular patches and the adjacency relations between the triangular patches are position-encoded by a position encoder; a Laplacian eigenvector matrix U is pre-computed from the graph structure and used as the position feature of the triangular patches, the eigenvectors U being defined by the factorization of the graph Laplacian matrix, expressed as:
L = I − D^(−1/2) A D^(−1/2) = U Λ U^T
wherein A is the n × n adjacency matrix, D is the degree matrix, and Λ and U correspond to the eigenvalues and eigenvectors, respectively; the k smallest non-trivial eigenvectors are used as the position codes of the nodes, with λ_i denoting the position code of node i;
S302, inputting the vectors pointing from the center point of each triangular patch to its three vertexes and the angles of its three interior angles, as local structure information, into a local structure encoder to extract local structure features:
f_s = h(v_1, v_2, v_3, θ_1, θ_2, θ_3)
wherein v_1, v_2 and v_3 are the vectors pointing from the center point to the three vertexes, θ_1, θ_2 and θ_3 are the angles of the three interior angles, and h is a multi-layer perceptron with shared parameters;
S303, inputting the center point coordinates, normal vectors and adjacency relations of the triangular patches into a neighborhood structure encoder, performing feature aggregation on each triangular patch and its three adjacent patches respectively, extracting the three neighborhood features with a shared multi-layer perceptron, and obtaining the neighborhood structure feature through max pooling.
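The Laplacian position encoding of claim 5 (S301) can be sketched with a dense eigendecomposition; this is illustrative only, assuming the symmetric-normalized Laplacian and a small face-adjacency matrix (a real mesh would use sparse solvers), and the function name is hypothetical.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Position codes from the k smallest non-trivial eigenvectors of the
    symmetric-normalized graph Laplacian L = I - D^(-1/2) A D^(-1/2)."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigval, eigvec = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return eigvec[:, 1:k + 1]              # drop the trivial first mode

# Toy face-adjacency graph: a 4-cycle (each node has two neighbours).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, k=2)
```

Each of the n nodes then receives a k-dimensional position code, here of shape (4, 2).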
6. An aircraft mesh model component segmentation method according to claim 1, wherein S4 comprises the sub-steps of:
S401, concatenating the extracted position features with the local structure features and inputting them into the Transformer feature extraction depth network; the Transformer module captures long-range dependencies in the triangular mesh data P_i using a self-attention mechanism, and integrates the different local structure features using a multi-head attention mechanism to generate a global model characterization f_t′ extracted from the Transformer;
S402, inputting the extracted neighborhood structure features, position features and adjacency relations of the triangular patches into the CNN feature extraction depth network, wherein each CNN module layer aggregates the features of a triangular patch and its three adjacent patches, expanding the receptive field fourfold; four CNN modules are stacked to expand the receptive field to 256 triangular patches and are connected through residual connections, generating a model characterization f_c′ extracted from the CNN;
S403, in the Fused-Attention feature fusion module, assigning a learnable weight to each of the model characterizations extracted by the Transformer and the CNN, and adding them according to the weight coefficients to obtain the feature representation f of each triangular patch, wherein the formulas of the feature fusion module are:
(w_t, w_c) = enc_fus(f_t, f_c);
f = w_t f_t + w_c f_c;
wherein w_t and w_c are learnable weights with w_t + w_c = 1, f_t denotes the global model characterization extracted from the Transformer, and f_c denotes the model characterization extracted from the CNN.
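The weighted fusion of claim 6 (S403) reduces to a convex combination of the two characterizations. The sketch below is illustrative: the claim does not specify how enc_fus produces the weights, so here two logits (a hypothetical stand-in for its output) are normalized with a softmax, which guarantees w_t + w_c = 1.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fused_attention(f_t, f_c, w_logits):
    """Convex fusion of the Transformer and CNN characterizations:
    normalize two logits so w_t + w_c = 1, then form f = w_t*f_t + w_c*f_c."""
    w_t, w_c = softmax(np.asarray(w_logits, dtype=float))
    return w_t * np.asarray(f_t) + w_c * np.asarray(f_c), (w_t, w_c)

# Equal logits -> equal weights -> plain average of the two branches.
f, (w_t, w_c) = fused_attention([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])
```

In training, the logits would be produced by the fusion encoder and updated by backpropagation; the softmax keeps the constraint satisfied throughout.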
7. An aircraft mesh model component segmentation method according to claim 1, wherein S5 comprises the sub-steps of:
S501, inputting the feature representation f of each triangular patch into the segmentation module for segmentation; three fully connected layers, two BatchNorm and ReLU activation functions, and a dropout layer with coefficient 0.5 are applied as the classifier to predict the category to which each triangular patch belongs;
S502, calculating the cross entropy loss between the predicted categories and the real categories to supervise the training of the network, wherein the cross entropy loss is calculated as:
L = −(1/N) Σ_i Σ_{c=1}^{M} y_ic log(P_ic)
where N is the number of samples, M is the number of categories, y_ic is a sign function (1 if the real category of sample i is c, 0 otherwise), and P_ic is the predicted probability that sample i belongs to category c.
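The cross-entropy supervision of claim 7 (S502) can be written out directly; a minimal sketch, assuming one-hot labels y_ic and a row-stochastic probability matrix P_ic (the function name is illustrative).

```python
import numpy as np

def cross_entropy(y_onehot, probs, eps=1e-12):
    """Mean cross-entropy over N samples and M categories:
    L = -(1/N) * sum_i sum_c y_ic * log(P_ic)."""
    y = np.asarray(y_onehot, dtype=float)
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)  # guard log(0)
    return -(y * np.log(p)).sum(axis=1).mean()

# Two samples, three categories (e.g. fuselage / wing / tail labels).
y = np.array([[1, 0, 0], [0, 1, 0]])
p = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])
loss = cross_entropy(y, p)
```

Because y_ic is one-hot, each sample contributes only the negative log-probability assigned to its true category.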
8. An aircraft mesh model component segmentation system, comprising:
the data set construction module is used for acquiring the aircraft by using an instrument or modeling the aircraft by using software to acquire the triangular mesh data P i
The data set processing module is used for preprocessing vertexes of the triangular meshes and taking triangular patches as units to acquire position information and structure information of the triangular patches;
the feature extraction module is used for extracting the position information and the structure information of the triangular patches into preliminary feature vectors through the multi-layer perceptron;
the feature fusion module is used for inputting the feature vector and the adjacent relation into the CNN and the transform feature extraction depth network to generate model representation, and carrying out feature fusion by utilizing the Fused-Attention module to obtain the feature representation f of each triangular patch;
and the segmentation task module inputs the characteristic representation f of each triangular patch into the segmentation module to carry out segmentation tasks, and a segmentation result of the triangular patch category label is obtained.
9. An electronic device, comprising: a memory for storing a computer program, and a processor that runs the computer program to cause the electronic device to perform the aircraft mesh model component segmentation method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the aircraft mesh model component segmentation method of any one of claims 1-7.
CN202310708950.9A 2023-06-15 2023-06-15 Method, system, equipment and medium for segmenting airplane grid model component Pending CN116681895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310708950.9A CN116681895A (en) 2023-06-15 2023-06-15 Method, system, equipment and medium for segmenting airplane grid model component


Publications (1)

Publication Number Publication Date
CN116681895A true CN116681895A (en) 2023-09-01

Family

ID=87787023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310708950.9A Pending CN116681895A (en) 2023-06-15 2023-06-15 Method, system, equipment and medium for segmenting airplane grid model component

Country Status (1)

Country Link
CN (1) CN116681895A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190105009A1 (en) * 2017-10-10 2019-04-11 Holo Surgical Inc. Automated segmentation of three dimensional bony structure images
CN110349159A (en) * 2019-06-21 2019-10-18 浙江大学宁波理工学院 3D shape dividing method and system based on the distribution of weight energy self-adaptation
US20210049756A1 (en) * 2019-08-13 2021-02-18 Hong Kong Applied Science and Technology Research Institute Company Limited Medical Image Segmentation Based on Mixed Context CNN Model
CN114092697A (en) * 2021-11-09 2022-02-25 南京林业大学 Building facade semantic segmentation method with attention fused with global and local depth features
CN114638956A (en) * 2022-05-23 2022-06-17 南京航空航天大学 Whole airplane point cloud semantic segmentation method based on voxelization and three-view
CN115482241A (en) * 2022-10-21 2022-12-16 上海师范大学 Cross-modal double-branch complementary fusion image segmentation method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王本杰; 农丽萍; 张文辉; 林基明; 王俊义: "3D point cloud classification and segmentation network based on Spider convolution", Journal of Computer Applications, vol. 40, no. 06, pages 1607-1612 *
罗会兰; 张云: "Semantic segmentation combining contextual features and CNN multi-layer feature fusion", Journal of Image and Graphics, vol. 24, no. 12, pages 2200-2209 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993752A (en) * 2023-09-27 2023-11-03 中国人民解放军国防科技大学 Semantic segmentation method, medium and system for live-action three-dimensional Mesh model
CN117315194A (en) * 2023-09-27 2023-12-29 南京航空航天大学 Triangular mesh representation learning method for large aircraft appearance
CN116993752B (en) * 2023-09-27 2024-01-09 中国人民解放军国防科技大学 Semantic segmentation method, medium and system for live-action three-dimensional Mesh model

Similar Documents

Publication Publication Date Title
CN109410307B (en) Scene point cloud semantic segmentation method
CN116681895A (en) Method, system, equipment and medium for segmenting airplane grid model component
Boulch et al. FKAConv: Feature-kernel alignment for point cloud convolution
CN112651316B (en) Two-dimensional and three-dimensional multi-person attitude estimation system and method
US11544898B2 (en) Method, computer device and storage medium for real-time urban scene reconstruction
CN111209974A (en) Tensor decomposition-based heterogeneous big data core feature extraction method and system
CN114926469A (en) Semantic segmentation model training method, semantic segmentation method, storage medium and terminal
CN111028238A (en) Robot vision-based three-dimensional segmentation method and system for complex special-shaped curved surface
CN115861619A (en) Airborne LiDAR (light detection and ranging) urban point cloud semantic segmentation method and system of recursive residual double-attention kernel point convolution network
EP3859610A1 (en) Deformations basis learning
CN115147798A (en) Method, model and device for predicting travelable area and vehicle
Li et al. Laplacian mesh transformer: Dual attention and topology aware network for 3D mesh classification and segmentation
Li et al. Part-aware product design agent using deep generative network and local linear embedding
Liu et al. A multi-modality sensor system for unmanned surface vehicle
CN114187506A (en) Remote sensing image scene classification method of viewpoint-aware dynamic routing capsule network
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
Zhang An attempt to generate new bridge types from latent space of variational autoencoder
CN111768493B (en) Point cloud processing method based on distribution parameter coding
CN112365456B (en) Transformer substation equipment classification method based on three-dimensional point cloud data
Guo et al. Flight data visualization for simulation & evaluation: a general framework
Zhou et al. Dual attention network for point cloud classification and segmentation
Cai et al. Efficient aerodynamic coefficients prediction with a long sequence neural network
Qingxin et al. Target identification of different task weight under multi-interface and multi-task
Song et al. Point Cloud Classification Network Based on Graph Convolution and Fusion Attention Mechanism
Nandal et al. A Synergistic Framework Leveraging Autoencoders and Generative Adversarial Networks for the Synthesis of Computational Fluid Dynamics Results in Aerofoil Aerodynamics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination