CN116486037A - CT image slice-based rapid mesh reconstruction method for woven composite material

Info

Publication number
CN116486037A
CN116486037A (publication number) · CN202310291202.5A (application number)
Authority
CN
China
Prior art keywords
grid
composite material
image
triangle
neural network
Prior art date
Legal status
Pending
Application number
CN202310291202.5A
Other languages
Chinese (zh)
Inventor
艾士刚
田玄鑫
蒋仲禾
张衡
李秋波
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202310291202.5A
Publication of CN116486037A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/23Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16CCOMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C60/00Computational materials science, i.e. ICT specially adapted for investigating the physical or chemical properties of materials or phenomena associated with their design, synthesis, processing, characterisation or utilisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2113/00Details relating to the application field
    • G06F2113/26Composites

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Mathematical Optimization (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a rapid mesh reconstruction method for woven composite materials based on CT image slices, belonging to the field of composite materials. The method is implemented as follows: exploiting the difference in cross-sectional shape between yarns running in different directions, the original gray-scale images are segmented by deep learning to separate the warp-yarn and weft-yarn images; contour feature points are screened by iteratively comparing node-to-feature-point distances against a feature threshold, so that the reconstructed mesh retains the geometric features of the original model; point cloud extraction and mesh reconstruction are performed on CT image slices of the real structure, converting the gray-scale images into a high-fidelity fiber bundle-matrix assembly model of the woven composite; and contour edge nodes are filtered by polygon fitting, reducing the number of model nodes and increasing the mesh reconstruction speed. Compared with existing modeling methods, the woven composite model obtained by this method is closer to the real internal structure of the material and can improve the accuracy of subsequent finite element predictions for woven composites.

Description

CT image slice-based rapid mesh reconstruction method for woven composite material
Technical Field
The invention belongs to the field of composite materials and relates to a method for rapidly reconstructing a mesh model from CT images of woven composite materials.
Background
CT scanning is one of the highest-precision nondestructive testing methods and is widely applied to the detection of internal pores, cracks and other defects in machined parts, additively manufactured metals and composite materials. Its basic principle is that rays are attenuated and absorbed differently by different inspected objects: when X-rays of a given energy and intensity pass through an object, detailed internal information is obtained from the attenuation law and distribution of the rays within the part, and images with different gray-value distributions are finally produced by computer reconstruction. The image gray level corresponds to the material, internal structure, composition and density of the workpiece and can accurately reflect the shape, position, size and other information of defects. Therefore, accurately and rapidly reconstructing CT images into a mesh model can greatly improve the accuracy of finite element calculation and simulation, correct the simulation errors caused by idealized models, and make prediction results more reliable and instructive.
Current methods for mesoscale modeling of woven composites fall mainly into two categories. The first is CAD-parameter-based modeling, in which the structure is modeled geometrically at nominal dimensions; representative software is TexGen, open-source software developed by the University of Nottingham, UK, which can quickly build two- and three-dimensional models of different weave structures by setting yarn cross sections and specifying spline-based yarn paths. However, composite processing is complex, many steps lie between raw material and finished part, and the final geometry differs greatly from the preset parameters, so models obtained in this way deviate substantially from the real microstructure. The second is image processing of CT slices: the cross section of each fiber bundle is fitted with an ellipse, the axis lengths and center coordinates of each section are recorded, the cross sections are rebuilt in CAD software, solid models are obtained by sweeping along the center-point trajectory, and the mesh is then divided. Models obtained in this way also carry large errors, and the format conversion and data transfer required between different commercial software packages make the workflow cumbersome.
Disclosure of Invention
Because of the complexity of the manufacturing process and the presence of voids, the geometry of the fiber bundles and matrix of a woven composite differs greatly from the nominal geometric parameters. The main object of the invention is to provide a rapid mesh reconstruction method for woven composites based on CT image slices which, exploiting the difference in cross-sectional shape between yarns in different directions, segments the original gray-scale images by deep learning to separate the warp-yarn and weft-yarn images, and converts the gray-scale images into a high-fidelity fiber bundle-matrix assembly model of the woven composite through point cloud extraction and mesh reconstruction. Compared with conventional modeling methods, the woven composite model obtained in this way is closer to the real internal structure of the material and can improve the accuracy of subsequent finite element predictions for the woven composite.
In order to achieve the above purpose, the invention adopts the following technical scheme.
The invention discloses a rapid mesh reconstruction method for woven composite materials based on CT image slices, comprising the following steps:
Step one: Perform a CT scan test on the woven composite material to obtain gray-scale image slices of the material.
Step two: Build a convolutional neural network model with the U-net network architecture; the model adopts a U-shaped architecture and fuses low-level feature maps containing rich boundary information with high-level feature maps containing rich semantic information, further improving the segmentation accuracy of the convolutional neural network model. Perform semantic segmentation on the composite images obtained in step one with this model; after segmentation, binarized images containing only the elliptical cross sections of warp yarns and weft yarns are obtained.
The implementation method of step two is as follows:
Step 2.1: Mark the original images to construct a training dataset: mark the elliptical cross sections of warp yarns and weft yarns on the X-Y plane and Y-Z plane sections respectively, labeling the fiber yarns as one class with gray value 1 and all other regions as the other class with gray value 0.
Step 2.2: Build a deep convolutional neural network (DCNN) model based on the U-Net architecture. The model adopts a U-shaped architecture and fuses low-level feature maps containing rich boundary information with high-level feature maps containing rich semantic information, further improving segmentation accuracy and allowing pixel classes to be predicted from a small training set of images. The model comprises encoder blocks and decoder blocks: the encoder extracts image features layer by layer, and the decoder, whose structure mirrors the encoder, recovers image information layer by layer. Different convolutional neural network models are built by adjusting the number of encoding blocks and the number of filters in the convolutional layers.
A loss function is defined to measure the distance between the prediction and the marked image; the closer the prediction is to the marked image, the smaller the value of the loss function. During training of the convolutional neural network, the training parameters in the model (the convolution kernels in the convolutional layers, etc.) are updated iteratively so that the defined loss function decreases until it converges to a preset minimum, at which point training of the convolutional neural network model is judged complete. The binary cross entropy function is used as the loss function, specifically defined as
$$L = -\frac{1}{hw}\sum_{i=1}^{h}\sum_{j=1}^{w}\left[\,p_{ij}\log q_{ij} + (1-p_{ij})\log(1-q_{ij})\,\right]$$
where h and w are the height and width of the marked image, p_{ij} is the pixel value of the marked image at position (i, j), and q_{ij} is the pixel value of the prediction at position (i, j). The original images and the marked images are paired one-to-one to form the training set, which is input into the convolutional neural network model for training; the training parameters in the model are iterated continuously until the defined loss function converges to a preset value, completing the training. The segmentation performance of the convolutional neural network model is evaluated by comparing the predictions with the marked images of the validation set, using the pixel accuracy PA and the mean intersection over union MIoU as evaluation indices. The pixel accuracy PA is the ratio of the number of correctly classified pixels to the total number of pixels in the CT image, expressed as
$$PA = \frac{TP + TN}{TP + TN + FP + FN}$$
where TP and FP denote the numbers of correctly and incorrectly classified target-class pixels, respectively, and TN and FN denote the numbers of correctly and incorrectly classified background-class pixels, respectively.
IoU is the ratio of the intersection to the union of the prediction and a given class in the marked image, and MIoU, the average of the IoU over all classes, is used to evaluate the segmentation accuracy of the convolutional neural network model. The MIoU index is more sensitive to over-segmentation and under-segmentation, and the higher its value, the higher the segmentation accuracy of the convolutional neural network model. For the binary classification problem, the MIoU index is expressed as
$$MIoU = \frac{1}{2}\left(\frac{TP}{TP + FP + FN} + \frac{TN}{TN + FN + FP}\right)$$
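For illustration (not part of the patent text), the two evaluation indices can be computed from a predicted mask and a marked mask as in the following minimal sketch; the function and array names are assumptions of this illustration.

```python
import numpy as np

def confusion_counts(pred, label):
    """TP, FP, TN, FN for binary masks (fiber = 1, background = 0)."""
    tp = int(np.sum((pred == 1) & (label == 1)))
    fp = int(np.sum((pred == 1) & (label == 0)))
    tn = int(np.sum((pred == 0) & (label == 0)))
    fn = int(np.sum((pred == 0) & (label == 1)))
    return tp, fp, tn, fn

def pixel_accuracy(pred, label):
    tp, fp, tn, fn = confusion_counts(pred, label)
    return (tp + tn) / (tp + tn + fp + fn)

def mean_iou(pred, label):
    tp, fp, tn, fn = confusion_counts(pred, label)
    iou_target = tp / (tp + fp + fn)        # IoU of the fiber class
    iou_background = tn / (tn + fn + fp)    # IoU of the background class
    return 0.5 * (iou_target + iou_background)
```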
Step 2.3: Train the neural network model built in step 2.2 with the dataset marked in step 2.1, using a GPU (graphics processing unit) to accelerate the training process and RMSprop as the optimizer to update the training parameters of the convolutional neural network model.
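By way of illustration (not part of the patented method), step 2.3 might be realized in PyTorch as in the following hedged sketch; the `unet` module, data loader and learning rate are assumptions, with the module assumed to end in a Sigmoid as described in step 2.2.

```python
import torch
import torch.nn as nn

def train(unet, loader, epochs=100, lr=1e-5, device="cuda"):
    unet = unet.to(device)                          # GPU-accelerated training
    optimizer = torch.optim.RMSprop(unet.parameters(), lr=lr)
    bce = nn.BCELoss()                              # binary cross entropy on Sigmoid outputs
    for _ in range(epochs):
        for image, label in loader:                 # original image / marked image pairs
            image, label = image.to(device), label.to(device)
            loss = bce(unet(image), label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```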
Step 2.4: Use the neural network model trained in step 2.3 to perform semantic segmentation on the X-Y plane and Y-Z plane images obtained in step one, respectively, yielding binarized cross-sectional images containing only warp yarns and only weft yarns.
Step three: According to the binarized cross-sectional images of warp and weft yarns obtained in step two, perform boundary recognition and fitting on each yarn cross section in the images to obtain point cloud data as the input for mesh reconstruction.
The point cloud data acquisition of step three is implemented as follows:
Step 3.1: Define the ideal grid size d. To speed up image processing and reduce the amount of point cloud data, slices are processed at intervals rather than exhaustively; the image extraction interval I_int is defined as the integer number of slices whose spacing is closest to the grid size d.
Step 3.2: Starting from the first image, read one fiber bundle image every I_int slices and crop the selected mesh reconstruction region. Perform edge detection with the Canny operator (denoising, gradient magnitude and direction computation, non-maximum suppression and double-threshold edge determination) to identify the boundaries of all fiber bundle cross sections contained in the image and obtain closed contour node data.
Step 3.3: Taking the contour nodes obtained in step 3.2 as input in turn, identify feature points with the Douglas-Peucker algorithm so as to preserve the boundary features in the image: (1) connect the first and last points of the contour with a straight line, compute the distances from all points on the contour to this line, and find the maximum distance d_max; (2) compare d_max with a preset threshold D: if d_max < D, discard all intermediate points on the contour, take the straight segment as the approximation of this stretch of the curve, and finish processing this segment; if d_max ≥ D, keep the point corresponding to d_max as a feature point and split the curve into two contours at this point; (3) repeat (1) and (2) on each of the two resulting contours until every d_max is smaller than the threshold D, completing the extraction of the curve's feature points.
Step 3.4: Fit the contour according to the contour nodes obtained in step 3.2, reducing the amount of node data to increase mesh reconstruction efficiency, implemented as follows: (1) starting from the first node, compute the distance from each node to the previous one and accumulate it; (2) when the accumulated distance reaches the grid size d, or the current node is one of the feature points obtained in step 3.3, extract the current node as a mesh node; (3) take the current node as the new starting node, reset the accumulated distance to zero, and repeat (1) and (2) until all nodes have been traversed.
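A minimal sketch of this accumulation rule follows; `feature_points` is assumed to be the output of step 3.3, and the names are illustrative only.

```python
import numpy as np

def fit_contour(nodes, feature_points, d):
    """Decimate contour nodes: keep a node when the accumulated arc
    length reaches the grid size d or the node is a feature point."""
    features = {tuple(p) for p in feature_points}
    kept, accumulated = [nodes[0]], 0.0
    for prev, cur in zip(nodes, nodes[1:]):
        accumulated += float(np.linalg.norm(np.subtract(cur, prev)))
        if accumulated >= d or tuple(cur) in features:
            kept.append(cur)
            accumulated = 0.0          # restart accumulation from the kept node
    return kept
```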
Step 3.5: Compute the centers of the fitted contours obtained in step 3.4. Taking the contour order identified in the first image as the reference, compute each contour center in the current image and match it one-to-one to the fitted contour data of the previous image by the shortest distance, thereby obtaining the point cloud data of each yarn.
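The shortest-distance matching can be sketched as follows (the array shapes and names are assumptions; each contour inherits the yarn index of the nearest center in the previous slice):

```python
import numpy as np

def match_contours(prev_centers, contours):
    """prev_centers: (n, 2) centers from the previous slice;
    contours: list of (m_i, 2) fitted contour node arrays."""
    centers = np.array([np.mean(c, axis=0) for c in contours])
    yarn_ids = [int(np.argmin(np.linalg.norm(prev_centers - c, axis=1)))
                for c in centers]
    return centers, yarn_ids    # centers feed the matching for the next slice
```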
Step four: Taking the point cloud data V obtained in step three as mesh nodes, establish the connectivity between nodes by Delaunay triangulation, and grow the triangular network to reconstruct the point cloud based on the advancing-front principle, obtaining the fiber-bundle surface triangular mesh.
The implementation method of step four is as follows:
Step 4.1: Input the point cloud data of a single fiber bundle obtained in step three as mesh nodes and perform Delaunay triangulation on all nodes, generating tetrahedra that satisfy the empty circumsphere rule, i.e., no other node lies inside the circumscribed sphere of any tetrahedron. Extract the four faces of all tetrahedra, delete the duplicated triangles, and merge the remainder into the initial surface triangle set F.
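As an illustration of this step (not part of the patent text), SciPy's Delaunay tetrahedralization enforces the empty-circumsphere rule, and collecting each tetrahedron's four faces with duplicates removed yields the candidate set F:

```python
from scipy.spatial import Delaunay

def candidate_faces(points):
    """points: (n, 3) array of point cloud nodes; returns the set F of
    candidate surface triangles as sorted vertex-index triples."""
    tets = Delaunay(points).simplices             # (n_tet, 4) vertex indices
    faces = set()
    for tet in tets:
        for i, j, k in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
            faces.add(tuple(sorted((tet[i], tet[j], tet[k]))))
    return faces                                  # duplicates removed by the set
```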
Step 4.2: Establish a seed triangle. The mesh reconstruction method of the invention needs a suitable seed triangle as the starting unit of the advance. Connect the adjacent nodes of the first image in turn, every pair of adjacent nodes forming a leading edge. Search the next contour for the node closest to the midpoint of the leading edge, connect the leading edge to this node, and store the result as the seed triangle in the mesh set T; then find the free edges of the mesh in T as the initial leading edges and store them in the leading-edge set A. A free edge is an edge used by only one triangle.
Step 4.3: Establish a mesh evaluation function f for prioritizing candidate triangles. f combines the mesh quality, the maximum circumscribed-circle radius and the angle between the candidate triangle and its adjacent triangle, defined as
$$f = k_1\alpha + k_2\beta + k_3\gamma$$
where k_1, k_2, k_3 are user-defined weights, chosen to suit the point cloud reconstruction requirements of different geometric features and subject to the condition k_1 + k_2 + k_3 = 1, and α, β, γ are respectively the mesh quality function, the maximum circumscribed-circle radius function and the angle function with the adjacent triangle, each taking values in [0, 1]. In their definitions, a, b and c are the lengths of the three sides of the candidate triangle; R_i is the circumscribed-circle radius of the i-th candidate triangle and R is the set of all candidate triangles' circumscribed-circle radii; θ is the angle between the current candidate triangle and the triangle in the mesh to which it is connected.
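The closed forms of α, β and γ are not reproduced in the text as available here, so the sketch below substitutes common placeholder choices consistent with the stated [0, 1] ranges: a normalized radius-ratio quality for α, a normalized circumradius for β, and a normalized neighbor angle for γ. Only the weighted sum f = k₁α + k₂β + k₃γ is taken from the text; everything else is an assumption.

```python
import math

def evaluate(a, b, c, R_i, R_all, theta, k=(1/3, 1/3, 1/3)):
    """Score one candidate triangle; all three sub-functions are placeholders."""
    # Heron's formula: 16*A^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c)
    area = 0.25 * math.sqrt(max((a+b+c)*(-a+b+c)*(a-b+c)*(a+b-c), 0.0))
    alpha = 4.0 * math.sqrt(3.0) * area / (a*a + b*b + c*c)  # equals 1 for an equilateral triangle
    beta = 1.0 - R_i / max(R_all)          # smaller circumradius scores higher
    gamma = 1.0 - theta / math.pi          # flatter join with the neighbor scores higher
    k1, k2, k3 = k                         # weights must satisfy k1 + k2 + k3 = 1
    return k1 * alpha + k2 * beta + k3 * gamma
```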
Step 4.4: While the leading-edge set A is not empty, take its first leading edge for reconstruction: find all triangles in the initial set F that contain this edge as candidate triangles, and compute the f value of every candidate with the evaluation function established in step 4.3. Sort the candidates by f in descending order and check their validity in turn; a valid unit must satisfy the following conditions: the vertical coordinates of its three nodes are not all equal (except at the boundary), which prevents the generation of in-plane meshes; its newly generated edges are not already shared by two triangles; its angle with the adjacent mesh does not exceed a set threshold; and its newly generated point is not connected to any leading edge.
Once a legal triangle is found, it is added to the mesh set T and the current leading edge is deleted from the leading-edge set; each newly generated edge is then checked against the triangles in T, and if it does not yet belong to two of them it is appended to the tail of the leading-edge set A. If all candidate triangles are illegal, the current leading edge is deleted anyway. Step 4.4 is repeated until the leading-edge set is empty, ending the mesh reconstruction process.
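A compressed sketch of this loop is given below; `f_value` and `is_legal` stand in for the evaluation function of step 4.3 and the four validity conditions above, and are assumptions of this illustration rather than the patent's implementation.

```python
from collections import deque

def triangle_edges(tri):
    i, j, k = tri
    return [tuple(sorted(e)) for e in ((i, j), (j, k), (i, k))]

def shared_by_two(edge, T):
    return sum(set(edge) <= set(tri) for tri in T) >= 2

def advance_front(A, F, T, f_value, is_legal):
    """A: leading-edge queue; F: Delaunay candidate set; T: accepted mesh."""
    A = deque(A)
    while A:
        edge = A.popleft()                   # take the first leading edge
        candidates = [tri for tri in F if set(edge) <= set(tri)]
        for tri in sorted(candidates, key=f_value, reverse=True):
            if is_legal(tri, T):
                T.add(tri)
                for new_edge in triangle_edges(tri):
                    if new_edge != edge and not shared_by_two(new_edge, T):
                        A.append(new_edge)   # append to the tail of the front
                break                        # edge resolved; with no legal candidate
    return T                                 # the edge is simply dropped
```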
Step five: Since mesh reconstruction by the advancing-front method inevitably produces holes and abnormal meshes, generate a closed surface mesh through a post-processing stage, providing the input for volume mesh division.
The fifth implementation method comprises the following steps:
step 5.1: hole identification and filling: the edge which is not shared by the two triangles is the free edge, the triangle connected with the free edge is deleted to erode the hole, the free edge connected with the free edge is used as the front edge, and the step 4.4 is repeated to reconstruct the hole. Hole detection and corrosion are carried out again after reconstruction is completed, if holes still exist, hole filling is carried out through a Liepa algorithm, and the implementation process is as follows: (1) and calculating the included angle between the adjacent edges forming the hole. (2) And sequencing the hole edges by angles, and finding the position of the minimum angle of the vertex. (3) And connecting the two sides corresponding to the minimum angle to generate a current side, adding the newly generated triangle into the grid set T, and recording the current triangle. (4) Another point of the edge whose end point is the start point of the current edge is taken as the candidate point of the current edge. (5) And (3) forming a triangle by the current edge and the candidate points, calculating a dihedral angle between the triangle and the current triangle, calculating the area of the triangle, selecting the candidate points with large dihedral angle (selecting the candidate points with large composition area under the condition that the dihedral angle is equal), updating the current edge and the current triangle, continuing the line making step (4), and deleting the previous current edge from the hole boundary. And when the size of the hole boundary is less than or equal to 2, the cycle is exited.
Step 5.2: Optimize and smooth the mesh filled in step 5.1 with the Laplacian operator. The essence of Laplacian optimization is a process of encoding and decoding the local detail features of the mesh model. Encoding converts the Euclidean space coordinates of the mesh vertices into Laplacian coordinates; since the Laplacian coordinates contain the local detail features of the mesh, a Laplacian mesh deformation algorithm preserves the local detail of the mesh model well. Decoding recovers the Euclidean space coordinates from the differential coordinates and is essentially the solution of a linear system. The Laplacian coordinate of vertex v_i is defined as
$$\delta_i = v_i - \sum_{j \in N(i)} \omega_{ij} v_j$$
where N(i) denotes the neighborhood of v_i, v_j is any adjacent vertex, and ω_{ij} is the weight coefficient. The uniform weight ω_{uij} and the cotangent weight ω_{cij} are generally used, defined as
$$\omega_{uij} = 1, \qquad \omega_{cij} = \cot\alpha_{ij} + \cot\beta_{ij}$$
where α_{ij} and β_{ij} are the angles opposite the edge formed by v_i and v_j in the two triangles to which it belongs.
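A sketch of the smoothing step with uniform weights (ω_uij = 1): each vertex is moved against its Laplacian coordinate, damped by a factor `lam` that, like the other names here, is an assumption of this illustration.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """vertices: (n, 3) array; neighbors: list where neighbors[i] = N(i)."""
    v = np.asarray(vertices, float).copy()
    for _ in range(iterations):
        new_v = v.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                delta = v[i] - v[nbrs].mean(axis=0)   # Laplacian coordinate of v_i
                new_v[i] = v[i] - lam * delta         # damped move toward the ring average
        v = new_v
    return v
```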
Step six: Perform surface reconstruction on the warp-yarn and weft-yarn binarized images obtained in step two through steps three to five respectively, then merge the results to generate the closed surface triangular mesh model of the woven composite fiber bundles.
Step seven: Using mesh division software with the fiber-bundle surface triangular mesh obtained in step six as input, insert new nodes into the mesh and generate the tetrahedral mesh of the woven composite fiber bundles by advancing-front propulsion.
Preferably, the mesh division software is the Gmsh Python API or HyperMesh.
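By way of example, volume meshing of a closed surface mesh through the Gmsh Python API might look like the following sketch; the file names are assumptions, and the frontal 3-D option is chosen here to mirror the advancing-front generation described in step seven.

```python
import gmsh

gmsh.initialize()
gmsh.option.setNumber("Mesh.Algorithm3D", 4)   # 4 = frontal (advancing-front) algorithm
gmsh.merge("bundle_surface.stl")               # closed fiber-bundle surface triangles
gmsh.model.mesh.createTopology()               # build geometry from the discrete mesh
surfaces = [tag for (dim, tag) in gmsh.model.getEntities(2)]
loop = gmsh.model.geo.addSurfaceLoop(surfaces)
gmsh.model.geo.addVolume([loop])
gmsh.model.geo.synchronize()
gmsh.model.mesh.generate(3)                    # insert nodes and tetrahedralize
gmsh.write("bundle_volume.msh")
gmsh.finalize()
```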
Step eight: In the finite element calculation of composites, part of the material is generally extracted for mechanical property and failure analysis. Establish the outer surface of a matrix that completely surrounds the fiber-bundle mesh, triangulate it with the mesh division software of step seven, and input it as the outer boundary, together with the fiber-bundle surface mesh as the inner boundary, into the mesh division software; insert new nodes into the mesh and obtain the tetrahedral mesh of the woven composite matrix by advancing-front propulsion.
Step nine: Establish the contact relation between the fiber-bundle tetrahedral mesh obtained in step seven and the coincident surface region of the matrix model obtained in step eight through tie constraints, obtaining the woven composite fiber bundle-matrix assembly model. This assembly model captures the cross-section variation and trajectory of the fiber bundles inside the woven composite and is therefore closer to the real internal structure of the composite.
The method further comprises step ten: input the fiber bundle-matrix assembly model obtained in step nine into finite element calculation software, assign material properties according to the composite application, set the boundary conditions and loads, and carry out the calculation; the fiber bundle-matrix assembly model can improve the finite element prediction accuracy of the woven composite.
The finite element prediction of the woven composite includes strength analysis, failure analysis, fracture analysis and impact damage analysis of the woven composite.
The application fields of the composite include aerospace, e.g. aircraft wing bodies, satellite antennas and supporting structures, launch-vehicle shells and engine casings, and the automotive industry, e.g. car bodies, load-bearing members, drive shafts, engine frames and their component parts.
When the composite is applied in the aerospace field, the strength analysis of the woven composite includes tensile strength, shear strength and compressive strength.
Compared with conventional modeling methods, the method converts gray-scale images into a high-fidelity woven composite fiber bundle-matrix assembly model; the resulting mesh model is closer to the real internal structure of the material, can improve the accuracy of subsequent finite element predictions of the woven composite, and reduces the error between prediction and experiment.
The beneficial effects are as follows:
1. The CT image slice-based rapid mesh reconstruction method for woven composites disclosed by the invention performs point cloud extraction and mesh reconstruction on CT image slices of the real structure; the resulting woven composite fiber bundle-matrix mesh model is closer to the real internal geometry of the material, which can improve the accuracy of subsequent finite element predictions of the woven composite and reduce the error between prediction and experiment.
2. In the disclosed method, after edge detection of the segmented warp and weft binary images with the Canny operator, the Douglas-Peucker algorithm is introduced to screen contour feature points by iteratively comparing node-to-feature-point distances against a feature threshold, so that the reconstructed mesh retains the geometric features of the original model with high fidelity. Contour edge nodes are then filtered by polygon fitting, greatly reducing the number of model nodes and increasing the mesh reconstruction speed.
3. In the disclosed method, candidate triangular meshes are generated by Delaunay triangulation, reducing the time spent on node selection and unit checking in the advancing-front algorithm and improving surface reconstruction efficiency. The established mesh evaluation function introduces three weighted terms, triangle quality, circumscribed-circle radius and the angle between the candidate triangle and its adjacent triangle, and the quality and smoothness of the generated mesh are controlled by modifying the weights. Triangles connected to free edges of the mesh are deleted to erode holes, a second mesh reconstruction is performed with the hole edges as the front, and holes still present after reconstruction are filled by the Liepa algorithm, yielding a closed surface mesh that provides the input for volume mesh division.
4. In the disclosed method, mesh models of the fiber bundles and the matrix are generated separately from the fiber-bundle surface mesh, so they can be calculated and analyzed independently or assembled in finite element software to carry out strength calculation and failure analysis of the woven composite fiber bundle-matrix assembly model.
5. In the disclosed method, a dataset is produced by marking the elliptical cross sections of warp and weft yarns in the CT image slices, and a convolutional neural network model with the U-net architecture is trained with the binary cross entropy function as the loss function and the pixel accuracy PA and mean intersection over union MIoU as evaluation indices. The trained neural network model can quickly complete automatic semantic segmentation of composite images, avoiding the inconvenience of manual segmentation.
Drawings
FIG. 1 is a flow chart of the CT image slice-based rapid mesh reconstruction method for woven composite materials disclosed by the invention;
fig. 2 shows images before and after semantic segmentation of CT images by the convolutional neural network model based on the U-net network architecture, in which: fig. 2 (a) is a gray image of the woven composite material in the X-Y direction, fig. 2 (b) is the warp-yarn binarized image after U-Net segmentation, fig. 2 (c) is a gray image of the woven composite material in the Y-Z direction, and fig. 2 (d) is the weft-yarn binarized image after U-Net segmentation;
FIG. 3 is a schematic view of point cloud extraction for warp and weft binary images based on the Douglas-Peucker and polygon fitting algorithm, wherein: fig. 3 (a) is a binarized image of warp yarn, fig. 3 (b) is a binarized image of reconstruction area, fig. 3 (c) is a schematic view of fitted contour nodes, and fig. 3 (d) is a point cloud model of extracted single fiber bundles;
FIG. 4 is a schematic diagram of a woven composite fiber bundle-matrix assembly grid model based on point cloud data reconstruction of composite fiber bundles, wherein: fig. 4 (a) is a point cloud model of a single fiber bundle, fig. 4 (b) is a mesh model of a reconstructed single fiber bundle, fig. 4 (c) is a mesh model of all fiber bundles in the X-Y direction, fig. 4 (d) is a mesh model of a woven composite fiber bundle, fig. 4 (e) is a mesh model of a woven composite matrix, and fig. 4 (f) is a woven composite fiber bundle-matrix assembly model.
Detailed Description
For a better description of the objects and advantages of the present invention, the following description will be given with reference to the accompanying drawings and examples.
Example 1:
As shown in fig. 1, the CT image slice-based rapid mesh reconstruction method for woven composites disclosed in this embodiment comprises the following implementation steps:
Step one: A CT scan test was performed on a plain-weave C/SiC composite to obtain gray-scale image slices of the material, as shown in fig. 2 (a), comprising 989 images in the X-Y direction and 816 images in the Y-Z direction, 1805 images in total.
Step two: A convolutional neural network model with the U-net network architecture was constructed and used for semantic segmentation of the C/SiC composite CT gray-scale image slices obtained in step one, yielding binarized images containing only the elliptical cross sections of warp and weft yarns.
The specific implementation method of step two is as follows:
Step 2.1: The original images were marked with polygons in the Matlab Image Labeler to construct a training dataset. 30 CT gray-scale image slices in the X-Y plane were selected to mark the warp-yarn cross sections, and 41 Y-Z direction slices to mark the weft yarns. The elliptical yarn cross sections were marked as one class with gray value 1, and the other regions as the other class with gray value 0. Since the convolutional neural network model requires images of identical size during training, the selected original images and their marked images were cropped to 896 × 512, giving 71 original images and 71 corresponding marked images as the training set.
Step 2.2: A deep convolutional neural network (DCNN) model based on the U-Net architecture was constructed. The network comprises 5 encoding blocks: the first consists of an input layer and two convolutional layers with 64 filters; the second of a max-pooling layer and two convolutional layers with 128 filters; the third to fifth likewise each contain a max-pooling layer, with 256, 512 and 1024 convolutional filters respectively. The first decoding block consists of one transposed convolutional layer (1024 filters), one concatenation layer and two convolutional layers (512 filters). The remaining three decoding blocks are similar to the first, but the number of convolutional filters is halved each time, from 512 down to 64. The fourth decoding block additionally contains an output layer that is compared with the marked image. Except for the output layer, each convolutional layer is followed by a ReLU layer and a Batch-Normalization layer; all convolution kernels are 3 × 3 in size, and the output layer is activated with a Sigmoid function. A loss function was defined to measure the distance between the prediction and the marked image; the binary cross entropy function was used, specifically defined as
$$L = -\frac{1}{hw}\sum_{i=1}^{h}\sum_{j=1}^{w}\left[\,p_{ij}\log q_{ij} + (1-p_{ij})\log(1-q_{ij})\,\right]$$
where p_{ij} is the pixel value of the marked image at position (i, j) and q_{ij} is the pixel value of the prediction at position (i, j). Pixel accuracy (PA) and mean intersection over union (MIoU) were used as evaluation indices, comparing the predictions with the marked images of the validation set to evaluate the segmentation performance of the convolutional neural network model. Pixel accuracy (PA) is the ratio of the number of correctly classified pixels to the total number of pixels in a CT image; for the binary classification problem it is expressed as
$$PA = \frac{TP + TN}{TP + TN + FP + FN}$$
where TP and FP denote the numbers of correctly and incorrectly classified target-class pixels, respectively, and TN and FN denote the numbers of correctly and incorrectly classified background-class pixels, respectively.
IoU is the ratio of the intersection to the union of the prediction and a given class in the marked image, and MIoU, the average of the IoU over all classes, is used to evaluate the segmentation accuracy of the convolutional neural network model. The MIoU index is more sensitive to over-segmentation and under-segmentation, and the higher its value, the higher the segmentation accuracy of the convolutional neural network model. For the binary classification problem, the MIoU index is expressed as
$$MIoU = \frac{1}{2}\left(\frac{TP}{TP + FP + FN} + \frac{TN}{TN + FN + FP}\right)$$
Step 2.3: The neural network model built in step 2.2 was trained with the dataset marked in step 2.1, implemented with Pytorch 1.7.0 and Python 3.7; the training process was GPU-accelerated and the convolutional neural network model was trained for 100 epochs. RMSprop was used as the optimizer with an initial learning rate of 0.00001. During training the loss function gradually decreased and converged below 0.05, the accuracy exceeded 0.98, and the mean intersection over union (MIoU) gradually rose above 0.96.
Step 2.4: Using the U-Net neural network model trained in step 2.3, the 989 X-Y plane images (fig. 2 (a)) and 816 Y-Z plane images (fig. 2 (c)) obtained in step one were semantically segmented, yielding binarized cross-sectional images containing only warp yarns and only weft yarns, as shown in fig. 2 (b) and fig. 2 (d).
Step three: Part of the images obtained in step two were selected for reconstruction, comprising 723 X-Y direction warp-yarn and 571 Y-Z direction weft-yarn binarized cross-sectional images; boundary recognition and fitting were performed on each yarn cross section in the images to obtain point cloud data as the input for mesh reconstruction. The X-Y direction contains 9 warp fiber bundles and the Y-Z direction contains 13 weft fiber bundles.
The point cloud data acquisition of step three was implemented as follows:
Step 3.1: The cross section of an individual fiber bundle is approximately 150 pixels wide by 20 pixels high. The ideal mesh size d was set to 15 and the image extraction interval to 13.
Step 3.2: Starting from the first image, one fiber bundle image was read every 13 images and the selected mesh reconstruction region was cropped, as shown in fig. 2 (b). The boundaries of all fiber bundle cross sections contained in the image were identified with the Canny operator to obtain closed contour node data; a single contour holds about 300 nodes, so the point cloud must be extracted to reduce this large amount of data before mesh reconstruction.
Step 3.3: Taking the contour nodes obtained in step 3.2 as input in turn, feature points were identified with the Douglas-Peucker algorithm to preserve the boundary features in the image, with the feature threshold set to 6: (1) connect the first and last points of the contour with a straight line, compute the distances from all points on the contour to this line, and find the maximum distance d_max; (2) compare d_max with the preset threshold of 6: if d_max < 6, discard all intermediate points on the contour, take the straight segment as the approximation of this stretch of the curve, and finish processing this segment; if d_max ≥ 6, keep the point corresponding to d_max as a feature point and split the curve into two contours at this point; (3) repeat (1) and (2) on each of the two resulting contours until every d_max is smaller than the threshold 6, completing the feature point extraction of the curve. The contour feature points of the first slice in the X-Y direction are the green square nodes in fig. 3 (c); for a fiber bundle cross section the main feature points are the end points along the long-axis direction.
Step 3.4: The contour was fitted according to the contour nodes obtained in step 3.2 to reduce the amount of node data and increase mesh reconstruction efficiency, implemented as follows: (1) starting from the first node, compute the distance from each node to the previous one and accumulate it; (2) when the accumulated distance reaches the grid size 15, or the current node is one of the feature points obtained in step 3.3, extract the current node as a mesh node; (3) take the current node as the new starting node, reset the accumulated distance to zero, and repeat (1) and (2) until all nodes have been traversed. The fitted points other than feature points in the first X-Y slice are the red circular nodes in fig. 3 (c).
Step 3.5: The centers of the fitted contours obtained in step 3.4 were computed. Taking the contour order identified in the first image as the reference, each contour center in the current image was computed and matched one-to-one to the fitted contour data of the previous image by the shortest distance, obtaining the point cloud data of each yarn.
Step four: Taking the point cloud data obtained in step three as mesh nodes, the connectivity between nodes was established by Delaunay triangulation, and the triangular network was grown to reconstruct the point cloud based on the advancing-front principle, obtaining the fiber-bundle surface triangular mesh.
The implementation method of the fourth step is as follows:
step 4.1: and D, inputting the point cloud data of the single fiber bundle obtained in the step three as grid nodes, performing Delaunay triangulation on all the nodes, and generating a tetrahedron conforming to the air-outside ball receiving rule, wherein other nodes are not contained in the tetrahedron. Four faces of all tetrahedral grids are extracted, and the repeated triangular grids are deleted and then combined to form an initial surface triangular grid set F.
Step 4.2: a seed triangle is established as a starting unit for the advancement. Adjacent nodes of the first image are sequentially connected, and every two adjacent nodes are used as leading edges. Searching a node closest to the midpoint of the leading edge in the next contour, connecting the leading edge with the node, storing the node as a seed triangle in a grid set T, searching the free edge of the grid in the grid set T as an initial leading edge, and storing the node in a leading edge set A. The free edge is the edge used by only one triangle.
Step 4.3: establishing a grid evaluation function f for prioritizing candidate triangles, wherein f is defined as grid quality, maximum circumcircle radius and angle function of the candidate triangles and the triangles adjacent to the candidate triangles, and is defined as:
f=k 1 α+k 2 β+k 3 γ
k in 1 ,k 2 ,k 3 Is a custom weight, where k 1 ,k 2 ,k 3 The values of (a) are 0.05,0.05 and 0.9, and alpha, beta and gamma are respectively grid quality functions, maximum circumcircle radius functions and adjacent triangle angle functions, and are specifically defined as:
wherein a, b and c are lengths of three sides of the candidate triangle.
Wherein R is i And R is a set of all candidate triangle circumcircle radii.
Where θ is the angle of the current candidate triangle with the triangle in the mesh to which it is connected.
Step 4.4: when the current edge set A is not empty, the first front edge is taken for reconstruction, all triangles which the current edge set A belongs to in the grid set T are found to be used as candidate triangles, the triangles T which the current edge set A belongs to in the grid set F are calculated according to the evaluation function established in the step 4.3, and the F values of all the candidate triangles are calculated. Sequencing f from big to small, and sequentially judging the validity of the units, wherein the units are required to meet the following conditions: the vertical coordinates of the three nodes are not equal except the boundary, and the generation of an in-plane grid is avoided through the unequal vertical coordinates of the three nodes; the newly generated edges are not already shared by the two triangles; an angle with the adjacent grid is not greater than 20 degrees; the newly generated points are not connected to any leading edge.
After finding legal triangles, adding the legal triangles into the grid set T, deleting the current front edge from the grid set F, judging whether the newly generated edges belong to the triangles in the T, and if not, adding the newly generated edges to the tail of the grid set F of the front edge set. If all candidate triangles are illegal, the current leading edge is still deleted. Repeating the operation step 4.4 until all the front edges are empty, and ending the grid reconstruction process.
Step five: Since mesh reconstruction by the advancing-front method inevitably produces holes and abnormal meshes, a closed surface mesh was generated through a post-processing stage, providing the input for volume mesh division.
Step five was implemented as follows:
Step 5.1: Hole identification and filling: an edge not shared by two triangles is a free edge. The triangles connected to free edges were deleted to erode the holes, the free edges thus exposed were taken as leading edges, and step 4.4 was repeated to reconstruct the holes. After reconstruction, hole detection and erosion were performed again; remaining holes were filled with the Liepa algorithm, implemented as follows: (1) compute the included angles between the adjacent edges forming the hole; (2) sort the hole edges by angle and locate the vertex with the minimum angle; (3) connect the two edges corresponding to the minimum angle to generate the current edge, add the newly generated triangle to the mesh set T, and record it as the current triangle; (4) take the other endpoint of the edge whose endpoint is the start of the current edge as the candidate point of the current edge; (5) form a triangle from the current edge and each candidate point, compute its dihedral angle with the current triangle and its area, select the candidate point with the larger dihedral angle (and, when dihedral angles are equal, the larger area), update the current edge and the current triangle, return to step (4), and delete the previous current edge from the hole boundary. The loop exits when the hole boundary contains no more than 2 edges.
Step 5.2: The mesh filled in step 5.1 was optimized and smoothed with the Laplacian operator. The essence of Laplacian optimization is a process of encoding and decoding the local detail features of the mesh model. Encoding converts the Euclidean space coordinates of the mesh vertices into Laplacian coordinates; since the Laplacian coordinates contain the local detail features of the mesh, a Laplacian mesh deformation algorithm preserves the local detail of the mesh model well. Decoding recovers the Euclidean space coordinates from the differential coordinates and is essentially the solution of a linear system. The Laplacian coordinate of vertex v_i is defined as
$$\delta_i = v_i - \sum_{j \in N(i)} \omega_{ij} v_j$$
where N(i) denotes the neighborhood of v_i, v_j is any adjacent vertex, and ω_{ij} is the weight coefficient. The uniform weight ω_{uij} and the cotangent weight ω_{cij} are generally used, defined as
$$\omega_{uij} = 1, \qquad \omega_{cij} = \cot\alpha_{ij} + \cot\beta_{ij}$$
where α_{ij} and β_{ij} are the angles opposite the edge formed by v_i and v_j in the two triangles to which it belongs.
Step six: After the binarized images of each single fiber bundle passed through steps three to five, a closed surface mesh model was generated, as shown in fig. 4 (b); combining the node coordinates formed the mesh models of the warp and weft yarns, as shown in fig. 4 (c), and combining the warp and weft yarns generated the surface triangular mesh model of the woven composite fiber bundles, as shown in fig. 4 (d).
Step seven: A volume meshing program taking a closed mesh as input was written in Python 3.7 via the Gmsh Python API; with the fiber-bundle surface triangular mesh obtained in step six as input, new nodes were inserted into the mesh and the tetrahedral mesh of the woven composite fiber bundles was generated by advancing-front propulsion.
Step eight: A matrix outer-surface mesh completely surrounding the fiber-bundle mesh was established as the outer boundary, as shown in fig. 4 (e); together with the fiber-bundle surface mesh shown in fig. 4 (d) as the inner boundary, it was input into the volume meshing program written in step seven, obtaining the tetrahedral mesh of the woven composite matrix.
Step nine: The fiber-bundle tetrahedral mesh obtained in step seven and the matrix model obtained in step eight were imported into Abaqus software, and the contact relation in the contact region was established through Tie constraints, obtaining the woven composite fiber bundle-matrix assembly model. The fiber bundles and the matrix were assigned C and SiC material properties respectively, the boundary conditions and loads were set, and the calculation was carried out; this model can improve the finite element prediction accuracy of the woven composite.
While the foregoing describes embodiments of the present invention, further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow.

Claims (9)

1. A rapid mesh reconstruction method for woven composite materials based on CT image slices, characterized by comprising the following steps:
step one: performing a CT scan test on the woven composite material to obtain gray-scale image slices of the material;
step two: building a convolutional neural network model with the U-net network architecture, the model adopting a U-shaped architecture that fuses low-level feature maps containing rich boundary information with high-level feature maps containing rich semantic information so as to further improve the segmentation accuracy of the convolutional neural network model; performing semantic segmentation on the composite images obtained in step one with the convolutional neural network model, obtaining after segmentation binarized images containing only the elliptical cross sections of warp yarns and weft yarns;
step three: according to the binarized cross-sectional images of warp and weft yarns obtained in step two, performing boundary recognition and fitting on each yarn cross section in the images to obtain point cloud data as the input for mesh reconstruction;
step four: taking the point cloud data obtained in step three as mesh nodes, establishing the connectivity between nodes by Delaunay triangulation, and growing the triangular network to reconstruct the point cloud based on the advancing-front principle, obtaining the fiber-bundle surface triangular mesh;
step five: since mesh reconstruction by the advancing-front method inevitably produces holes and abnormal meshes, generating a closed surface mesh through a post-processing stage, providing the input for volume mesh division;
step six: performing surface reconstruction on the warp-yarn and weft-yarn binarized images obtained in step two through steps three to five respectively, then merging the results to generate the closed surface triangular mesh model of the woven composite fiber bundles;
step seven: using mesh division software with the fiber-bundle surface triangular mesh obtained in step six as input, inserting new nodes into the mesh and generating the tetrahedral mesh of the woven composite fiber bundles by advancing-front propulsion;
step eight: in the finite element calculation of composites, part of the material being generally extracted for mechanical property and failure analysis, establishing the outer surface of a matrix completely surrounding the fiber-bundle mesh, triangulating it with mesh division software as the outer boundary, inputting it together with the fiber-bundle surface mesh as the inner boundary into the mesh division software, inserting new nodes into the mesh, and obtaining the tetrahedral mesh of the woven composite matrix by advancing-front propulsion;
step nine: establishing the contact relation between the fiber-bundle tetrahedral mesh obtained in step seven and the coincident surface region of the matrix model obtained in step eight through tie constraints, obtaining the woven composite fiber bundle-matrix assembly model; the fiber bundle-matrix assembly model captures the cross-section variation and trajectory of the fiber bundles in the woven composite and is therefore closer to the real internal structure of the composite.
2. The rapid mesh reconstruction method of a woven composite material based on CT image slices as set forth in claim 1, wherein: in step ten, the fiber bundle-matrix assembly model obtained in step nine is input into finite element calculation software, material properties are assigned in accordance with the composite material application, boundary conditions and loads are set, and the calculation is carried out; the fiber bundle-matrix assembly model improves the finite element prediction accuracy for the woven composite material.
3. A method for rapid mesh reconstruction of a woven composite material based on CT image slices as defined in claim 2, wherein: the composite material application fields comprise the aerospace field, such as aircraft wing bodies, satellite antennas and supporting structures, launch vehicle casings, and engine casings; and the automotive industry field, such as car bodies, load-bearing members, drive shafts, engine frames, and internal components;
when the composite material is applied in the aerospace field, the strength analysis of the woven composite material comprises tensile strength, shear strength, compressive strength and the like.
4. A method for rapid mesh reconstruction of a woven composite material based on CT image slices as claimed in claim 1, 2 or 3, wherein step two is implemented as follows:
step 2.1: the original images are labeled to build the training data set; the elliptical cross sections of the warp and weft yarns are labeled on the X-Y and Y-Z planes respectively, with fiber yarns labeled as one class with gray value 1 and all other regions labeled as the other class with gray value 0;
step 2.2: a deep convolutional neural network (DCNN) model based on the U-Net framework is constructed; the U-shaped architecture fuses low-level feature maps rich in boundary information with high-level feature maps rich in semantic information to further improve segmentation accuracy, and pixel-level classification can be achieved with only a small training set of images; the model comprises encoder blocks and decoder blocks: the encoder extracts image features layer by layer, the decoder recovers image information layer by layer, and the decoder structure mirrors that of the encoder; different convolutional neural network models are built by adjusting the number of encoder blocks and the number of filters in the convolutional layers;
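As an illustration of step 2.2, the sketch below builds a small U-Net-style encoder/decoder. PyTorch, the layer count and the filter numbers (16, 32, 64) are assumptions for illustration; the patent only fixes the U-shaped architecture with feature fusion through skip connections:

```python
# Minimal U-Net-style model (a sketch, not the patent's exact network).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, filters=(16, 32, 64)):
        super().__init__()
        f1, f2, f3 = filters
        self.enc1, self.enc2 = conv_block(1, f1), conv_block(f1, f2)
        self.bottleneck = conv_block(f2, f3)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(f3, f2, 2, stride=2)
        self.dec2 = conv_block(f2 + f2, f2)   # skip connection doubles channels
        self.up1 = nn.ConvTranspose2d(f2, f1, 2, stride=2)
        self.dec1 = conv_block(f1 + f1, f1)
        self.head = nn.Conv2d(f1, 1, 1)       # per-pixel logit for the binary mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # concatenation fuses high-level (upsampled) and low-level (skip) features
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```

Adding encoder/decoder stages or changing the filter tuple yields the "different convolutional neural network models" mentioned above.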
a loss function is defined to measure the distance between the prediction and the labeled image: the closer the prediction is to the labeled image, the smaller the loss value; during training, the trainable parameters of the convolutional neural network model are updated iteratively so that the defined loss function decreases until it converges to a preset minimum, at which point the model is judged to be trained; the binary cross-entropy function is used as the loss function, specifically defined as:

L = -(1/(h·w)) Σ_{i=1..h} Σ_{j=1..w} [ p(i,j)·log q(i,j) + (1 - p(i,j))·log(1 - q(i,j)) ]

where h and w are the height and width of the labeled image, p(i,j) is the pixel value of the labeled image at position (i,j), and q(i,j) is the pixel value of the prediction at position (i,j); the original images and the labeled images are paired one-to-one to form the training set, which is input to the convolutional neural network model for training; the trainable parameters are iterated until the defined loss function converges to a preset value, completing training; segmentation performance is evaluated by comparing predictions with the labeled images of the validation set, using the pixel accuracy PA and the mean intersection over union MIoU as evaluation indexes; the pixel accuracy PA is the ratio of correctly classified pixels to the total number of pixels in the CT image, expressed as:

PA = (TP + TN) / (TP + TN + FP + FN)
where TP and FP respectively denote the numbers of correctly and incorrectly classified target-class pixels, and TN and FN respectively denote the numbers of correctly and incorrectly classified background-class pixels;
IoU is the ratio of the intersection of the prediction with a given class in the labeled image to their union; MIoU, the mean of the IoU over all classes, is used to evaluate the segmentation accuracy of the convolutional neural network model; the MIoU index is more sensitive to over-segmentation and under-segmentation, and a higher value indicates higher segmentation accuracy; for the binary classification used here, the MIoU index is expressed as:

MIoU = (1/2)·[ TP/(TP + FP + FN) + TN/(TN + FP + FN) ]
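For the binary case the two indexes reduce to simple pixel counting; a minimal NumPy sketch (function and variable names are illustrative, and both classes are assumed present):

```python
import numpy as np

def pixel_accuracy_and_miou(pred, label):
    # pred, label: binary arrays of equal shape (1 = yarn, 0 = background)
    tp = np.sum((pred == 1) & (label == 1))   # correctly classified target pixels
    tn = np.sum((pred == 0) & (label == 0))   # correctly classified background pixels
    fp = np.sum((pred == 1) & (label == 0))
    fn = np.sum((pred == 0) & (label == 1))
    pa = (tp + tn) / (tp + tn + fp + fn)
    iou_fg = tp / (tp + fp + fn)              # IoU of the target class
    iou_bg = tn / (tn + fn + fp)              # IoU of the background class
    return pa, (iou_fg + iou_bg) / 2          # PA and MIoU as defined above
```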
step 2.3: the neural network model built in step 2.2 is trained with the data set labeled in step 2.1; a GPU is used to accelerate training, and RMSprop is adopted as the optimizer to update the trainable parameters of the convolutional neural network model;
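A hedged sketch of the training step in 2.3, reusing the TinyUNet sketch above; BCEWithLogitsLoss realizes the binary cross-entropy of step 2.2, and the data-loading code is assumed to yield (image, mask) tensor pairs:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"   # GPU acceleration as in 2.3
model = TinyUNet().to(device)
criterion = torch.nn.BCEWithLogitsLoss()                  # the loss defined in 2.2
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)

def train_epoch(loader):
    # loader yields (B,1,H,W) image and mask tensors with values in [0,1]
    model.train()
    for image, mask in loader:
        image, mask = image.to(device), mask.to(device)
        optimizer.zero_grad()
        loss = criterion(model(image), mask)
        loss.backward()
        optimizer.step()    # iterative parameter update, repeated until the loss converges
```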
step 2.4: semantic segmentation is carried out with the model trained in step 2.3 on the X-Y plane and Y-Z plane images obtained from the scan in step one, yielding binarized cross-section images containing only warp yarns and only weft yarns respectively.
5. The rapid mesh reconstruction method of a woven composite material based on CT image slices as defined in claim 4, wherein the point cloud data acquisition of step three is implemented as follows:
step 3.1: the ideal mesh size d is defined; to speed up image processing and reduce the amount of point cloud data, images are processed at intervals, with the image extraction interval I_int defined as the integer multiple of the slice spacing closest to d;
step 3.2: starting from the first image, a fiber bundle image is read every I_int slices and the selected mesh reconstruction region is cropped; edges are determined with the Canny operator through denoising, gradient magnitude and direction computation, non-maximum suppression and double thresholding, and the boundaries of all fiber bundle cross sections contained in the image are identified, yielding closed contour node data;
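Step 3.2 can be illustrated with OpenCV, whose Canny implementation performs the gradient, non-maximum-suppression and double-threshold stages internally; the file name and threshold values below are placeholders:

```python
# Boundary identification sketch with OpenCV (cv2 is an assumption; the
# patent only names the Canny operator and its stages).
import cv2

img = cv2.imread("slice_0000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
img = cv2.GaussianBlur(img, (5, 5), 0)                     # denoising
edges = cv2.Canny(img, 50, 150)                            # double-threshold hysteresis
# closed contour node data for every fiber bundle cross section in the image
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
```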
step 3.3: the contour nodes obtained in step 3.2 are taken as input in turn, and feature points are identified with the Douglas-Peucker algorithm so as to preserve the boundary features in the image: (1) connect the first and last points of the contour with a straight line, compute the distances of all contour points to this line, and find the maximum distance d_max; (2) compare d_max with the preset threshold D: if d_max < D, all intermediate points of the contour are discarded, the line segment is used as the approximation of the curve, and this segment is finished; if d_max ≥ D, the point corresponding to d_max is kept as a feature point and the curve is split at this point into two contours; (3) repeat (1) and (2) on the two sub-contours until every d_max is below the threshold D, at which point all feature points of the curve have been extracted;
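A compact recursive form of the Douglas-Peucker procedure described in (1)-(3) above, in NumPy; this is a sketch, not the patent's exact implementation:

```python
import numpy as np

def douglas_peucker(points, D):
    # points: (N,2) array of contour nodes; D: distance threshold
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance of every point to the chord start-end
        dists = np.abs(np.cross(chord, points - start)) / norm
    idx = int(np.argmax(dists))
    if dists[idx] < D:
        return [points[0], points[-1]]              # drop all intermediate points
    left = douglas_peucker(points[:idx + 1], D)     # split at the farthest point,
    right = douglas_peucker(points[idx:], D)        # which is kept as a feature point
    return left[:-1] + right
```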
step 3.4: the contour is fitted from the contour nodes obtained in step 3.2, reducing the amount of node data to increase mesh reconstruction efficiency, implemented as follows: (1) starting from the first node, compute and accumulate the distance from each node to the previous one; (2) when the accumulated distance reaches the mesh size d, or the current node is one of the feature points obtained in step 3.3, keep the current node as a mesh node; (3) set the current node as the new starting node, reset the accumulated distance, and repeat (1) and (2) until all nodes have been traversed;
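The distance-accumulation fitting of step 3.4 in sketch form; feature_idx is assumed to hold the indices of the Douglas-Peucker feature points from step 3.3:

```python
import numpy as np

def resample_contour(points, d, feature_idx):
    # points: (N,2) ordered contour nodes; d: target mesh size;
    # feature_idx: set of indices of feature points that must be kept
    kept, acc = [points[0]], 0.0
    for k in range(1, len(points)):
        acc += np.linalg.norm(points[k] - points[k - 1])
        if acc >= d or k in feature_idx:
            kept.append(points[k])   # keep node when arc length reaches d
            acc = 0.0                # or when it is a boundary feature point
    return np.asarray(kept)
```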
step 3.5: the center of the fitted contour obtained in step 3.4 is computed; taking the contour ordering identified in the first image as the reference, the center of each contour in the current image is computed, and by shortest-distance matching each contour is put into one-to-one correspondence with the fitted contour data of the previous image, yielding the point cloud data of each yarn.
6. The rapid mesh reconstruction method of a woven composite material based on CT image slices as defined in claim 5, wherein step four is implemented as follows:
step 4.1: the point cloud data of a single fiber bundle obtained in step three are input as mesh nodes, and Delaunay triangulation is performed on all nodes, generating tetrahedra that satisfy the empty circumsphere criterion, i.e. the circumsphere of each tetrahedron contains no other node; the four faces of every tetrahedral element are extracted, duplicated triangles are removed, and the result is combined into the initial surface triangle set F;
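Step 4.1 can be sketched with SciPy's Delaunay triangulation, which satisfies the empty-circumsphere property; keeping one copy of every tetrahedral face yields the candidate set F (SciPy is an assumption; any Delaunay library works):

```python
import numpy as np
from scipy.spatial import Delaunay

def initial_surface_triangles(points):
    # points: (N,3) point cloud of one fiber bundle (non-degenerate)
    tets = Delaunay(points).simplices          # (M,4) tetrahedron vertex indices
    faces = set()
    for t in tets:
        for f in ((t[0], t[1], t[2]), (t[0], t[1], t[3]),
                  (t[0], t[2], t[3]), (t[1], t[2], t[3])):
            faces.add(tuple(sorted(f)))        # one copy per face: duplicates removed
    return np.array(sorted(faces))             # the initial candidate set F
```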
step 4.2: a seed triangle is established; the mesh reconstruction method of the invention requires a suitable seed triangle as the starting unit of the advance; adjacent nodes of the first image are connected in turn, and every pair of adjacent nodes serves as a front edge; the node in the next contour closest to the midpoint of the front edge is found, the front edge is connected to that node, and the result is stored in the mesh set T as the seed triangle; the free edges of the mesh in T are then found as the initial front and stored in the front set A; a free edge is an edge used by only one triangle;
step 4.3: a mesh evaluation function f is established to prioritize the candidate triangles; f is defined in terms of the mesh quality of the candidate triangle, its maximum circumcircle radius, and its angle with the adjacent triangle, as:
f = k1·α + k2·β + k3·γ
where k1, k2 and k3 are user-defined weights that adapt the function to the point cloud reconstruction requirements of different geometric features and must satisfy k1 + k2 + k3 = 1; α, β and γ are respectively the mesh quality function, the maximum circumcircle radius function, and the angle function with respect to the adjacent triangle, each taking values in [0,1]; in these functions, a, b and c are the lengths of the three sides of the candidate triangle; R_i is the circumcircle radius of candidate triangle i, with R the set of circumcircle radii of all candidate triangles; and θ is the angle between the current candidate triangle and the triangle already in the mesh to which it is connected;
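Because the explicit expressions for α, β and γ appear only in the patent's figures and are not reproduced in this text, the sketch below substitutes standard stand-ins (a normalized triangle-quality measure for α, a circumradius penalty for β, an angle penalty for γ); it illustrates the weighting scheme only:

```python
import math

def tri_quality(a, b, c):
    # 4*sqrt(3)*area/(a^2+b^2+c^2): 1 for an equilateral triangle, -> 0 when degenerate
    s = 0.5 * (a + b + c)
    area = max(s * (s - a) * (s - b) * (s - c), 0.0) ** 0.5   # Heron's formula
    return 4.0 * math.sqrt(3.0) * area / (a * a + b * b + c * c)

def evaluate(candidates, k1=0.4, k2=0.3, k3=0.3):
    # candidates: dicts with side lengths a, b, c, circumradius R, and the
    # angle theta (radians) to the adjacent accepted triangle; k1+k2+k3 = 1
    r_max = max(c["R"] for c in candidates)
    scores = []
    for c in candidates:
        alpha = tri_quality(c["a"], c["b"], c["c"])
        beta = 1.0 - c["R"] / r_max             # smaller circumradius scores higher
        gamma = 1.0 - c["theta"] / math.pi      # flatter join with the mesh scores higher
        scores.append(k1 * alpha + k2 * beta + k3 * gamma)
    return scores
```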
step 4.4: while the front set A is not empty, the first front edge is taken for reconstruction; all triangles in the initial set F that contain this front edge are found as candidate triangles, and the f value of every candidate is computed with the evaluation function established in step 4.3; the candidates are sorted by f in descending order and their validity is judged in turn, a legal unit having to satisfy: the vertical coordinates of its three nodes are not all equal (except on the boundary), which prevents the generation of in-plane elements; its newly generated edges are not already shared by two triangles; its angle with the adjacent element does not exceed a set threshold; and its newly introduced node is not connected to any front edge;
after a legal triangle is found, it is added to the mesh set T and the current front edge is deleted from the front set A; each newly generated edge is checked against the triangles already in T, and if it is not yet shared it is appended to the tail of the front set A; if all candidate triangles are illegal, the current front edge is deleted; step 4.4 is repeated until the front set is empty, which ends the mesh reconstruction process.
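A condensed, self-contained sketch of the advancing-front loop of step 4.4, with triangles represented as frozensets of node ids; the legality test is reduced to the shared-edge condition (the patent adds the coordinate and angle tests described above):

```python
from collections import deque

def edges_of(tri):
    a, b, c = sorted(tri)
    return [frozenset((a, b)), frozenset((a, c)), frozenset((b, c))]

def advance_front(seed_tri, F, score):
    # seed_tri: starting triangle; F: iterable of candidate triangles
    # (frozensets of 3 node ids); score: dict mapping triangle -> f value
    T = {seed_tri}
    front = deque(edges_of(seed_tri))
    while front:
        edge = front.popleft()                              # first front edge
        cands = [t for t in F if edge < t and t not in T]   # triangles containing the edge
        cands.sort(key=lambda t: score[t], reverse=True)    # descending f, as in 4.4
        for tri in cands:
            new_edges = [e for e in edges_of(tri) if e != edge]
            # simplified legality: a new edge must not already be shared by two triangles
            if all(sum(e < t for t in T) < 2 for e in new_edges):
                T.add(tri)
                front.extend(e for e in new_edges
                             if not any(e < t for t in T - {tri}))  # still free: join front
                break                                        # keep the best legal candidate
        # if no candidate is legal, the front edge is simply dropped
    return T
```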
7. The rapid mesh reconstruction method of a woven composite material based on CT image slices as defined in claim 6, wherein step five is implemented as follows:
step 5.1: hole identification and filling: an edge not shared by two triangles is a free edge; the triangles connected to free edges are deleted to erode the hole, the free edges thus exposed are taken as the front, and step 4.4 is repeated to reconstruct the hole region; hole detection and erosion are performed again after reconstruction, and if holes remain they are filled with the Liepa algorithm, implemented as follows: (1) compute the angle between adjacent edges forming the hole; (2) sort the hole edges by angle and locate the vertex with the minimum angle; (3) connect the two edges at the minimum angle to create the current edge, add the newly generated triangle to the mesh set T and record it as the current triangle; (4) take as candidate points the far endpoint of the edge ending at the start point of the current edge and the far endpoint of the edge starting at the end point of the current edge; (5) form a triangle from the current edge and each candidate point, compute its dihedral angle with the current triangle and its area, select the candidate with the larger dihedral angle, update the current edge and the current triangle, return to step (4), and delete the previous current edge from the hole boundary; when the hole boundary has no more than two edges, exit the loop;
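The free-edge test that drives the hole identification of step 5.1, in sketch form:

```python
from collections import Counter

def free_edges(triangles):
    # triangles: iterable of 3-tuples of node ids
    count = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (a, c), (b, c)):
            count[tuple(sorted(e))] += 1
    # an edge used by exactly one triangle is a free edge; chains of
    # free edges bound the holes to be eroded and refilled
    return [e for e, n in count.items() if n == 1]
```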
step 5.2: the mesh filled in step 5.1 is optimized and smoothed with the Laplacian operator; Laplacian optimization is in essence a process of encoding and decoding the local detail features of the mesh model; the encoding process converts the Euclidean coordinates of the mesh vertices into Laplacian coordinates, which contain the local detail features of the mesh, so a Laplacian mesh deformation algorithm preserves the local detail of the mesh model well; the decoding process recovers the Euclidean coordinates from the differential coordinates and is essentially the solution of a linear system; the Laplacian coordinate of vertex v_i is defined as:

δ_i = v_i - Σ_{j∈N(i)} ω_ij·v_j

where N(i) denotes the set of neighbors of v_i, v_j is any adjacent vertex, and ω_ij are the weights, obtained by normalizing the weight coefficients ω'_ij:

ω_ij = ω'_ij / Σ_{k∈N(i)} ω'_ik

for the weight coefficient ω'_ij, the uniform weight ω_u,ij and the cotangent weight ω_c,ij are commonly used, defined as:
ω_u,ij = 1

ω_c,ij = cot α_ij + cot β_ij

where α_ij and β_ij are the angles opposite the edge formed by v_i and v_j in the two triangles to which it belongs.
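An explicit uniform-weight smoothing pass illustrating the Laplacian operator of step 5.2; the patent's encode/decode formulation instead solves a linear system for the vertex positions, so this iterative form is a simplification:

```python
import numpy as np

def laplacian_smooth(verts, neighbors, lam=0.5, iters=10):
    # verts: (N,3) vertex coordinates; neighbors: list of index lists N(i)
    v = verts.copy()
    for _ in range(iters):
        delta = np.zeros_like(v)
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                # uniform weights w_ij = 1/|N(i)|: the Laplacian coordinate is the
                # offset of v_i from its neighbor centroid; damp it to smooth
                delta[i] = v[nbrs].mean(axis=0) - v[i]
        v += lam * delta
    return v
```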
8. A method for rapid mesh reconstruction of a woven composite material based on CT image slices as claimed in claim 1 or 2, wherein the mesh generation software is the Gmsh Python API or HyperMesh.
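A hedged sketch of driving the Gmsh Python API for the tetrahedral meshing of steps seven and eight; the surface file name is hypothetical and the classification angle is an illustrative value:

```python
import gmsh

gmsh.initialize()
gmsh.merge("tow_surface.stl")                       # closed surface triangle mesh from step six
gmsh.model.mesh.classifySurfaces(0.7, True, True)   # angle threshold (radians) for surface patches
gmsh.model.mesh.createGeometry()                    # build a geometry from the classified mesh
surfaces = [s[1] for s in gmsh.model.getEntities(2)]
loop = gmsh.model.geo.addSurfaceLoop(surfaces)
gmsh.model.geo.addVolume([loop])                    # the surface mesh bounds the volume
gmsh.model.geo.synchronize()
gmsh.model.mesh.generate(3)                         # tetrahedralization with node insertion
gmsh.write("tow_volume.msh")
gmsh.finalize()
```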
9. The rapid mesh reconstruction method of a woven composite material based on CT image slices as set forth in claim 8, wherein the finite element prediction of the woven composite material comprises strength analysis, failure analysis, fracture analysis and impact damage analysis of the woven composite material.
CN202310291202.5A 2023-03-23 2023-03-23 CT image slice-based rapid mesh reconstruction method for woven composite material Pending CN116486037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310291202.5A CN116486037A (en) 2023-03-23 2023-03-23 CT image slice-based rapid mesh reconstruction method for woven composite material


Publications (1)

Publication Number Publication Date
CN116486037A true CN116486037A (en) 2023-07-25

Family

ID=87214641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310291202.5A Pending CN116486037A (en) 2023-03-23 2023-03-23 CT image slice-based rapid mesh reconstruction method for woven composite material

Country Status (1)

Country Link
CN (1) CN116486037A (en)

Similar Documents

Publication Title
US11275353B2 (en) Creating a voxel representation of a three dimensional (3-D) object
CN113850825A (en) Remote sensing image road segmentation method based on context information and multi-scale feature fusion
CN114120102A (en) Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
CN109242985B (en) Method for determining key parameters of pore structure from three-dimensional image
CN106202728A (en) Based on Micro CT D braided composites non-homogeneous Voxel grid discrete method
CN111985552B (en) Method for detecting diseases of thin strip-shaped structure of airport pavement under complex background
CN114897781A (en) Permeable concrete pore automatic identification method based on improved R-UNet deep learning
CN110610478A (en) Medical image three-dimensional reconstruction method based on neighborhood topology
Fuchs et al. Generating meaningful synthetic ground truth for pore detection in cast aluminum parts
CN114419284A (en) Fiber reinforced composite three-dimensional reconstruction modeling method based on CT slice image
CN111028335A (en) Point cloud data block surface patch reconstruction method based on deep learning
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN110264555B (en) Micro-CT-based three-dimensional five-direction woven composite material statistical mesoscopic model establishing method
CN116934780A (en) Deep learning-based electric imaging logging image crack segmentation method and system
CN113029899B (en) Sandstone permeability calculation method based on microscopic image processing
CN108765445B (en) Lung trachea segmentation method and device
Li et al. Reconstructing the 3D digital core with a fully convolutional neural network
CN116486037A (en) CT image slice-based rapid mesh reconstruction method for woven composite material
CN117011175A (en) Mine three-dimensional model point cloud data combined filtering method and medium
CN105701847A (en) Algebraic reconstruction method of improved weight coefficient matrix
CN109785261A (en) A kind of airborne LIDAR three-dimensional filtering method based on gray scale volume element model
CN111696111B (en) 3D model mesh segmentation method based on SSDF attenuation map clustering
Fourrier et al. Automated conformal mesh generation chain for woven composites based on CT-scan images with low contrasts
CN112508441B (en) Urban high-density outdoor thermal comfort evaluation method based on deep learning three-dimensional reconstruction
CN117253021B (en) Method for reconstructing fragment core fracture network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination