CN112991518A - Three-dimensional reconstruction method for microstructure of non-woven fabric - Google Patents


Info

Publication number
CN112991518A
CN112991518A (application CN202110258327.9A)
Authority
CN
China
Prior art keywords
fiber
image
definition
central axis
woven fabric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110258327.9A
Other languages
Chinese (zh)
Inventor
贺炎
邓娜
刘露露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN202110258327.9A priority Critical patent/CN112991518A/en
Publication of CN112991518A publication Critical patent/CN112991518A/en
Pending legal-status Critical Current

Classifications

    All entries fall under G (Physics) → G06 (Computing; Calculating or Counting) → G06T (Image Data Processing or Generation, in General):
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T7/181 Segmentation; edge detection involving edge growing or edge linking
    • G06T7/187 Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G06T7/194 Segmentation involving foreground-background separation
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/68 Analysis of geometric attributes of symmetry
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2207/10016 Video; image sequence
    • G06T2207/10056 Microscopic image
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30124 Fabrics; textile; paper
    • G06T2207/30172 Centreline of tubular or elongated structure
    • G06T2210/16 Cloth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a three-dimensional reconstruction method for the microstructure of a non-woven fabric, which specifically comprises the following steps: S1, acquiring a multi-focal-plane sequence image of a non-woven fabric sample; S2, processing the multi-focal-plane sequence image to obtain a depth map of the fiber structure; S3, segmenting the processed depth map into individual fibers and repairing the parts of each fiber missing due to occlusion; S4, extracting the central axis and edges of each repaired fiber in three-dimensional space and taking the distance between the central axis and the edge as the fiber radius; and S5, sweeping a sphere of that radius along the fiber central axis so that its envelope forms a tubular three-dimensional fiber model. Compared with the prior art, the method reconstructs the three-dimensional structure of non-woven fibers from images taken at a single viewing angle, faithfully restores the microscopic morphology, and improves the completeness and accuracy of the three-dimensional model of the non-woven fabric microstructure.

Description

Three-dimensional reconstruction method for microstructure of non-woven fabric
Technical Field
The invention relates to the technical field of image processing, in particular to a three-dimensional reconstruction method of a microstructure of non-woven fabric.
Background
Unlike conventional textiles, which are fabrics formed by arranging and combining yarns, non-woven fabrics are fiber assemblies made directly from fibers. The filtration performance of a non-woven fabric depends primarily on its structure, i.e., the manner in which the fibers are packed and arranged. Fiber diameter affects the spacing and packing density between fibers, and thereby the filtration efficiency and pressure resistance of the material; the pore size of the fiber network determines the size of particulate matter that can be intercepted; and the distribution of fiber orientations within the assembly affects filtration efficiency. Accurately obtaining such information about the fibers is therefore of great importance.
Traditional methods for analyzing the performance of non-woven fabrics are mainly physical methods and image-processing methods. Physical methods are complex to operate and slow; image-processing methods are currently based mainly on two-dimensional images and, although fast, cannot recover the depth information of the fibers in the longitudinal (thickness) direction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a three-dimensional reconstruction method for the microstructure of a non-woven fabric, so that the microstructural morphology of the non-woven fabric is accurately and effectively restored.
The purpose of the invention can be realized by the following technical scheme:
a three-dimensional reconstruction method of a microstructure of a non-woven fabric specifically comprises the following steps:
s1, acquiring a multi-focal-plane sequence image of the non-woven fabric sample;
s2, obtaining a depth map of the fiber structure by data processing of the multi-focal-plane sequence image;
s3, segmenting the depth map into a plurality of single fibers after data processing, and repairing missing parts caused by shielding in the single fibers;
s4, extracting a fiber central axis and a fiber edge of the fiber structure in a three-dimensional space according to the repaired single fiber, and calculating the distance between the fiber central axis and the fiber edge as the fiber radius;
and S5, drawing a spherical surface with the radius of the fiber, rolling the spherical surface along the central axis of the fiber, and enveloping the spherical surface to form a three-dimensional model of the tubular fiber.
The step S1 specifically includes the following steps:
s11, acquiring original non-woven fabric information, and cutting out a non-woven fabric sample with a target size according to the original non-woven fabric information;
and S12, acquiring a multi-focal-plane sequence image of the non-woven fabric sample through an optical microscope.
Further, the optical microscope is connected with a digital camera, a stepping motor and a computer.
The step S2 of performing data processing on the multi-focal-plane sequence image by using a depth of focus algorithm to obtain a depth map of the fiber structure specifically includes the following steps:
s21, calculating the definition of each pixel point in the multi-focal-plane sequence image;
s22, extracting the fiber structure meeting the preset definition threshold through threshold segmentation, and removing the background which cannot be focused;
s23, comparing the definition of the pixel points at the same coordinate position of each frame of image, and recording the image layer number of the pixel point with the maximum definition, the image layer numbers of the images before and after the frame of image and the definition of the corresponding pixel point;
and S24, estimating the optimal focusing position of the pixel through a Gaussian interpolation algorithm, and calculating to obtain a depth map of the fiber structure according to the optimal focusing position of the pixel.
Further, in step S21, the sharpness is calculated by using the Sobel operator as the sharpness evaluation function; the specific formulas are as follows:
Gx = [f(x+1, y−1) + 2f(x+1, y) + f(x+1, y+1)] − [f(x−1, y−1) + 2f(x−1, y) + f(x−1, y+1)]
Gy = [f(x−1, y−1) + 2f(x, y−1) + f(x+1, y−1)] − [f(x−1, y+1) + 2f(x, y+1) + f(x+1, y+1)]
G = √(Gx² + Gy²)
where Gx and Gy are the gradients of the pixel in the vertical and horizontal directions respectively, (x, y) are the coordinates of the pixel, f(x, y) is its gray value, and G is the gradient magnitude, which is taken as the sharpness of the pixel.
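The Sobel sharpness measure of step S21 can be sketched in a few lines; this snippet is illustrative rather than part of the patent, and the use of NumPy, the function name, and the edge-replication border handling are assumptions:

```python
import numpy as np

def sharpness_map(img):
    """Per-pixel gradient magnitude G = sqrt(Gx^2 + Gy^2) of a 2-D
    grayscale image, computed with 3x3 Sobel kernels; the border is
    handled by edge replication."""
    img = np.asarray(img, dtype=float)
    f = np.pad(img, 1, mode="edge")

    def s(dx, dy):
        # View of f(x + dx, y + dy) for every pixel (x, y) of img.
        return f[1 + dx:1 + dx + img.shape[0], 1 + dy:1 + dy + img.shape[1]]

    gx = (s(1, -1) + 2 * s(1, 0) + s(1, 1)) - (s(-1, -1) + 2 * s(-1, 0) + s(-1, 1))
    gy = (s(-1, -1) + 2 * s(0, -1) + s(1, -1)) - (s(-1, 1) + 2 * s(0, 1) + s(1, 1))
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat (defocused) region yields zero sharpness, while pixels on an in-focus edge yield large values, which is what the threshold segmentation of S22 exploits.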
Further, the calculation formula of the best focus position of the pixel in step S24 is as follows:
d̄ = [(ln Fm − ln Fm−1)(dm+1² − dm²) − (ln Fm − ln Fm+1)(dm−1² − dm²)] / {2Δd[(ln Fm − ln Fm−1) + (ln Fm − ln Fm+1)]}
where
Δd = dm − dm−1 = dm+1 − dm
is the fixed spacing between adjacent frames, m is the layer number of the frame in which the pixel attains its maximum sharpness, Fm−1, Fm and Fm+1 are the sharpness values at the same coordinate position in the previous frame, that frame and the next frame, satisfying Fm ≥ Fm−1 and Fm ≥ Fm+1, and dm, dm−1 and dm+1 are the sequence numbers of the sharpest frame and its two adjacent frames.
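The sub-frame focus position of S24 follows from fitting a Gaussian through the three recorded sharpness samples; the sketch below assumes exactly that model, and the function and parameter names are illustrative:

```python
import math

def best_focus_position(F_prev, F_max, F_next, d_prev, d_max, d_next):
    """Sub-frame best-focus depth from three sharpness samples taken at
    uniformly spaced depths, assuming sharpness is Gaussian in depth."""
    A = math.log(F_max) - math.log(F_next)
    B = math.log(F_max) - math.log(F_prev)
    dd = d_next - d_max  # frame spacing, assumed uniform
    num = B * (d_next ** 2 - d_max ** 2) - A * (d_prev ** 2 - d_max ** 2)
    return num / (2.0 * dd * (A + B))
```

If the three samples really lie on a Gaussian, the peak position is recovered exactly, which lets the depth map resolve finer than the stage step length.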
The step S3 specifically includes the following steps:
s31, processing the depth map according to a region growing algorithm to divide a plurality of single fibers;
S32, marking the fiber connected domains of each single fiber according to a boundary tracking algorithm;
and S33, joining the fiber connected domains to complete the repair of the missing parts caused by occlusion.
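The connected-domain marking of S32 can be illustrated with a plain breadth-first labelling; this is a generic sketch, not the patent's boundary-tracking algorithm, and the function name is an assumption:

```python
from collections import deque

def label_components(mask):
    """4-connected component labelling of a binary 2-D grid (list of
    lists of 0/1). Returns a grid of integer labels starting at 1;
    background stays 0."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                current += 1           # start a new fiber fragment
                q = deque([(i, j)])
                labels[i][j] = current
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels
```

Fragments of one fiber separated by an occluding fiber receive different labels, which is exactly the situation S33 repairs by joining them.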
The step S4 specifically includes the following steps:
s41, extracting a fiber skeleton of the fiber structure through an iterative refinement algorithm according to the repaired single fiber;
s42, removing the branches of the fiber framework to obtain the fiber central axis of the fiber structure on the two-dimensional image;
s43, calculating the depth value of the fiber medial axis in the three-dimensional space on the two-dimensional image;
s44, calculating to obtain the vertical coordinate of the fiber middle axis through a polynomial curve fitting function according to the depth value;
and S45, calculating the distance between the central axis of the fiber and the edge of the fiber as the radius of the fiber according to the ordinate of the central axis of the fiber.
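The radius computation of S45 reduces to nearest-distance queries from central-axis points to edge points; a simplified sketch with assumed point-list inputs (the patent does not prescribe this exact form):

```python
import math

def mean_fiber_radius(axis_points, edge_points):
    """Average distance from each central-axis point to the nearest
    fiber-edge point, taken as the fiber radius."""
    def nearest(p):
        return min(math.dist(p, e) for e in edge_points)
    return sum(nearest(p) for p in axis_points) / len(axis_points)
```

`math.dist` works for 2-D or 3-D tuples alike, so the same sketch applies once the axis has been lifted into three-dimensional space.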
Further, in step S41, the repaired single-fiber image is binarized, and the fiber skeleton is extracted from the binary image by an iterative thinning algorithm: pixels on the object boundary are deleted over repeated iterations until the image no longer changes, so that each object shrinks to a line of minimal connectivity without being split. Each iteration comprises a first sub-iteration and a second sub-iteration. In the first sub-iteration a pixel is deleted if and only if determination conditions G1, G2 and G3 are all satisfied; in the second sub-iteration a pixel is deleted if and only if determination conditions G1, G2 and G3' are all satisfied. The conditions G1, G2, G3 and G3' are defined as follows:
Determination condition G1:
XH(p) = 1
where p is a pixel on the object boundary and
XH(p) = Σ_{i=1}^{4} b_i, with b_i = 1 if x_{2i−1} = 0 and (x_{2i} = 1 or x_{2i+1} = 1), and b_i = 0 otherwise.
Determination condition G2:
2 ≤ min{n1(p), n2(p)} ≤ 3
where
n1(p) = Σ_{k=1}^{4} (x_{2k−1} ∨ x_{2k})
n2(p) = Σ_{k=1}^{4} (x_{2k} ∨ x_{2k+1})
Determination condition G3:
(x2 ∨ x3 ∨ ¬x8) ∧ x1 = 0
Determination condition G3':
(x6 ∨ x7 ∨ ¬x4) ∧ x5 = 0
where x1, x2, …, x8 are the values of the eight neighbors of p, numbered counterclockwise starting from the right-hand neighbor (indices wrap modulo 8, so x9 = x1); ∨, ∧ and ¬ denote logical OR, AND and NOT.
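The two-sub-iteration deletion test of S41 (its conditions match the Guo–Hall thinning scheme) can be sketched as a predicate over the eight neighbor values; the function name and the 0/1 list encoding are assumptions for illustration:

```python
def guo_hall_delete(nb, first_pass):
    """Decide whether boundary pixel p may be deleted, given its eight
    neighbour values nb = [x1..x8] (counterclockwise from the right
    neighbour), per determination conditions G1, G2 and G3/G3'."""
    x = [0] + nb                      # 1-based indexing
    g = lambda i: x[(i - 1) % 8 + 1]  # wrap so that x9 == x1
    # G1: exactly one crossing pattern b_i around p
    XH = sum(1 for i in range(1, 5)
             if g(2 * i - 1) == 0 and (g(2 * i) == 1 or g(2 * i + 1) == 1))
    # G2: neighbourhood "thickness" bounds
    n1 = sum(g(2 * k - 1) | g(2 * k) for k in range(1, 5))
    n2 = sum(g(2 * k) | g(2 * k + 1) for k in range(1, 5))
    # G3 in the first sub-iteration, G3' in the second
    if first_pass:
        G3 = ((g(2) | g(3) | (1 - g(8))) & g(1)) == 0
    else:
        G3 = ((g(6) | g(7) | (1 - g(4))) & g(5)) == 0
    return XH == 1 and 2 <= min(n1, n2) <= 3 and G3
```

Running the predicate over all boundary pixels, deleting, and repeating until nothing changes shrinks each fiber to its one-pixel-wide skeleton without splitting it.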
Further, the specific formula of the polynomial curve fitting function in step S44 is as follows:
p(x) = p1·x^n + p2·x^(n−1) + … + pn·x + p(n+1)
where n is the order of the polynomial and p1, p2, …, p(n+1) are constant coefficients solved by the least-squares method; the ordinate of each central-axis pixel on the fitted curve is then taken as the ordinate of the fiber central axis in three-dimensional space.
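The least-squares fit of S44 is directly available as NumPy's `polyfit`; the depth values below are synthetic, made up only to illustrate the call:

```python
import numpy as np

# Synthetic depth samples along a fiber axis (an exact quadratic).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
depth = 0.5 * x ** 2 - 1.0 * x + 2.0

coeffs = np.polyfit(x, depth, deg=2)   # [p1, p2, p3], highest power first
fitted = np.polyval(coeffs, x)         # ordinate of each axis pixel on the curve
```

Fitting a smooth curve through the noisy per-pixel depths fills in axis points whose depth was lost to occlusion.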
Compared with the prior art, the invention has the following beneficial effects:
1. according to the invention, the depth map is obtained by using the depth of focus algorithm, on the basis, the single fiber is divided, the coordinate of the central axis of the fiber in a three-dimensional space is extracted, and the radius of the fiber is calculated, so that the three-dimensional model of the fiber is reconstructed, and the efficiency and the accuracy of establishing the three-dimensional model of the microstructure of the non-woven fabric are improved.
2. In the step of extracting the single fiber, connected domains are marked through a boundary tracking algorithm and are connected, so that the part of fiber loss caused by shielding is repaired; meanwhile, the problem of depth information loss caused by mutual shielding of fibers is solved through polynomial curve fitting, and the precision and the integrity of the three-dimensional model of the microstructure of the non-woven fabric are effectively improved.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of 9 plane images of a multi-focal plane sequence of images in an embodiment of the present invention;
FIG. 3 is a schematic illustration of a depth map of a fiber structure in an embodiment of the present invention;
FIG. 4 is a schematic representation of a single fiber after being segmented and repaired in an embodiment of the present invention;
FIG. 5 is a schematic view of a fiber central axis in an embodiment of the present invention;
FIG. 6 is a schematic representation of a three-dimensional model of an individual fiber in an embodiment of the present invention;
FIG. 7 is a schematic representation of a complete three-dimensional model of a nonwoven sample in an embodiment of the invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Examples
As shown in fig. 1, a three-dimensional reconstruction method of a microstructure of a nonwoven fabric specifically includes the following steps:
s1, acquiring a multi-focal-plane sequence image of the non-woven fabric sample;
s2, processing the data of the multi-focal-plane sequence image to obtain a depth map of the fiber structure;
s3, segmenting the depth map into a plurality of single fibers after data processing, and repairing missing parts caused by shielding in the single fibers;
s4, extracting a fiber central axis and a fiber edge of the fiber structure in a three-dimensional space according to the repaired single fiber, and calculating the distance between the fiber central axis and the fiber edge as the fiber radius;
and S5, drawing a spherical surface with the radius of the fiber, rolling the spherical surface along the central axis of the fiber, and enveloping the spherical surface to form a three-dimensional model of the tubular fiber, as shown in FIG. 7.
Step S1 specifically includes the following steps:
s11, acquiring original non-woven fabric information, and cutting out a non-woven fabric sample with a target size according to the original non-woven fabric information, wherein the target size is 1cm x 1cm in the embodiment;
s12, collecting a multi-focal-plane sequence image of the nonwoven fabric sample by an optical microscope, as shown in fig. 2.
The optical microscope is connected with a digital camera, a stepping motor and a computer.
In this embodiment, when images are collected with the microscope, the sample is placed on a glass slide, covered with a cover glass, and fixed on the microscope stage. The stage can move in the front-back, left-right and up-down directions, so that the microstructure of the non-woven fabric sample is displayed in the center of the field of view. The stage is then moved in the vertical direction with a fixed step length; each time it moves one step, one image is captured and labeled, yielding the multi-focal-plane sequence image. The label of each frame represents its depth information, i.e. the depth of the fibers in focus, and the acquired images are finally stored.
Step S2, performing data processing on the multi-focal-plane sequence image by using a depth of focus algorithm to obtain a depth map of the fiber structure, specifically including the following steps:
s21, calculating the definition of each pixel point in the multi-focal-plane sequence image;
s22, extracting the fiber structure meeting the preset definition threshold through threshold segmentation, and removing the background which cannot be focused;
s23, comparing the definition of the pixel points at the same coordinate position of each frame of image, and recording the image layer number of the pixel point with the maximum definition, the image layer numbers of the images before and after the frame of image and the definition of the corresponding pixel point;
and S24, estimating the optimal focusing position of the pixel through a Gaussian interpolation algorithm, and calculating the depth map of the fiber structure shown in the figure 3 according to the optimal focusing position of the pixel.
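Before the Gaussian refinement of S24, steps S21–S23 amount to choosing, per pixel, the frame of maximum sharpness; multiplying that frame index by the stage step length already gives a coarse depth map. A sketch (the array layout and function name are assumptions):

```python
import numpy as np

def depth_from_stack(sharpness_stack, step):
    """Coarse depth map from a focal stack: for each pixel, take the
    index of the sharpest frame and convert it to a physical depth via
    the stage step length. Stack shape: (frames, height, width)."""
    best = np.argmax(sharpness_stack, axis=0)
    return best * step
```

The per-pixel frame index recorded here is also what S24 refines with the sharpness values of the two neighboring frames.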
In step S21, the sharpness is calculated by using the Sobel operator as the sharpness evaluation function; the specific formulas are as follows:
Gx = [f(x+1, y−1) + 2f(x+1, y) + f(x+1, y+1)] − [f(x−1, y−1) + 2f(x−1, y) + f(x−1, y+1)]
Gy = [f(x−1, y−1) + 2f(x, y−1) + f(x+1, y−1)] − [f(x−1, y+1) + 2f(x, y+1) + f(x+1, y+1)]
G = √(Gx² + Gy²)
where Gx and Gy are the gradients of the pixel in the vertical and horizontal directions respectively, (x, y) are the coordinates of the pixel, f(x, y) is its gray value, and G is the gradient magnitude, which is taken as the sharpness of the pixel.
The calculation formula of the best focus position of the pixel in step S24 is as follows:
d̄ = [(ln Fm − ln Fm−1)(dm+1² − dm²) − (ln Fm − ln Fm+1)(dm−1² − dm²)] / {2Δd[(ln Fm − ln Fm−1) + (ln Fm − ln Fm+1)]}
where
Δd = dm − dm−1 = dm+1 − dm
is the fixed spacing between adjacent frames, m is the layer number of the frame in which the pixel attains its maximum sharpness, Fm−1, Fm and Fm+1 are the sharpness values at the same coordinate position in the previous frame, that frame and the next frame, satisfying Fm ≥ Fm−1 and Fm ≥ Fm+1, and dm, dm−1 and dm+1 are the sequence numbers of the sharpest frame and its two adjacent frames.
Step S3 specifically includes the following steps:
s31, processing the depth map according to a region growing algorithm to divide a plurality of single fibers;
S32, marking the fiber connected domains of each single fiber according to a boundary tracking algorithm;
S33, joining the fiber connected domains to complete the repair of the missing parts caused by occlusion; the repaired single fiber is shown in figure 4.
Step S4 specifically includes the following steps:
s41, extracting a fiber skeleton of the fiber structure through an iterative refinement algorithm according to the repaired single fiber;
s42, removing branches of the fiber skeleton to obtain a fiber central axis of the fiber structure on the two-dimensional image, as shown in FIG. 5;
s43, calculating the depth value of the fiber central axis in the three-dimensional space on the two-dimensional image;
s44, calculating to obtain the vertical coordinate of the fiber middle axis through a polynomial curve fitting function according to the depth value;
and S45, calculating the distance between the central axis of the fiber and the edge of the fiber as the radius of the fiber according to the ordinate of the central axis of the fiber.
In step S41, the repaired single-fiber image is binarized, and the fiber skeleton is extracted from the binary image by an iterative thinning algorithm: pixels on the object boundary are deleted over repeated iterations until the image no longer changes, so that each object shrinks to a line of minimal connectivity without being split. Each iteration comprises a first sub-iteration and a second sub-iteration. In the first sub-iteration a pixel is deleted if and only if determination conditions G1, G2 and G3 are all satisfied; in the second sub-iteration a pixel is deleted if and only if determination conditions G1, G2 and G3' are all satisfied. The conditions G1, G2, G3 and G3' are defined as follows:
Determination condition G1:
XH(p) = 1
where p is a pixel on the object boundary and
XH(p) = Σ_{i=1}^{4} b_i, with b_i = 1 if x_{2i−1} = 0 and (x_{2i} = 1 or x_{2i+1} = 1), and b_i = 0 otherwise.
Determination condition G2:
2 ≤ min{n1(p), n2(p)} ≤ 3
where
n1(p) = Σ_{k=1}^{4} (x_{2k−1} ∨ x_{2k})
n2(p) = Σ_{k=1}^{4} (x_{2k} ∨ x_{2k+1})
Determination condition G3:
(x2 ∨ x3 ∨ ¬x8) ∧ x1 = 0
Determination condition G3':
(x6 ∨ x7 ∨ ¬x4) ∧ x5 = 0
where x1, x2, …, x8 are the values of the eight neighbors of p, numbered counterclockwise starting from the right-hand neighbor (indices wrap modulo 8, so x9 = x1); ∨, ∧ and ¬ denote logical OR, AND and NOT.
The specific formula of the polynomial curve fitting function in step S44 is as follows:
p(x) = p1·x^n + p2·x^(n−1) + … + pn·x + p(n+1)
where n is the order of the polynomial and p1, p2, …, p(n+1) are constant coefficients solved by the least-squares method; the ordinate of each central-axis pixel on the fitted curve is then taken as the ordinate of the fiber central axis in three-dimensional space.
The results in fig. 6 and fig. 7 show that the reconstruction method of this embodiment can restore the microstructure of the non-woven fabric: the structure and morphological distribution of the three-dimensional model agree closely with the actual microstructure and morphology of the non-woven fabric, and the stacking, crossing and parallel relationships between fibers also agree closely with the actual sample.
In addition, it should be noted that the specific embodiments described in this specification may differ in naming and details; the above description is only an illustration of the structure of the invention. All equivalent or simple changes made according to the structure, features and principles of the invention are included within its scope of protection. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute similar methods, without departing from the scope of the invention as defined in the appended claims.

Claims (10)

1. A three-dimensional reconstruction method of a microstructure of a non-woven fabric is characterized by comprising the following steps:
s1, acquiring a multi-focal-plane sequence image of the non-woven fabric sample;
s2, obtaining a depth map of the fiber structure by data processing of the multi-focal-plane sequence image;
s3, segmenting the depth map into a plurality of single fibers after data processing, and repairing missing parts caused by shielding in the single fibers;
s4, extracting a fiber central axis and a fiber edge of the fiber structure in a three-dimensional space according to the repaired single fiber, and calculating the distance between the fiber central axis and the fiber edge as the fiber radius;
and S5, drawing a spherical surface with the radius of the fiber, rolling the spherical surface along the central axis of the fiber, and enveloping the spherical surface to form a three-dimensional model of the tubular fiber.
2. The method of claim 1, wherein the step S1 specifically comprises the following steps:
s11, acquiring original non-woven fabric information, and cutting out a non-woven fabric sample with a target size according to the original non-woven fabric information;
and S12, acquiring a multi-focal-plane sequence image of the non-woven fabric sample through an optical microscope.
3. The method of claim 2, wherein the optical microscope is connected to a digital camera, a stepper motor and a computer.
4. The method for three-dimensional reconstruction of microstructure of nonwoven fabric according to claim 1, wherein said step S2 specifically comprises the steps of:
s21, calculating the definition of each pixel point in the multi-focal-plane sequence image;
s22, extracting the fiber structure meeting the preset definition threshold value through threshold value segmentation;
s23, comparing the definition of the pixel points at the same coordinate position of each frame of image, and recording the image layer number of the pixel point with the maximum definition, the image layer numbers of the images before and after the frame of image and the definition of the corresponding pixel point;
and S24, estimating the optimal focusing position of the pixel through a Gaussian interpolation algorithm, and calculating to obtain a depth map of the fiber structure according to the optimal focusing position of the pixel.
5. The method for three-dimensional reconstruction of the microstructure of a non-woven fabric according to claim 4, wherein the sharpness in step S21 is calculated by using the Sobel operator as the sharpness evaluation function; the specific formulas are as follows:
Gx = [f(x+1, y−1) + 2f(x+1, y) + f(x+1, y+1)] − [f(x−1, y−1) + 2f(x−1, y) + f(x−1, y+1)]
Gy = [f(x−1, y−1) + 2f(x, y−1) + f(x+1, y−1)] − [f(x−1, y+1) + 2f(x, y+1) + f(x+1, y+1)]
G = √(Gx² + Gy²)
where Gx and Gy are the gradients of the pixel in the vertical and horizontal directions respectively, (x, y) are the coordinates of the pixel, f(x, y) is its gray value, and G is the gradient magnitude, which is taken as the sharpness of the pixel.
6. The method of claim 4, wherein the optimal focusing position of the pixels in step S24 is calculated as follows:
d̄ = [(ln Fm − ln Fm−1)(dm+1² − dm²) − (ln Fm − ln Fm+1)(dm−1² − dm²)] / {2Δd[(ln Fm − ln Fm−1) + (ln Fm − ln Fm+1)]}
where
Δd = dm − dm−1 = dm+1 − dm
is the fixed spacing between adjacent frames, m is the layer number of the frame in which the pixel attains its maximum sharpness, Fm−1, Fm and Fm+1 are the sharpness values at the same coordinate position in the previous frame, that frame and the next frame, satisfying Fm ≥ Fm−1 and Fm ≥ Fm+1, and dm, dm−1 and dm+1 are the sequence numbers of the sharpest frame and its two adjacent frames.
7. The method for three-dimensional reconstruction of microstructure of nonwoven fabric according to claim 1, wherein said step S3 specifically comprises the steps of:
s31, processing the depth map according to a region growing algorithm to divide a plurality of single fibers;
S32, marking the fiber connected domains of each single fiber according to a boundary tracking algorithm;
and S33, joining the fiber connected domains to complete the repair of the missing parts caused by occlusion.
8. The method for three-dimensional reconstruction of microstructure of nonwoven fabric according to claim 1, wherein said step S4 specifically comprises the steps of:
S41, extracting the fiber skeleton of the fiber structure from the repaired single fibers by an iterative thinning algorithm;
S42, removing the branches of the fiber skeleton to obtain the fiber central axis on the two-dimensional image;
S43, calculating the depth value of the fiber central axis in three-dimensional space from the two-dimensional image;
S44, fitting the depth values with a polynomial curve-fitting function to obtain the ordinate of the fiber central axis;
and S45, calculating the distance between the fiber central axis and the fiber edge as the fiber radius, based on the ordinate of the central axis.
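The radius measurement of step S45 (distance from a central-axis pixel to the fiber edge) might look like the sketch below. The brute-force nearest-background search stands in for the distance transform that would normally be used, and `mask`/`axis_pixels` are hypothetical inputs:

```python
def fiber_radius(mask, axis_pixels):
    """Radius at each central-axis pixel of a binary fiber mask.

    `mask` is a 2-D 0/1 list of lists (1 = fiber); `axis_pixels` is a
    list of (row, col) central-axis coordinates.  The radius is the
    Euclidean distance to the nearest background (0) pixel.
    """
    h, w = len(mask), len(mask[0])
    bg = [(r, c) for r in range(h) for c in range(w) if not mask[r][c]]
    return [min(((r - br) ** 2 + (c - bc) ** 2) ** 0.5 for br, bc in bg)
            for r, c in axis_pixels]
```

For large images, `scipy.ndimage.distance_transform_edt` computes the same quantity for every fiber pixel in one pass.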
9. The three-dimensional reconstruction method of a microstructure of a nonwoven fabric according to claim 8, wherein in step S41 the repaired single-fiber image is binarized to obtain a binary image, and the fiber skeleton is extracted from the binary image by an iterative thinning algorithm that deletes pixels on the object boundary. Each iteration comprises a first sub-iteration and a second sub-iteration: in the first sub-iteration a pixel is deleted if and only if conditions G1, G2 and G3 are all satisfied; in the second sub-iteration a pixel is deleted if and only if conditions G1, G2 and G3' are all satisfied. The conditions G1, G2, G3 and G3' are specifically as follows:
determination condition G1:

XH(p) = 1

where p is a pixel on the object boundary and XH(p) satisfies:

XH(p) = b1 + b2 + b3 + b4,

bi = 1 if x(2i-1) = 0 and (x(2i) = 1 or x(2i+1) = 1), and bi = 0 otherwise;

determination condition G2:

2 ≤ min{n1(p), n2(p)} ≤ 3

wherein n1(p) and n2(p) satisfy:

n1(p) = (x1 OR x2) + (x3 OR x4) + (x5 OR x6) + (x7 OR x8),

n2(p) = (x2 OR x3) + (x4 OR x5) + (x6 OR x7) + (x8 OR x1);

determination condition G3:

(x2 OR x3 OR (NOT x8)) AND x1 = 0

determination condition G3':

(x6 OR x7 OR (NOT x4)) AND x5 = 0

wherein x1, x2, ..., x8 are the values of the eight neighbours of p, numbered in anticlockwise order starting from the right-hand neighbour; indices are taken modulo 8, so that x9 = x1.
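A deletion test implementing G1, G2, G3 and G3' (the standard form given in the thinning survey cited in the non-patent references) might look like:

```python
def guo_hall_delete(x, first_pass):
    """Decide whether a boundary pixel may be deleted.

    `x` lists the eight neighbour values x1..x8 (x[0] = x1, ...),
    numbered anticlockwise from the right-hand neighbour; indices
    wrap so that x9 = x1.  `first_pass` selects G3 vs G3'.
    """
    n = lambda i: x[(i - 1) % 8]          # 1-based, wrapping access
    # G1: XH(p) = 1 (exactly one 0 -> 1 crossing pattern)
    b = sum(1 for i in (1, 2, 3, 4)
            if n(2 * i - 1) == 0 and (n(2 * i) == 1 or n(2 * i + 1) == 1))
    g1 = (b == 1)
    # G2: 2 <= min(n1, n2) <= 3
    n1 = sum(n(2 * k - 1) | n(2 * k) for k in (1, 2, 3, 4))
    n2 = sum(n(2 * k) | n(2 * k + 1) for k in (1, 2, 3, 4))
    g2 = 2 <= min(n1, n2) <= 3
    # G3 in the first sub-iteration, G3' in the second
    if first_pass:
        g3 = ((n(2) | n(3) | (1 - n(8))) & n(1)) == 0
    else:
        g3 = ((n(6) | n(7) | (1 - n(4))) & n(5)) == 0
    return g1 and g2 and g3
```

A pixel whose left-hand side is fiber (x4 = x5 = x6 = 1) lies on the right boundary and is removable in the first sub-iteration but not the second, which is how the two sub-iterations peel opposite sides alternately.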
10. The method of claim 8, wherein the polynomial curve-fitting function of step S44 is expressed as follows:

p(x) = p1*x^n + p2*x^(n-1) + ... + pn*x + p(n+1)

wherein n is the order of the polynomial and p1, p2, ..., p(n+1) are constant coefficients; the ordinate of each pixel of the fiber central axis on the fitted curve is taken as the ordinate of the fiber central axis in three-dimensional space.
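This fit maps directly onto `numpy.polyfit`, whose coefficient order (highest power first) matches p1..p(n+1) in claim 10. A sketch with hypothetical axis depths chosen to lie exactly on a parabola so the recovered coefficients are obvious:

```python
import numpy as np

# Hypothetical depth samples along the fiber central axis; here they
# lie exactly on z = 3x^2 + 2x + 1 so the fit is easy to check.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
zs = 3 * xs**2 + 2 * xs + 1

n = 2                                   # polynomial order
coeffs = np.polyfit(xs, zs, n)          # [p1, p2, p3], highest power first
fitted = np.polyval(coeffs, xs)         # smoothed ordinate per axis pixel
```

In practice n is chosen to follow the gentle undulation of a fiber without chasing depth-map noise; the fitted ordinates replace the raw depth values of the central axis.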
CN202110258327.9A 2021-03-10 2021-03-10 Three-dimensional reconstruction method for microstructure of non-woven fabric Pending CN112991518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110258327.9A CN112991518A (en) 2021-03-10 2021-03-10 Three-dimensional reconstruction method for microstructure of non-woven fabric


Publications (1)

Publication Number Publication Date
CN112991518A true CN112991518A (en) 2021-06-18

Family

ID=76334677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110258327.9A Pending CN112991518A (en) 2021-03-10 2021-03-10 Three-dimensional reconstruction method for microstructure of non-woven fabric

Country Status (1)

Country Link
CN (1) CN112991518A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102937561A (en) * 2012-10-17 2013-02-20 西北工业大学 Determination method for orthogonal non-woven three-dimensional rectangular fabric permeability
CN105395215A (en) * 2015-12-30 2016-03-16 中国科学院声学研究所东海研究站 Ultrasonic imaging device and method
CN107514984A (en) * 2017-07-07 2017-12-26 南京航空航天大学 A kind of 3 d surface topography measuring method and system based on optical microphotograph
CN109345493A (en) * 2018-09-05 2019-02-15 上海工程技术大学 A kind of method of non-woven cloth multi-focal-plane image co-registration
CN112164085A (en) * 2020-09-28 2021-01-01 华南理工大学 Fiber image segmentation and diameter statistical method based on image processing


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LINGJIE YU, GUANLIN WANG, CHAO ZHI, BUGAO XU: "3D Web Reconstruction of a Fibrous Filter Using Sequential Multi-Focus Images", CMES - Computer Modeling in Engineering & Sciences, 20 May 2019 (2019-05-20), pages 365-372 *
LOUISA LAM, SEONG-WHAN LEE, CHING Y. SUEN: "Thinning Methodologies - A Comprehensive Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1 September 1992 (1992-09-01), pages 869-882 *
ZHOU JINFENG: "Multi-focal-plane image fusion and its application in digital textile inspection", China Doctoral Dissertations Full-text Database, Information Science and Technology series, 15 February 2019 (2019-02-15), pages 2-3 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113640326A (en) * 2021-08-18 2021-11-12 华东理工大学 Multistage mapping reconstruction method for nano-pore resin-based composite material micro-nano structure
CN113640326B (en) * 2021-08-18 2023-10-10 华东理工大学 Multistage mapping reconstruction method for micro-nano structure of nano-porous resin matrix composite material
CN115049791A (en) * 2022-08-12 2022-09-13 山东鲁晟精工机械有限公司 Numerical control lathe workpiece three-dimensional modeling method combined with image processing
CN115049791B (en) * 2022-08-12 2022-11-04 山东鲁晟精工机械有限公司 Numerical control lathe workpiece three-dimensional modeling method combined with image processing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618