CN111709992B - Method for positioning crossing position of warp and weft yarns for weaving ceramic matrix composite material


Info

Publication number
CN111709992B
Authority
CN
China
Prior art keywords
warp
lines
crossing
weft yarns
intersection
Prior art date
Legal status
Active
Application number
CN202010540033.0A
Other languages
Chinese (zh)
Other versions
CN111709992A (en)
Inventor
贾蕴发
高希光
宋迎东
于国强
方光武
张盛
董洪年
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202010540033.0A
Publication of CN111709992A
Application granted
Publication of CN111709992B

Classifications

    • G06T 7/70 — Image analysis; determining position or orientation of objects or cameras
    • G01N 23/046 — Investigating or analysing materials by transmitting radiation (e.g. X-rays) through the material and forming images using tomography, e.g. computed tomography [CT]
    • G06F 18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 10/50 — Extraction of image or video features using histograms, e.g. histogram of oriented gradients [HoG], or projection analysis
    • G06T 2207/10116 — Image acquisition modality: X-ray image
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/30108 — Industrial image inspection
    • G06T 2207/30124 — Fabrics; Textile; Paper

Abstract

A method for locating the warp-weft yarn crossings of a woven ceramic matrix composite first uses a histogram of oriented gradients combined with a support vector machine to locate the warp-weft crossing regions over the whole XCT slice, each crossing region being detected several times. Hough transform is then used to obtain the lines at the angles corresponding to the warp and weft yarns in each sub-picture, candidate vertices of the crossings are obtained from the line-intersection principle, and the four vertices of each warp-weft crossing region are extracted from these candidates according to the characteristics of the crossing position. The positioning method provided by the invention further improves the identification accuracy of warp-weft crossings, can be applied to continuous XCT slices of plain-weave, 2.5D-weave and other CMCs, and is widely applicable and simple to operate.

Description

Method for positioning crossing position of warp and weft yarns for weaving ceramic matrix composite material
Technical Field
The invention belongs to the field of mesoscopic structure identification of woven ceramic matrix composites, and in particular relates to a method for locating the warp-weft yarn crossings of a woven ceramic matrix composite.
Background
Ceramic matrix composites (CMCs) consist of continuous reinforcing fiber bundles, interfaces and a matrix. Compared with metal matrix composites they offer lower density and better temperature and corrosion resistance, which makes them ideal candidate materials for the high-temperature hot-end components of aero-engines. Woven CMCs with complex preform structures have received much attention owing to their excellent interlaminar shear resistance, impact resistance and fracture toughness.
X-ray computed tomography (XCT) is a nondestructive testing technique that can clearly reveal the internal structure of a material without damaging the object; scanning the interior of a plain-weave structural material with XCT yields the spatial distribution of the different constituent phases. Because the continuous reinforcing fiber bundles run in different directions, the interior of the material can be divided into horizontal warp yarns and vertical weft yarns. Studies have shown that under fatigue loading, fatigue damage initiates at the warp-weft crossings, which are also typical locations of fiber-on-fiber wear, and that during monotonic tension most warp yarns break at the warp-weft crossings. Accurately locating the warp-weft crossings is therefore important in the identification process. At present, however, woven-CMC XCT slices are generally identified by feeding the whole slice into a classifier of the mesoscopic structure, and the crossing positions of warp and weft yarns are often identified inaccurately.
In summary, no method has been reported for locating the warp-weft crossings in XCT slices of woven CMCs. It is therefore necessary to study the determination of warp-weft crossing positions in order to improve the accuracy of mesoscopic structure identification.
Disclosure of Invention
To address the deficiencies of the prior art, the invention provides a method for locating the warp-weft yarn crossings of a woven ceramic matrix composite. First, the regions of an XCT slice containing warp-weft crossings are located with a histogram of oriented gradients combined with a support vector machine, the most reliable regions are screened out by the overlapping area ratio and the classification accuracy, and several sub-pictures containing warp-weft crossings are output. Hough transform is then used to obtain the lines corresponding to the warp and weft yarns in each sub-picture, and candidate vertices of the crossings are obtained from the line-intersection principle. Finally, the four vertices of each warp-weft crossing are extracted from these candidates according to the characteristics of the crossing position, so that the method further improves the identification accuracy of warp-weft crossings.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for positioning the crossing of warp and weft yarns of a woven ceramic matrix composite material is characterized by comprising the following steps:
step 1: fitting the parameters of a support vector machine in combination with the histogram of oriented gradients algorithm to obtain a weight file meeting the classification accuracy requirement;
step 2: using the weight file, locating the regions of the whole XCT slice that contain warp-weft crossings, screening out the more reliable regions by the overlapping area ratio and the classification accuracy, and outputting several sub-pictures containing warp-weft crossings; obtaining the lines corresponding to the warp and weft yarns in each sub-picture by Hough transform, obtaining candidate vertices of the warp-weft crossings from the line-intersection principle, and extracting the four vertices of each warp-weft crossing from these candidates according to the characteristics of the crossing position.
In order to optimize the technical scheme, the specific measures adopted further comprise:
further, step 1 specifically includes the following steps:
step 1.1: constructing a plain-weave data set for training the support vector machine algorithm, the naming format of the pictures in the data set indicating whether a picture contains a warp-weft crossing;
step 1.2: randomly splitting the data set into a training set and a test set;
step 1.3: implementing the histogram of oriented gradients with the HOGDescriptor function in OpenCV, setting the image window, block size and cell size, and dividing the gradient angle into N bins;
step 1.4: scaling the pictures in the training set in turn to the size of the image window and extracting the HOG descriptor of each picture;
step 1.5: fitting the parameters of the support vector machine with the training set processed in step 1.4 and saving the trained weight file; predicting the classes of the test set with the weight file and calculating the classification accuracy; if the accuracy is higher than the threshold Accu_Thresh, using the weight file, otherwise further expanding the data set until a weight file meeting the accuracy requirement is obtained.
Further, in step 1.1, the pictures in the data set are named in the format "T_Cla.jpg", where T is the picture index, incremented from 0, and Cla indicates the picture category, taking the value 0 or 1; 1 denotes a picture containing a crossing and 0 a picture without a crossing.
Further, the step 2 specifically comprises the following steps:
step 2.1: establishing a sliding window so that every small region of the whole XCT slice can be analyzed;
step 2.2: scaling the image contained in the sliding window to the HOG window size, calculating its HOG descriptor, loading the trained weight file, and predicting whether the image in the sliding window belongs to a warp-weft crossing region to be searched; after the sliding window has traversed the whole XCT slice, finishing the prediction and exporting the coordinates of all predicted crossing regions;
step 2.3: calculating the overlapping area ratio between adjacent sliding windows from the exported coordinates of the crossing regions, and screening out the more reliable regions by the overlapping area ratio and the classification accuracy; then cutting out several sub-pictures containing warp-weft crossings according to the coordinates of the crossing regions retained after screening;
step 2.4: setting a pore threshold and resetting the pixels of the sub-picture according to it;
step 2.5: extracting the edges of the sub-picture with a Canny operator;
step 2.6: determining the angles corresponding to the warp and weft yarn sections and establishing M4 angle intervals; extracting the lines in each angle interval by Hough transform, plotting an analysis graph with the line angle on the horizontal axis and the intercept on the vertical axis, and screening, merging and deleting lines according to the analysis graph to obtain the qualifying lines;
step 2.7: separating the lines in the warp and weft directions;
step 2.8: obtaining candidate vertices of the warp-weft crossings from the line-intersection principle, and extracting the four vertices of each warp-weft crossing region from these candidates according to the characteristics of the crossing position.
Further, in step 2.1, the size of the sliding window is M times the size of the pictures in the original data set, where 1 < M < 2.
Further, in step 2.3, the overlapping area Area_Overlap between adjacent sliding windows is calculated from the exported coordinates of the crossing regions, a threshold Area_Thresh is set, and Area_Ratio is calculated according to the following formula:
Area_Ratio = Area_Overlap / (Sliding_X × Sliding_Y)
where [Sliding_X, Sliding_Y] is the size of the sliding window;
when Area_Ratio is greater than the threshold Area_Thresh, the accuracies of the two identifications are further compared, and the region with the higher accuracy is retained.
Further, in step 2.4, all pixels smaller than the pore threshold are set to 0, and pixels larger than the pore threshold are set to Fiber_Value, which represents the fiber gray value.
Further, in step 2.6, the M5 lines whose angle-intercept pairs have the strongest responses are extracted from each analysis graph, lines whose mutual distance is less than the distance threshold Dis_Thresh are merged, lines shorter than the length threshold Len_Thresh are deleted, and the qualifying lines are finally obtained.
Further, in step 2.7, the slope of each qualifying line is calculated; if the absolute value of the slope is greater than the slope threshold Slope_Thresh, the line belongs to a weft yarn, otherwise it belongs to a warp yarn.
Further, step 2.8 is specifically as follows:
1) obtaining the lines on the upper and lower sides of the warp yarns: comparing the coordinates of any two points of each warp line with the vertical center-point coordinate of the sub-picture; if they are smaller than the center-point coordinate, the line belongs to the upper side, otherwise to the lower side;
2) obtaining the lines on the left and right sides of the weft yarns: comparing the coordinates of any two points of each weft line with the horizontal center-point coordinate of the sub-picture; if they are smaller, the line belongs to the left side, otherwise to the right side;
3) obtaining the lower-left vertex of the crossing: extracting the angles and intercepts of the left-side weft lines and of the lower-side warp lines, obtaining their intersection points from the line-intersection principle, and taking the maximum abscissa and the minimum ordinate of these intersection points;
4) obtaining the upper-left vertex of the crossing: extracting the angles and intercepts of the left-side weft lines and of the upper-side warp lines, obtaining their intersection points, and taking the maximum abscissa and the maximum ordinate of these intersection points;
5) obtaining the upper-right vertex of the crossing: extracting the angles and intercepts of the right-side weft lines and of the upper-side warp lines, obtaining their intersection points, and taking the minimum abscissa and the maximum ordinate of these intersection points;
6) obtaining the lower-right vertex of the crossing: extracting the angles and intercepts of the right-side weft lines and of the lower-side warp lines, obtaining their intersection points, and taking the minimum abscissa and the minimum ordinate of these intersection points;
7) through the above steps, all four vertex coordinates of the warp-weft crossing region are obtained.
The invention has the following beneficial effects: the method for locating warp-weft crossings provided by the invention further improves the identification accuracy of warp-weft crossings, can be applied to continuous XCT slices of plain-weave, 2.5D-weave and other CMCs, and is widely applicable and simple to operate.
Drawings
FIG. 1 shows data set images and the corresponding naming format.
Fig. 2a shows the extraction result on an application picture for the first part.
Fig. 2b shows the result after refinement by the overlapping area ratio.
Fig. 3a shows a sub-picture selected in the second part.
Fig. 3b shows the edges extracted from the sub-picture.
Fig. 4a shows the intercept-angle analysis plot extracted by Hough transform for the first angle interval [-5, 5].
Fig. 4b shows the intercept-angle analysis plot extracted by Hough transform for the second angle interval [-90, -85].
Fig. 4c shows the intercept-angle analysis plot extracted by Hough transform for the third angle interval [85, 89.5].
FIG. 5a shows the lower-left vertex coordinates of the warp-weft crossing region, indicated by dots.
FIG. 5b shows the upper-left vertex coordinates of the warp-weft crossing region, indicated by dots.
Fig. 6 shows the four vertex coordinates of the warp-weft crossing regions, indicated by dots.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
A first part:
step one, constructing a plain weave data set for training an algorithm of a support vector machine, wherein the number of the plain weave data set is M, and the Size of the plain weave data set is [ Size _ X, Size-Y ]. Pictures in the dataset are each named in "T _ cla. jpg" format. As shown in fig. 1, T is a positive integer representing the number of pictures, and is sequentially incremented from 0; cla is a number indicating a picture category, and takes a value of 0 or 1, 1 is a picture belonging to a cross, and 0 is a picture not crossing. In the example, the total number of pictures in the data set is 200, and the size is [150, 120 ].
Step two: randomly split the data set into a training set and a test set at a ratio of 4:1, i.e. 160 and 40 pictures respectively.
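For illustration only, the following Python sketch (not part of the patent) shows one way steps one and two could be implemented: the class label is parsed from the "T_Cla.jpg" file name and the data set is split 4:1 at random. The folder path and function names are placeholders.

```python
import os
import random
import cv2

def load_dataset(folder):
    """Load plain-weave patches named 'T_Cla.jpg', where Cla (0/1) is the class label."""
    samples = []
    for name in os.listdir(folder):
        if not name.lower().endswith(".jpg"):
            continue
        stem = os.path.splitext(name)[0]          # e.g. "37_1"
        label = int(stem.split("_")[1])           # 1 = crossing, 0 = non-crossing
        image = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
        samples.append((image, label))
    return samples

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Randomly split the samples into training and test sets (4:1 by default)."""
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]
```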
Step three: implement the histogram of oriented gradients with the HOGDescriptor function in OpenCV; set the image window to [Winsize, Winsize], the block size to [Blocksize, Blocksize] and the cell size to [Cellsize, Cellsize], divide the gradient angle into N bins, and leave the remaining parameters at their internal default values. In the embodiment, the image window is set to (88, 88), the block size to (44, 44), the cell size to (22, 22), and the angle is divided into 9 bins.
Step four: scale the training pictures in turn to the window size [Winsize, Winsize] = (88, 88), and extract the HOG descriptor of each picture.
Step five: fit the parameters of the support vector machine with the training set and save the trained weight file. Use the weight file to predict the classes of the test set and compute the classification accuracy; if the accuracy exceeds the threshold Accu_Thresh (here 0.94), the weight file is used in the actual application, otherwise the data set is further expanded. The extraction results on an application picture are shown in fig. 2a and 2b.
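The training procedure of steps three to five might be sketched as follows with OpenCV's HOGDescriptor and SVM modules, using the (image, label) pairs from the previous sketch. The block stride of (22, 22) and the linear kernel are assumptions that the patent does not fix.

```python
import cv2
import numpy as np

# HOG settings from the embodiment: 88x88 window, 44x44 blocks, 22x22 cells, 9 angle bins.
# The 22x22 block stride is an assumption (the patent leaves the remaining parameters at defaults).
hog = cv2.HOGDescriptor((88, 88), (44, 44), (22, 22), (22, 22), 9)

def hog_features(samples):
    """Scale each patch to the HOG window size and compute its descriptor."""
    feats, labels = [], []
    for image, label in samples:
        patch = cv2.resize(image, (88, 88))
        feats.append(hog.compute(patch).ravel())
        labels.append(label)
    return np.array(feats, np.float32), np.array(labels, np.int32)

def train_svm(train_samples, test_samples, accu_thresh=0.94, weight_file="hog_svm_weights.xml"):
    """Fit an SVM on HOG features and save the weights only if the test accuracy is high enough."""
    x_train, y_train = hog_features(train_samples)
    x_test, y_test = hog_features(test_samples)
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)        # kernel choice is an assumption
    svm.train(x_train, cv2.ml.ROW_SAMPLE, y_train)
    predicted = svm.predict(x_test)[1].ravel()
    accuracy = float(np.mean(predicted == y_test))
    if accuracy > accu_thresh:
        svm.save(weight_file)
    return svm, accuracy
```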
A second part:
and step six, constructing a sliding window. In an actual application scenario, the XCT slice is applied to an XCT slice including a plurality of intersections or non-intersections, and therefore a sliding window needs to be established so that each small region of the picture can be analyzed in the actual application. The size of the Sliding window is slightly larger than the picture in the original data set, and is M (1 < M < 2), the size of the Sliding window is [ Sliding _ X, Sliding _ Y ], the Step size of the Sliding window is Step _ Sliding, the coordinate of the upper left corner of the window is [ X, Y ], and the coordinate of the lower right corner is [ X + Sliding _ X, Y + Sliding _ Y ]. In an embodiment, the size of the sliding window is [220, 220], the step size of the sliding window is 20, the coordinates of the upper left corner of the window are [0, 0], and the coordinates of the lower right corner are [220, 220 ].
Step seven: scale the image contained in the sliding window to the HOG window size [Winsize, Winsize] = (88, 88), compute its HOG descriptor, and load the trained weight file to predict whether the image in the sliding window belongs to a warp-weft crossing region to be searched. After the sliding window has traversed the whole XCT slice, finish the prediction and export the coordinates of all predicted crossing regions.
Step eight: because the step of adjacent sliding windows is smaller than a whole crossing region, the same crossing may be predicted repeatedly. From the exported coordinates of the crossing regions, compute the overlapping area Area_Overlap between adjacent detections and set a threshold Area_Thresh; the overlapping area ratio Area_Ratio is calculated according to the following formula. When Area_Ratio is greater than the threshold, the accuracies of the two identifications are further compared and the region with the higher accuracy is kept, giving the classification result for the whole picture. Several sub-pictures containing warp-weft crossings are then cut out according to the retained coordinates of the crossing regions, as shown in fig. 3a;
Area_Ratio = Area_Overlap / (Sliding_X × Sliding_Y)
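A minimal sketch of the sliding-window search of steps six to eight is given below, reusing the hog and svm objects from the previous sketch (an assumption). Area_Ratio is taken as the overlap area divided by the window area; the patent keeps whichever overlapping detection was classified more accurately, a comparison left abstract here (the earlier detection is simply kept).

```python
import cv2
import numpy as np

def detect_crossings(slice_img, svm, hog, win=(220, 220), step=20):
    """Slide a window over the XCT slice and keep windows classified as warp-weft crossings."""
    detections = []                                  # upper-left corners (x, y)
    h, w = slice_img.shape[:2]
    for y in range(0, h - win[1] + 1, step):
        for x in range(0, w - win[0] + 1, step):
            patch = cv2.resize(slice_img[y:y + win[1], x:x + win[0]], (88, 88))
            feat = hog.compute(patch).reshape(1, -1).astype(np.float32)
            if int(svm.predict(feat)[1].ravel()[0]) == 1:
                detections.append((x, y))
    return detections

def filter_overlaps(detections, win=(220, 220), area_thresh=0.5):
    """Drop a detection when its Area_Ratio with an already kept window exceeds Area_Thresh.

    Area_Ratio = Area_Overlap / (Sliding_X * Sliding_Y). The patent keeps the detection
    classified with the higher accuracy; this sketch simply retains the earlier one.
    """
    kept = []
    for x, y in detections:
        redundant = False
        for kx, ky in kept:
            ox = max(0, win[0] - abs(x - kx))        # overlap extent in x
            oy = max(0, win[1] - abs(y - ky))        # overlap extent in y
            if ox * oy / float(win[0] * win[1]) > area_thresh:
                redundant = True
                break
        if not redundant:
            kept.append((x, y))
    return kept
```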
and step nine, setting a pore threshold value. And (3) counting the pixels of the pore area of the sub-picture, randomly taking M3 pixels with pixel values smaller than Hole _ Thresh, determining whether all the pixels are pores according to manual comparison, setting the Value as a pore threshold, changing all the pixels smaller than the Value into 0, and changing the pixels larger than the Value into Fiber _ Value. In the embodiment, 200 pixels with pixel values smaller than 20 are randomly selected, all the pixels are determined to be pores according to manual comparison, 20 is set as a pore threshold, all the pixels smaller than the value are changed to be 0, and the pixels larger than the value are changed to be 128.
Step ten: extract the edges of the sub-picture with the Canny operator, as shown in fig. 3b.
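Steps nine and ten might look like the following short sketch; the Canny hysteresis thresholds (50, 150) are illustrative values not given in the patent.

```python
import cv2
import numpy as np

def preprocess_subpicture(sub, hole_thresh=20, fiber_value=128):
    """Suppress pores and extract yarn edges: pixels below the pore threshold become 0,
    the rest are set to the fiber gray value, then the Canny operator extracts edges."""
    cleaned = np.where(sub < hole_thresh, 0, fiber_value).astype(np.uint8)
    edges = cv2.Canny(cleaned, 50, 150)   # hysteresis thresholds are illustrative
    return cleaned, edges
```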
Step eleven: from manual analysis, determine the approximate angles corresponding to the warp and weft yarn sections and establish M4 angle intervals. Extract the lines in each angle interval by Hough transform, and plot an analysis graph with the line angle on the horizontal axis and the intercept on the vertical axis. From each analysis graph, extract the M5 lines whose angle-intercept pairs have the strongest responses, merge lines whose mutual distance is less than Dis_Thresh, and delete lines shorter than Len_Thresh, giving Num_Lines qualifying lines. In the embodiment, three angle intervals [(-5, 5), (-90, -85), (85, 89.5)] were established, as shown in figs. 4a, 4b and 4c; the 4 lines with the strongest responses were extracted from each analysis graph, lines closer than 1000 were merged and lines shorter than 100 were deleted, giving 10 qualifying lines.
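A simplified stand-in (not the patent's exact procedure) for the Hough-based extraction of step eleven: segments are obtained with cv2.HoughLinesP, kept only if their angle falls in one of the embodiment's intervals, and filtered by length; the duplicate-merging rule (similar angle, nearby intercept) is an assumption replacing the angle-intercept analysis graph, and the Hough accumulator threshold is illustrative.

```python
import cv2
import numpy as np

ANGLE_INTERVALS = [(-5, 5), (-90, -85), (85, 89.5)]   # degrees, from the embodiment

def extract_yarn_lines(edges, dist_thresh=1000, len_thresh=100, top_k=4):
    """Extract candidate yarn lines from the Canny edge map and drop near-duplicates."""
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=len_thresh, maxLineGap=10)
    if segments is None:
        return []
    lines = []                                         # (angle_deg, intercept, length, segment)
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if angle > 90:                                 # normalise direction to (-90, 90]
            angle -= 180
        elif angle <= -90:
            angle += 180
        if not any(lo <= angle <= hi for lo, hi in ANGLE_INTERVALS):
            continue
        if abs(angle) < 45:                            # near-horizontal: y-intercept of y = kx + b
            intercept = y1 - np.tan(np.radians(angle)) * x1
        else:                                          # near-vertical: x-intercept of x = y/k + b
            intercept = x1 - y1 / np.tan(np.radians(angle))
        length = float(np.hypot(x2 - x1, y2 - y1))
        lines.append((float(angle), float(intercept), length, (x1, y1, x2, y2)))
    lines.sort(key=lambda l: -l[2])                    # prefer longer segments
    kept = []
    for ln in lines:
        if all(abs(ln[0] - k[0]) > 10 or abs(ln[1] - k[1]) >= dist_thresh for k in kept):
            kept.append(ln)
    return kept[:top_k * len(ANGLE_INTERVALS)]
```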
Step twelve: separate the lines in the warp and weft directions. Compute the slope of each retained line; if the absolute value of the slope is greater than Slope_Thresh (here 6), the line belongs to a weft yarn, otherwise to a warp yarn.
Step thirteen: obtain the lines on the upper and lower sides of the warp yarns. Compare the coordinates of any two points of each warp line with the vertical center-point coordinate of the sub-picture; if they are smaller than the center-point coordinate, the line belongs to the upper side, otherwise to the lower side. In the embodiment, 2 upper-side and 4 lower-side warp lines are determined.
Step fourteen: obtain the lines on the left and right sides of the weft yarns. Compare the coordinates of any two points of each weft line with the horizontal center-point coordinate of the sub-picture; if they are smaller, the line belongs to the left side, otherwise to the right side. In the embodiment, 2 left-side and 2 right-side weft lines are determined.
Step fifteen: obtain the lower-left vertex of the crossing. As shown in fig. 5a, extract the angles and intercepts of the left-side lines of the two weft yarns and of the lower-side lines of the four warp yarns; obtain the resulting eight intersection points from the line-intersection principle and take the maximum abscissa and the minimum ordinate among them.
Step sixteen: obtain the upper-left vertex of the crossing. As shown in fig. 5b, extract the angles and intercepts of the left-side lines of the two weft yarns and of the upper-side lines of the two warp yarns; obtain the resulting four intersection points and take the maximum abscissa and the maximum ordinate among them.
Step seventeen: obtain the upper-right vertex of the crossing. Extract the angles and intercepts of the right-side lines of the two weft yarns and of the upper-side lines of the two warp yarns; obtain the resulting four intersection points and take the minimum abscissa and the maximum ordinate among them.
Step eighteen: obtain the lower-right vertex of the crossing. Extract the angles and intercepts of the right-side lines of the two weft yarns and of the lower-side lines of the four warp yarns; obtain the resulting eight intersection points and take the minimum abscissa and the minimum ordinate among them.
Step nineteen: by the above procedure, all vertex coordinates of the warp-weft crossing are obtained, as shown in fig. 6.
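Steps twelve to nineteen can be sketched as below, working directly on the Hough segments as point pairs rather than on angle-intercept pairs. The use of the image y coordinate for the upper/lower grouping and x for left/right is this sketch's reading of the center-point comparison, while the min/max choice for each vertex follows the patent's wording; none of this is a definitive implementation.

```python
import numpy as np

def intersect(p1, p2, p3, p4):
    """Intersection point of the infinite lines through (p1, p2) and (p3, p4)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                                   # parallel lines
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def crossing_vertices(segments, sub_shape, slope_thresh=6):
    """Split segments into warp/weft and side groups, then pick the four crossing vertices
    from the pairwise intersections; sub_shape is the (height, width) of the sub-picture."""
    cy, cx = sub_shape[0] / 2.0, sub_shape[1] / 2.0
    warp, weft = [], []
    for (x1, y1), (x2, y2) in segments:
        slope = abs((y2 - y1) / (x2 - x1)) if x2 != x1 else float("inf")
        (weft if slope > slope_thresh else warp).append(((x1, y1), (x2, y2)))
    warp_up = [l for l in warp if l[0][1] < cy and l[1][1] < cy]
    warp_lo = [l for l in warp if l not in warp_up]
    weft_left = [l for l in weft if l[0][0] < cx and l[1][0] < cx]
    weft_right = [l for l in weft if l not in weft_left]

    def pick(weft_side, warp_side, x_ext, y_ext):
        pts = [p for a in weft_side for b in warp_side if (p := intersect(*a, *b))]
        return (x_ext(p[0] for p in pts), y_ext(p[1] for p in pts)) if pts else None

    return {                                          # vertex name: (x, y)
        "lower_left":  pick(weft_left,  warp_lo, max, min),
        "upper_left":  pick(weft_left,  warp_up, max, max),
        "upper_right": pick(weft_right, warp_up, min, max),
        "lower_right": pick(weft_right, warp_lo, min, min),
    }
```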
The above is only a preferred embodiment of the present invention; the protection scope of the invention is not limited to the above embodiment, and all technical solutions falling within the idea of the invention belong to its protection scope. It should be noted that those skilled in the art may make modifications and refinements without departing from the principle of the invention, and these also fall within the protection scope of the invention.

Claims (9)

1. A method for positioning the crossing of warp and weft yarns of a woven ceramic matrix composite material is characterized by comprising the following steps:
step 1: fitting the parameters of a support vector machine in combination with the histogram of oriented gradients algorithm to obtain a weight file meeting the classification accuracy requirement;
step 2: using the weight file, locating the regions of the whole XCT slice that contain warp-weft crossings, screening out the more reliable regions by the overlapping area ratio and the classification accuracy, and outputting several sub-pictures containing warp-weft crossings; obtaining the lines corresponding to the warp and weft yarns in each sub-picture by Hough transform, obtaining candidate vertices of the warp-weft crossings from the line-intersection principle, and extracting the four vertices of each warp-weft crossing from these candidates according to the characteristics of the crossing position; step 2 specifically comprises the following steps:
step 2.1: establishing a sliding window so that every small region of the whole XCT slice can be analyzed;
step 2.2: scaling the image contained in the sliding window to the HOG window size, calculating its HOG descriptor, loading the trained weight file, and predicting whether the image in the sliding window belongs to a warp-weft crossing region to be searched; after the sliding window has traversed the whole XCT slice, finishing the prediction and exporting the coordinates of all predicted crossing regions;
step 2.3: calculating the overlapping area ratio between adjacent sliding windows from the exported coordinates of the crossing regions, and screening out the more reliable regions by the overlapping area ratio and the classification accuracy; then cutting out several sub-pictures containing warp-weft crossings according to the coordinates of the crossing regions retained after screening; in step 2.3, the overlapping area Area_Overlap between adjacent sliding windows is calculated from the exported coordinates of the crossing regions and a threshold Area_Thresh is set; when Area_Ratio is greater than the threshold Area_Thresh, the accuracies of the two identifications are further compared and the region with the higher accuracy is retained;
step 2.4: setting a pore threshold and resetting the pixels of the sub-picture according to it;
step 2.5: extracting the edges of the sub-picture with a Canny operator;
step 2.6: determining the angles corresponding to the warp and weft yarn sections and establishing M4 angle intervals; extracting the lines in each angle interval by Hough transform, plotting an analysis graph with the line angle on the horizontal axis and the intercept on the vertical axis, and screening, merging and deleting lines according to the analysis graph to obtain the qualifying lines;
step 2.7: separating the lines in the warp and weft directions;
step 2.8: obtaining candidate vertices of the warp-weft crossings from the line-intersection principle, and extracting the four vertices of each warp-weft crossing region from these candidates according to the characteristics of the crossing position.
2. The method of claim 1 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: the step 1 specifically comprises the following steps:
step 1.1: constructing a plain-weave data set for training the support vector machine algorithm, the naming format of the pictures in the data set indicating whether a picture contains a warp-weft crossing;
step 1.2: randomly splitting the data set into a training set and a test set;
step 1.3: implementing the histogram of oriented gradients with the HOGDescriptor function in OpenCV, setting the image window, block size and cell size, and dividing the gradient angle into N bins;
step 1.4: scaling the pictures in the training set in turn to the size of the image window and extracting the HOG descriptor of each picture;
step 1.5: fitting the parameters of the support vector machine with the training set processed in step 1.4 and saving the trained weight file; predicting the classes of the test set with the weight file and calculating the classification accuracy; if the accuracy is higher than the threshold Accu_Thresh, using the weight file, otherwise further expanding the data set until a weight file meeting the accuracy requirement is obtained.
3. The method of claim 2 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: in step 1.1, the pictures in the data set are named in the format "T_Cla.jpg", where T is the picture index, incremented from 0, and Cla indicates the picture category, taking the value 0 or 1; 1 denotes a picture containing a crossing and 0 a picture without a crossing.
4. The method of claim 1 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: in step 2.1, the size of the sliding window is M times the size of the pictures in the original data set, where 1 < M < 2.
5. The method of claim 1 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: in step 2.3, Area_Ratio is calculated according to the following formula:
Area_Ratio = Area_Overlap / (Sliding_X × Sliding_Y)
where [Sliding_X, Sliding_Y] is the size of the sliding window.
6. The method of claim 1 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: in step 2.4, all pixels smaller than the pore threshold are set to 0, and pixels larger than the pore threshold are set to Fiber_Value, which represents the fiber gray value.
7. The method of claim 1 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: in step 2.6, the M5 lines whose angle-intercept pairs have the strongest responses are extracted from each analysis graph, lines whose mutual distance is less than the distance threshold Dis_Thresh are merged, lines shorter than the length threshold Len_Thresh are deleted, and the qualifying lines are finally obtained.
8. The method of claim 1 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: in step 2.7, the slope of each qualifying line is calculated; if the absolute value of the slope is greater than the slope threshold Slope_Thresh, the line belongs to a weft yarn, otherwise it belongs to a warp yarn.
9. The method of claim 1 for locating the intersections of warp and weft yarns in a woven ceramic matrix composite material, wherein: step 2.8 is specifically as follows:
1) obtaining the lines on the upper and lower sides of the warp yarns: comparing the coordinates of any two points of each warp line with the vertical center-point coordinate of the sub-picture; if they are smaller than the center-point coordinate, the line belongs to the upper side, otherwise to the lower side;
2) obtaining the lines on the left and right sides of the weft yarns: comparing the coordinates of any two points of each weft line with the horizontal center-point coordinate of the sub-picture; if they are smaller, the line belongs to the left side, otherwise to the right side;
3) obtaining the lower-left vertex of the crossing: extracting the angles and intercepts of the left-side weft lines and of the lower-side warp lines, obtaining their intersection points from the line-intersection principle, and taking the maximum abscissa and the minimum ordinate of these intersection points;
4) obtaining the upper-left vertex of the crossing: extracting the angles and intercepts of the left-side weft lines and of the upper-side warp lines, obtaining their intersection points, and taking the maximum abscissa and the maximum ordinate of these intersection points;
5) obtaining the upper-right vertex of the crossing: extracting the angles and intercepts of the right-side weft lines and of the upper-side warp lines, obtaining their intersection points, and taking the minimum abscissa and the maximum ordinate of these intersection points;
6) obtaining the lower-right vertex of the crossing: extracting the angles and intercepts of the right-side weft lines and of the lower-side warp lines, obtaining their intersection points, and taking the minimum abscissa and the minimum ordinate of these intersection points;
7) through the above steps, all four vertex coordinates of the warp-weft crossing region are obtained.
CN202010540033.0A 2020-06-12 2020-06-12 Method for positioning crossing position of warp and weft yarns for weaving ceramic matrix composite material Active CN111709992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010540033.0A CN111709992B (en) 2020-06-12 2020-06-12 Method for positioning crossing position of warp and weft yarns for weaving ceramic matrix composite material

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010540033.0A CN111709992B (en) 2020-06-12 2020-06-12 Method for positioning crossing position of warp and weft yarns for weaving ceramic matrix composite material

Publications (2)

Publication Number Publication Date
CN111709992A CN111709992A (en) 2020-09-25
CN111709992B true CN111709992B (en) 2021-09-28

Family

ID=72540220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010540033.0A Active CN111709992B (en) 2020-06-12 2020-06-12 Method for positioning crossing position of warp and weft yarns for weaving ceramic matrix composite material

Country Status (1)

Country Link
CN (1) CN111709992B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112250462B (en) * 2020-10-30 2021-06-29 南京航空航天大学 Method for segmenting single-bundle fiber of woven ceramic matrix composite and reconstructing model
CN115797268B (en) * 2022-11-14 2023-10-13 南京玻璃纤维研究设计院有限公司 Preform vision detection device and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160376691A1 (en) * 2015-05-27 2016-12-29 University Of Virginia Patent Foundation Multilayered thermal and environmental barrier coating (ebc) for high temperature applications and method thereof
CN109902755B (en) * 2019-03-05 2019-10-11 南京航空航天大学 A kind of multi-layer information sharing and correcting method for XCT slice
CN109919080B (en) * 2019-03-05 2019-10-11 南京航空航天大学 More full convolutional neural networks of decoder and its corresponding microscopical structure recognition methods
CN110348429B (en) * 2019-08-02 2020-06-16 南京航空航天大学 Method for identifying warps and wefts of plain weave structure

Also Published As

Publication number Publication date
CN111709992A (en) 2020-09-25

Similar Documents

Publication Publication Date Title
US10540760B2 (en) Method of separating, identifying and characterizing cracks in 3D space
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN106202728B (en) Based on the non-homogeneous Voxel grid discrete method of Micro-CT D braided composites
CN111709992B (en) Method for positioning crossing position of warp and weft yarns for weaving ceramic matrix composite material
CN109214420A (en) The high texture image classification method and system of view-based access control model conspicuousness detection
Barraud The use of watershed segmentation and GIS software for textural analysis of thin sections
CN106683076A (en) Texture feature clustering-based locomotive wheelset tread damage detection method
CN109146933A (en) Multiple dimensioned digital cores modeling method and computer readable storage medium
CN112819809B (en) Method for quantifying morphology of mineral particles in rock
CN112964712A (en) Method for rapidly detecting state of asphalt pavement
CN109902755B (en) A kind of multi-layer information sharing and correcting method for XCT slice
CN103530644A (en) Automatic identification method for weave structures of striped fabric
CN109685821A (en) Region growing 3D rock mass point cloud plane extracting method based on high quality voxel
CN115311284B (en) Production quality detection method for glass fiber mat
JP6681221B2 (en) Structure analysis device, structure analysis method, and three-dimensional woven fiber material manufacturing method
KR20120000732A (en) An automatic segmentation method for object-based analysis using high resolution satellite imagery
CN114332086B (en) Textile defect detection method and system based on style migration and artificial intelligence
CN112580542A (en) Steel bar counting method based on target detection
CN113029899B (en) Sandstone permeability calculation method based on microscopic image processing
CN105957027B (en) A kind of MRF sample block image repair methods based on required direction structure characteristic statistics
CN113570652B (en) Sandstone reservoir mineral intercrystalline pore quantitative analysis method based on SEM image
CN113570651B (en) Carbonate reservoir pore radius distribution quantification method based on SEM image
CN111353247B (en) Method for identifying and reconstructing mesoscopic components of plain weave structure of ceramic matrix composite
CN110264555B (en) Micro-CT-based three-dimensional five-direction woven composite material statistical mesoscopic model establishing method
CN111709939B (en) Structural symmetry woven ceramic matrix composite microscopic component classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant