CN115082710A - Intelligent fabric mesh classifying and identifying method and system - Google Patents


Info

Publication number
CN115082710A
Authority
CN
China
Prior art keywords
mesh
target
meshes
target mesh
transverse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210989599.0A
Other languages
Chinese (zh)
Inventor
陈序锋
王芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Poly Gold Textile Technology Co ltd
Original Assignee
Nantong Poly Gold Textile Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Poly Gold Textile Technology Co ltd filed Critical Nantong Poly Gold Textile Technology Co ltd
Priority to CN202210989599.0A priority Critical patent/CN115082710A/en
Publication of CN115082710A publication Critical patent/CN115082710A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of data processing, in particular to a method and a system for intelligently classifying and identifying fabric meshes. The method is an image recognition method that can be applied to the development of application software such as computer vision software, and comprises the following steps: acquiring a target mesh in a mesh fabric image to be detected; calculating the length-width ratio and the perimeter corresponding to the target mesh; processing the image corresponding to the target mesh by utilizing a gray projection method to obtain a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh, so as to calculate a transverse-longitudinal distance ratio, a curve fluctuation degree and a curve similarity degree corresponding to the target mesh; and obtaining the category corresponding to the target mesh according to the length-width ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh. The system can be applied to information system integration services such as artificial intelligence systems in the production field, and to artificial intelligence optimization operation systems. The invention improves the accuracy of classifying fabric meshes.

Description

Intelligent fabric mesh classifying and identifying method and system
Technical Field
The invention relates to the technical field of data processing, in particular to a fabric mesh intelligent classification and identification method and system.
Background
The textile industry is one of the dominant industries in china; due to the further improvement of the living standard of people, the demand of high-quality products is increasing day by day, and the quality detection of the products in the corresponding textile industry is also further improved; for mesh fabrics, the defects generated in the production process of the mesh fabrics are various in types, mainly have the defects of transverse warp-breaking meshes, longitudinal warp-breaking meshes, dense meshes, mesh adhesion and the like, and the use value and the economic value of the mesh fabrics are influenced by any defect; the current method for detecting the quality of meshes in textile factories mainly depends on manual detection and classification.
Because different kinds of meshes have different characteristics and the meshes of the textile are small, the meshes are difficult to be accurately identified and classified by naked eyes; due to the limitation of the working environment in the factory, the subjectivity is high through a manual identification method, and long-time detection work can cause visual fatigue of personnel, so that the identification result is inaccurate, the accuracy rate of mesh classification is low, and the detection of the quality of textiles is influenced finally.
Disclosure of Invention
In order to solve the problem of low accuracy in classifying fabric meshes based on a manual mode in the prior art, the invention aims to provide an intelligent classifying and identifying method for fabric meshes, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a fabric mesh intelligent classification and identification method, including the following steps:
acquiring a target mesh in a mesh fabric image to be detected;
calculating the ratio of the length to the width of the minimum circumscribed rectangle of the connected domain corresponding to the target mesh as the length-width ratio corresponding to the target mesh; calculating the length of the edge of the target mesh to obtain the corresponding perimeter of the target mesh;
processing the image corresponding to the target mesh by utilizing a gray projection method to obtain a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh; calculating the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target meshes according to the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target meshes; the curve similarity degree is the similarity degree of a transverse projection curve graph and a longitudinal projection curve graph corresponding to the meshes;
and obtaining the category corresponding to the target mesh according to the length-width ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh.
In a second aspect, another embodiment of the present invention provides a fabric mesh intelligent classification and identification system, which includes a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the above fabric mesh intelligent classification and identification method.
Preferably, calculating the transverse-longitudinal distance ratio corresponding to the target mesh according to the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh comprises:
selecting the maximum value in the transverse projection curve graph corresponding to the target mesh, and recording as the maximum transverse distance;
selecting the minimum value in the longitudinal projection curve graph corresponding to the target mesh, and recording the minimum value as the minimum longitudinal distance;
and calculating the ratio of the maximum transverse distance and the minimum longitudinal distance corresponding to the target meshes as the transverse-longitudinal distance ratio corresponding to the target meshes.
Preferably, the formula for calculating the curve fluctuation degree corresponding to the target mesh is as follows:
$$F=\frac{1}{R}\sum_{i=1}^{R}\left(x_i-\bar{x}\right)^2+\frac{1}{N}\sum_{j=1}^{N}\left(y_j-\bar{y}\right)^2$$
wherein $F$ is the curve fluctuation degree corresponding to the target mesh, $R$ is the total number of rows of the binary image corresponding to the target mesh, $x_i$ is the value corresponding to abscissa i in the transverse projection curve graph corresponding to the target mesh, $\bar{x}$ is the mean of the values corresponding to each abscissa in the transverse projection curve graph corresponding to the target mesh, $N$ is the total number of columns of the binary image corresponding to the target mesh, $y_j$ is the value corresponding to abscissa j in the longitudinal projection curve graph corresponding to the target mesh, and $\bar{y}$ is the mean of the values corresponding to each abscissa in the longitudinal projection curve graph corresponding to the target mesh.
Preferably, the formula for calculating the curve similarity degree corresponding to the target mesh is as follows:
$$S=\frac{1}{1+d_{DTW}}$$
wherein $S$ is the curve similarity degree corresponding to the target mesh and $d_{DTW}$ is the DTW distance between the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh.
Preferably, calculating the length of the edge of the target mesh to obtain the corresponding circumference of the target mesh comprises:
acquiring the corresponding edge of the target mesh by using an edge detection algorithm;
numbering edge pixel points corresponding to the target meshes in sequence, wherein the edge pixel points are pixel points on the edges;
calculating the Euclidean distance between two edge pixel points with adjacent numbers in each edge pixel point, and recording the Euclidean distance as a first sub-distance; calculating the Euclidean distance between the first edge pixel point and the last edge pixel point, and recording as a second sub-distance;
and calculating the sum of each first sub-distance and each second sub-distance as the perimeter of the target mesh.
Preferably, the obtaining of the category corresponding to the target mesh according to the aspect ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh includes:
calculating the characteristic description indexes corresponding to the target meshes according to the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target meshes;
processing a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh by using a differential identification algorithm, and counting the total number of peaks and troughs in the transverse projection curve graph and the longitudinal projection curve graph;
inputting the length-width ratio, the perimeter, the characteristic description indexes and the total number of wave crests and wave troughs corresponding to the target meshes into the trained mesh classification network to obtain the category corresponding to the target meshes.
Preferably, the formula for calculating the characteristic description index corresponding to the target mesh is as follows:
$$C=\alpha K+\beta F+\frac{\gamma}{S}$$
wherein $C$ is the feature description index corresponding to the target mesh, $K$ is the transverse-longitudinal distance ratio corresponding to the target mesh, $F$ is the curve fluctuation degree corresponding to the target mesh, $S$ is the curve similarity degree corresponding to the target mesh, $\alpha$ is the first adjustment parameter, $\beta$ is the second adjustment parameter, and $\gamma$ is the third adjustment parameter.
Preferably, the process of training the mesh classification network comprises:
acquiring images corresponding to a plurality of sample meshes of different types;
obtaining the length-width ratio, the perimeter, the feature description index and the total number of wave crests and wave troughs corresponding to each sample mesh according to the image corresponding to each sample mesh;
labeling the category of each sample mesh, and training the mesh classification network by taking the length-width ratio, the perimeter, the feature description indexes and the total number of wave crests and wave troughs corresponding to each sample mesh as the input of the mesh classification network to obtain the trained mesh classification network; the loss function for training the mesh classification network is a cross entropy loss function.
The invention has the following beneficial effects:
the method comprises the following steps of firstly obtaining a target mesh in a mesh fabric image to be detected, and then extracting the characteristics of the target mesh to obtain the category of the target mesh, specifically: calculating the length-width ratio and the perimeter corresponding to the target mesh; processing the connected domain corresponding to the target mesh by utilizing a gray projection method to obtain a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh, calculating a transverse-longitudinal distance ratio, a curve fluctuation degree and a curve similarity degree corresponding to the target mesh according to the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh, and finally obtaining the category corresponding to the target mesh according to the length-width ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh. The system can be applied to information system integration services such as an artificial intelligence system and the like in the production field, and can be applied to an artificial intelligence optimization operation system; the method is a method for identifying the image, and can be applied to the development of application software such as computer vision software and the like. The invention is based on a computer vision method, automatically extracts the characteristic information of each mesh in the mesh fabric image, and then intelligently classifies each mesh according to the characteristic information of each mesh, thereby overcoming the problem of strong subjectivity based on a manual detection method and improving the accuracy of classifying the fabric meshes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a fabric mesh intelligent classification and identification method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a normal mesh;
FIG. 3 is a schematic illustration of a dense mesh;
FIG. 4 is a schematic view of an adhesive mesh;
FIG. 5 is a schematic view of a longitudinal warp-breaking mesh;
FIG. 6 is a schematic view of a transverse warp-breaking mesh.
Detailed Description
To further illustrate the technical means and functional effects of the present invention adopted to achieve the predetermined purpose, the following describes a fabric mesh intelligent classification and identification method and system according to the present invention in detail with reference to the accompanying drawings and preferred embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the intelligent fabric mesh classifying and identifying method and system provided by the invention in detail by combining with the accompanying drawings.
The embodiment of the intelligent classification and identification method of the fabric meshes comprises the following steps:
as shown in fig. 1, the intelligent classification and identification method for fabric meshes of the present embodiment includes the following steps:
and step S1, acquiring target meshes in the mesh fabric image to be detected.
In the embodiment, firstly, a camera is used for acquiring an image of a mesh fabric to be detected, and the image is recorded as the image of the mesh fabric to be detected; in order to reduce noise in the image, the embodiment performs denoising processing on the mesh fabric image to be detected so as to eliminate the influence of the noise on subsequent image segmentation; in the embodiment, gaussian filtering is adopted to perform denoising processing on an image, and technologies such as mean filtering, median filtering and the like can also be used as other implementation modes; the process of denoising the image in this embodiment is the prior art, and is not described herein again.
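As an illustrative sketch of this denoising step (assuming an OpenCV/NumPy environment; the file path and the 5x5 kernel size are assumptions, not values given in the embodiment):

```python
import cv2

# Acquire the mesh fabric image to be detected (path is illustrative).
image = cv2.imread("mesh_fabric.png", cv2.IMREAD_GRAYSCALE)

# Gaussian filtering to suppress noise before segmentation; the 5x5 kernel
# is an assumed, not embodiment-specified, choice.
denoised = cv2.GaussianBlur(image, (5, 5), 0)

# Alternative implementations mentioned above:
# denoised = cv2.blur(image, (5, 5))    # mean filtering
# denoised = cv2.medianBlur(image, 5)   # median filtering
```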
The mesh fabric for which this embodiment is directed is such that, in the absence of defects, the size of the individual meshes in the fabric is the same.
In order to classify each mesh of the mesh fabric image to be detected, the embodiment preprocesses the mesh fabric image to be detected after denoising, and divides each mesh in the image to obtain an image corresponding to each mesh; the embodiment performs feature extraction on the image corresponding to each mesh, and then judges the corresponding category according to the feature information corresponding to each mesh.
In this embodiment, taking any mesh in an image as an example, feature information of the mesh is extracted to obtain a category corresponding to the mesh; this embodiment designates this mesh as the target mesh.
In this embodiment, an OTSU algorithm is first used to perform threshold segmentation on an image corresponding to a target mesh, a mesh area is set as a foreground, a weaving line portion is set as a background, and a binary image corresponding to the target mesh is obtained to highlight the outline and structure of the target mesh. In this embodiment, the process of performing threshold segmentation on an image by using an OTSU algorithm is the prior art, and is not described herein again.
In this embodiment, connected component analysis is performed on a binary image corresponding to a target mesh, in order to obtain a complete mesh region, a 3 × 3 window is adopted in this embodiment, 8 pixel points around a central point of the window are set as connected components, and then a Seed Filling operator is adopted to merge foreground pixel points adjacent to Seed pixel points into the same pixel set, so as to obtain a connected component corresponding to the target mesh; the connected domain analysis in this embodiment is the prior art, and will not be described herein.
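The following sketch illustrates the OTSU segmentation and 8-connected component analysis described above, assuming OpenCV; the threshold polarity (mesh openings darker than the yarn) and the variable names are assumptions rather than details fixed by the embodiment:

```python
import cv2
import numpy as np

# Grayscale image of a single candidate mesh region (path is illustrative).
patch = cv2.imread("mesh_patch.png", cv2.IMREAD_GRAYSCALE)

# OTSU threshold segmentation: mesh area as foreground (255), weaving-line
# portion as background (0). THRESH_BINARY_INV assumes the mesh openings are
# darker than the yarn; use THRESH_BINARY if the opposite holds.
_, binary = cv2.threshold(patch, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# 8-connected component analysis, equivalent to merging the 8 neighbours of a
# 3x3 window around each seed pixel into one pixel set.
num_labels, labels = cv2.connectedComponents(binary, connectivity=8)

# Keep the largest non-background component as the connected domain of the
# target mesh.
sizes = [(labels == k).sum() for k in range(1, num_labels)]
target_label = 1 + int(np.argmax(sizes))
mesh_mask = (labels == target_label).astype(np.uint8) * 255
```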
Step S2, calculating the ratio of the length to the width of the minimum circumscribed rectangle of the connected domain corresponding to the target mesh as the length-width ratio corresponding to the target mesh; and calculating the length of the edge of the target mesh to obtain the corresponding perimeter of the target mesh.
This example divides the mesh into 5 categories, respectively: normal mesh, as shown in FIG. 2; dense mesh, as shown in fig. 3; adhesive mesh, as shown in fig. 4; longitudinal warp-breaking meshes, as shown in fig. 5; transverse warp-breaking meshes, as illustrated in fig. 6; wherein the dense meshes, the adhesion meshes, the longitudinal warp-breaking meshes and the transverse warp-breaking meshes are all meshes with defects.
The embodiment analyzes according to the characteristics of different types of meshes, and the aspect ratios of the minimum circumscribed rectangles are different in consideration of certain differences between the shapes of normal meshes and abnormal meshes; the method comprises the steps of obtaining a minimum circumscribed rectangle of a connected domain corresponding to a target mesh, obtaining length and width values corresponding to the minimum circumscribed rectangle, and taking the ratio of the length to the width of the minimum circumscribed rectangle as the length-width ratio corresponding to the target mesh; the present embodiment can roughly distinguish the types of the meshes according to the aspect ratios corresponding to the meshes, for example, the aspect ratio of the normal mesh is fixed, and the aspect ratio of the abnormal mesh is greatly different from the aspect ratio of the normal mesh due to the problems of sticking, warp breaking and the like.
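A minimal sketch of the length-width ratio computation, continuing from the `mesh_mask` binary image of the previous sketch; using OpenCV's rotated minimum-area rectangle is an assumption about how the minimum circumscribed rectangle is obtained:

```python
import cv2

# Outer contour of the connected domain corresponding to the target mesh.
contours, _ = cv2.findContours(mesh_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)
contour = max(contours, key=cv2.contourArea)

# Minimum circumscribed (minimum-area) rectangle and its side lengths.
(_, _), (w, h), _ = cv2.minAreaRect(contour)

# Length-width ratio: longer side over shorter side (epsilon guards against
# a degenerate zero-width rectangle).
aspect_ratio = max(w, h) / max(min(w, h), 1e-6)
```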
Considering that the perimeters of the corresponding edges of meshes of different kinds are also different, for example, when a mesh has a defect, the corresponding perimeter is different from the perimeter of a normal mesh; in the embodiment, a Canny edge detection algorithm is utilized to process a binary image corresponding to a target mesh so as to extract an edge corresponding to the target mesh; in this embodiment, the perimeter of the edge of the target mesh, that is, the perimeter corresponding to the target mesh, is calculated according to the coordinates of each edge pixel point (that is, each pixel point on the edge) corresponding to the target mesh. The Canny edge detection algorithm in this embodiment is prior art, and will not be described herein.
Because the edges corresponding to the target meshes extracted by the edge detection may not be complete, in order to improve the accuracy of the obtained perimeters of the target meshes, the embodiment numbers each edge pixel point corresponding to the target meshes in sequence (that is, the positions of the numbered adjacent edge pixel points are also adjacent); calculating the Euclidean distance between two adjacent edge pixel points (namely two adjacent edge pixel points with serial numbers) in each edge pixel point, and recording the Euclidean distance as a first sub-distance; calculating the Euclidean distance between the first edge pixel point and the last edge pixel point, and recording as a second sub-distance; then, the sum of each first sub-distance and each second sub-distance is calculated as the perimeter of the target mesh.
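The perimeter computation can be sketched as below; the ordered boundary points returned by `cv2.findContours` in the previous sketch stand in for the sequentially numbered Canny edge pixels described above, which is a simplification:

```python
import numpy as np

# Edge pixel points of the target mesh, numbered so that consecutive indices
# are spatially adjacent (here: the ordered contour points from above).
edge_points = contour.reshape(-1, 2).astype(np.float64)

# First sub-distances: Euclidean distances between edge pixels with adjacent
# numbers.
first_sub = np.linalg.norm(np.diff(edge_points, axis=0), axis=1).sum()

# Second sub-distance: Euclidean distance between the first and the last edge
# pixel, closing the boundary.
second_sub = np.linalg.norm(edge_points[-1] - edge_points[0])

# Perimeter of the target mesh.
perimeter = first_sub + second_sub
```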
Step S3, processing the image corresponding to the target mesh by utilizing a gray projection method to obtain a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh; calculating the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target meshes according to the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target meshes; the curve similarity degree is the similarity degree of the transverse projection curve chart and the longitudinal projection curve chart corresponding to the meshes.
In this embodiment, a gray projection method is used to process a binary image corresponding to a target mesh to obtain a transverse projection graph and a longitudinal projection graph corresponding to the target mesh, and then feature information for determining a mesh category is extracted by using the transverse projection graph and the longitudinal projection graph corresponding to the target mesh, specifically:
in this embodiment, firstly, the gray value of each pixel point in the binary image corresponding to the target mesh is normalized, then the sum of the gray values of each row in the normalized binary image is calculated, and then a transverse projection curve graph corresponding to the target mesh is obtained through projection, wherein the abscissa in the transverse projection curve graph is the row number, and the ordinate is the gray value, namely the sum of the gray values of each pixel point corresponding to each row after normalization; and calculating the sum of gray values of each row in the normalized binary image to obtain a longitudinal projection curve graph corresponding to the target mesh, wherein the abscissa in the longitudinal projection curve graph is the row number, and the ordinate is the gray value, namely the sum of the gray values of each pixel point corresponding to each row after normalization. The gray level projection method in this embodiment is the prior art, and will not be described herein again.
In the embodiment, a maximum value is selected from a transverse projection curve graph corresponding to a target mesh and is recorded as a maximum transverse distance, namely, a row with the maximum sum of gray values in a normalized binary image, namely, the maximum value of the width of a target mesh region in the horizontal direction; selecting a minimum value from a longitudinal projection curve graph corresponding to the target mesh, and recording the minimum value as a minimum longitudinal distance, namely a column with the minimum sum of gray values in the normalized binary image, namely the minimum value of the length of the target mesh region in the vertical direction; in the embodiment, the ratio of the maximum transverse distance and the minimum longitudinal distance corresponding to the target meshes is used as the transverse-longitudinal distance ratio corresponding to the target meshes.
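A sketch of the gray projection step and the transverse-longitudinal distance ratio, again starting from the `mesh_mask` binary image of the earlier sketch; the small epsilon guard is an added safeguard, not part of the embodiment:

```python
import numpy as np

# Binary image of the target mesh with gray values normalized to {0, 1}.
mesh_binary = (mesh_mask > 0).astype(np.float64)

# Transverse projection curve: abscissa = row number, ordinate = sum of the
# normalized gray values of that row.
transverse_curve = mesh_binary.sum(axis=1)

# Longitudinal projection curve: abscissa = column number, ordinate = sum of
# the normalized gray values of that column.
longitudinal_curve = mesh_binary.sum(axis=0)

# Maximum transverse distance, minimum longitudinal distance, and their ratio
# (the transverse-longitudinal distance ratio).
max_transverse = transverse_curve.max()
min_longitudinal = longitudinal_curve.min()
ratio_K = max_transverse / max(min_longitudinal, 1e-6)
```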
In this embodiment, the transverse-longitudinal distance ratio can roughly distinguish different mesh categories. For example, for dense meshes and adhered meshes the minimum longitudinal distance is far smaller than the maximum transverse distance, while for longitudinal warp-breaking meshes the difference between the minimum longitudinal distance and the maximum transverse distance is smaller, so the transverse-longitudinal distance ratio corresponding to longitudinal warp-breaking meshes is smaller than that corresponding to dense meshes and adhered meshes.
In the embodiment, considering that the transverse projection curve chart and the longitudinal projection curve chart corresponding to different meshes are different, taking the mesh categories shown in fig. 2, fig. 3, fig. 4, fig. 5 and fig. 6 as examples, the transverse projection curve chart and the longitudinal projection curve chart corresponding to a normal mesh are approximately a stable horizontal straight line, that is, the fluctuation is small; the transverse projection curve graph corresponding to the dense meshes is mostly a relatively stable curve, and the corresponding longitudinal projection curve graph is a curve with large fluctuation and is provided with a plurality of wave crests and wave troughs; the fluctuation of the curve in the transverse projection curve graph corresponding to the adhesive meshes is relatively small, and the corresponding longitudinal projection curve graph is a curve which is severely fluctuated; the transverse projection curve graph corresponding to the longitudinal broken warp mesh is approximately the same as the longitudinal projection curve graph and is a stable straight line; the transverse projection curve chart and the longitudinal projection curve chart corresponding to the transverse warp-breaking meshes are similar to the change of the transverse projection curve chart and the longitudinal projection curve chart corresponding to the normal meshes, but the length of the transverse warp-breaking meshes is longer.
According to the analysis, the curve fluctuation degree of the transverse projection curve chart and the longitudinal projection curve chart corresponding to the target mesh is calculated and used as a characteristic for distinguishing mesh categories; in this embodiment, the calculation formula for calculating the curve fluctuation degree corresponding to the target mesh is as follows:
$$F=\frac{1}{R}\sum_{i=1}^{R}\left(x_i-\bar{x}\right)^2+\frac{1}{N}\sum_{j=1}^{N}\left(y_j-\bar{y}\right)^2$$
wherein $F$ is the curve fluctuation degree corresponding to the target mesh; $R$ is the total number of rows of the binary image corresponding to the target mesh; $x_i$ is the value corresponding to abscissa i in the transverse projection curve graph corresponding to the target mesh, namely the sum of the gray values of the pixel points in the i-th row of the normalized binary image corresponding to the target mesh; $\bar{x}$ is the mean of the values corresponding to each abscissa in the transverse projection curve graph corresponding to the target mesh; $N$ is the total number of columns of the binary image corresponding to the target mesh; $y_j$ is the value corresponding to abscissa j in the longitudinal projection curve graph corresponding to the target mesh, namely the sum of the gray values of the pixel points in the j-th column of the normalized binary image corresponding to the target mesh; and $\bar{y}$ is the mean of the values corresponding to each abscissa in the longitudinal projection curve graph corresponding to the target mesh.
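In code, the fluctuation measure as reconstructed above amounts to summing the mean squared deviations of the two projection curves; a sketch continuing from the projection curves of the previous sketch:

```python
import numpy as np

# Curve fluctuation degree: mean squared deviation of the transverse curve
# plus that of the longitudinal curve.
fluctuation_F = (np.mean((transverse_curve - transverse_curve.mean()) ** 2)
                 + np.mean((longitudinal_curve - longitudinal_curve.mean()) ** 2))
```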
When the fluctuation degree of the curve corresponding to the target mesh is larger, the fluctuation of the curve in the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh is larger; then, the embodiment is combined with the similarity between the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh and recorded as the curve similarity, so as to better distinguish the mesh types; in this embodiment, the calculation formula for calculating the curve similarity corresponding to the target mesh is as follows:
$$S=\frac{1}{1+d_{DTW}}$$
wherein $S$ is the curve similarity degree corresponding to the target mesh and $d_{DTW}$ is the DTW distance between the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh. The DTW distance can measure the similarity between the transverse projection curve graph and the longitudinal projection curve graph: the larger $d_{DTW}$ is, the more dissimilar the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh are, and the smaller $S$ is; the smaller $d_{DTW}$ is, the more similar the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh are, and the larger $S$ is. The process of calculating the DTW distance in this embodiment is the prior art, and is not described herein again.
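A self-contained sketch of the DTW distance and the similarity mapping; the dynamic-programming DTW below is the textbook formulation, and mapping the distance to 1/(1+d) follows the reconstruction above rather than a form stated explicitly in the embodiment:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-programming DTW distance between two 1-D curves."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in a only
                                 cost[i, j - 1],      # step in b only
                                 cost[i - 1, j - 1])  # step in both
    return float(cost[n, m])

# DTW distance between the transverse and longitudinal projection curves,
# mapped to a curve similarity degree: larger distance, smaller similarity.
d_dtw = dtw_distance(transverse_curve, longitudinal_curve)
similarity_S = 1.0 / (1.0 + d_dtw)
```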
According to the method, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target meshes are obtained according to the process, then the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target meshes are fused in the method, and the feature description indexes corresponding to the target meshes are obtained, wherein the feature description indexes are mainly used for distinguishing irregular meshes with complex structures. In this embodiment, the formula for calculating the feature description index corresponding to the target mesh is as follows:
$$C=\alpha K+\beta F+\frac{\gamma}{S}$$
wherein $C$ is the feature description index corresponding to the target mesh, $K$ is the transverse-longitudinal distance ratio corresponding to the target mesh, $F$ is the curve fluctuation degree corresponding to the target mesh, $S$ is the curve similarity degree corresponding to the target mesh, $\alpha$ is the first adjustment parameter, $\beta$ is the second adjustment parameter, and $\gamma$ is the third adjustment parameter. In this embodiment, $\alpha$, $\beta$ and $\gamma$ are used to adjust the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree so that their value ranges are uniform; the values of $\alpha$, $\beta$ and $\gamma$ need to be set according to the actual situation. From the above formula, the larger $C$ is, the more irregular the corresponding mesh is and the more complicated its structure is.
And step S4, obtaining the category corresponding to the target mesh according to the length-width ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh.
Considering that the numbers of peaks and valleys of the transverse projection curve graph and the longitudinal projection curve graph corresponding to different types of meshes are different; in the embodiment, the mean value of the values corresponding to the abscissa in the transverse projection curve graph is taken as a boundary, and the transverse projection curve graph corresponding to the target mesh is processed by using a differential recognition algorithm to count the number of peaks and troughs in the transverse projection curve graph corresponding to the target mesh; processing the longitudinal projection curve graph corresponding to the target mesh by using a differential recognition algorithm by taking the mean value of the values corresponding to the abscissas in the longitudinal projection curve graph as a boundary, and counting the number of wave crests and wave troughs in the longitudinal projection curve graph corresponding to the target mesh; according to the number of the wave crests and the wave troughs in the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target meshes, the total number of the wave crests and the wave troughs corresponding to the target meshes is counted. In this embodiment, the process of obtaining the number of peaks and troughs in the image by using the difference identification algorithm is the prior art, and is not described herein again.
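A sketch of the peak and trough counting; the difference-based identification is implemented here as sign changes of the first difference, keeping only extrema on the far side of the curve mean, which is one reading of "taking the mean value as a boundary":

```python
import numpy as np

def count_peaks_troughs(curve: np.ndarray) -> int:
    """Count peaks and troughs via sign changes of the first difference,
    keeping only extrema beyond the curve's mean value (the boundary)."""
    mean = curve.mean()
    sign = np.sign(np.diff(curve))
    total = 0
    for k in range(1, len(sign)):
        if sign[k - 1] > 0 and sign[k] < 0 and curve[k] > mean:    # peak
            total += 1
        elif sign[k - 1] < 0 and sign[k] > 0 and curve[k] < mean:  # trough
            total += 1
    return total

# Total number of peaks and troughs over both projection curves.
total_extrema = (count_peaks_troughs(transverse_curve)
                 + count_peaks_troughs(longitudinal_curve))
```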
According to the above steps, this embodiment obtains a plurality of pieces of characteristic information corresponding to the target mesh, namely the length-width ratio, the perimeter, the characteristic description index and the total number of wave crests and wave troughs corresponding to the target mesh. Different types of meshes reflect different characteristic information: a normal mesh has a relatively regular shape, usually a rounded rectangle, and its characteristic information takes standard values; dense meshes appear as a cluster of multiple meshes, so the corresponding length-width ratio is small while the perimeter, the characteristic description index and the total number of wave crests and wave troughs are large; adhered meshes show adhesion in the top or bottom areas of adjacent meshes, so the corresponding length-width ratio is small while the perimeter, the characteristic description index and the total number of wave crests and wave troughs are large, similar to dense meshes, although the values of the characteristic information corresponding to the two types differ; for longitudinal warp-breaking meshes, the corresponding length-width ratio and characteristic description index are small and the perimeter is large; for transverse warp-breaking meshes, the corresponding perimeter and length-width ratio are large.
The embodiment acquires the category of the target mesh according to the aspect ratio, the perimeter, the characteristic description index and the total number of peaks and troughs corresponding to the target mesh. For this purpose, a mesh classification network is constructed; the mesh classification network adopts a fully connected neural network and distinguishes the category of a mesh according to the characteristic information of the mesh. The process of training the mesh classification network in this embodiment is as follows:
in this embodiment, first, images corresponding to a plurality of different types of sample meshes are obtained, and according to the method described in the above steps in this embodiment, an aspect ratio, a perimeter, a feature description index, and a total number of peaks and troughs corresponding to each sample mesh are obtained; then, labeling each sample mesh, wherein the labels in the embodiment are divided into five types, namely normal meshes, dense meshes, adhesion meshes, longitudinal warp-breaking meshes and transverse warp-breaking meshes, the normal meshes are marked as 00, the dense meshes are marked as 01, the adhesion meshes are marked as 02, the longitudinal warp-breaking meshes are marked as 03, and the transverse warp-breaking meshes are marked as 04; the labeling tool used in this embodiment is Labelme, which can be modified specifically according to actual needs.
In this embodiment, the mesh classification network is trained according to the label data, the aspect ratio, the perimeter, the feature description index and the total number of peaks and troughs corresponding to each sample mesh, and the input of the mesh classification network is the aspect ratio, the perimeter, the feature description index and the total number of peaks and troughs corresponding to the mesh. The loss function used in training the mesh classification network in this embodiment is a cross entropy loss function. The process of training the fully-connected neural network in this embodiment is the prior art, and is not described herein again.
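A minimal training sketch of the fully connected mesh classification network with the cross entropy loss named above, written in PyTorch; the layer sizes, optimizer and learning rate are illustrative choices, and the five labels 00-04 are mapped to class indices 0-4:

```python
import torch
import torch.nn as nn

class MeshClassifier(nn.Module):
    """Fully connected network over the four mesh features and five categories."""
    def __init__(self, in_features: int = 4, num_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = MeshClassifier()
criterion = nn.CrossEntropyLoss()  # loss function used for training
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of (aspect ratio, perimeter, feature
    description index, total peaks/troughs) rows and class index labels."""
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, `model(features).argmax(dim=1)` gives the predicted category index for each mesh.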
According to the embodiment, after the trained mesh classification network is obtained, the length-width ratio, the perimeter, the feature description index and the total number of peaks and troughs corresponding to the target mesh are input into the trained mesh classification network, and then the category of the target mesh is obtained. In this embodiment, according to an image corresponding to each mesh in a mesh fabric image to be detected, an aspect ratio, a circumference, a feature description index and the total number of peaks and troughs corresponding to each mesh are obtained, and then the aspect ratio, the circumference, the feature description index and the total number of peaks and troughs corresponding to each mesh are sequentially input to a trained mesh classification network, so that a category corresponding to each mesh in the mesh fabric image to be detected can be obtained.
In this embodiment, first, a target mesh in a mesh fabric image to be detected is obtained, and then the features of the target mesh are extracted to obtain the category of the target mesh, specifically: calculating the length-width ratio and the perimeter corresponding to the target mesh; processing the connected domain corresponding to the target mesh by utilizing a gray projection method to obtain a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh; calculating a transverse-longitudinal distance ratio, a curve fluctuation degree and a curve similarity degree corresponding to the target mesh according to the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh; and finally obtaining the category corresponding to the target mesh according to the length-width ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh. The system in this embodiment can be applied to information system integration services such as artificial intelligence systems in the production field, and can be applied to an artificial intelligence optimization operation system; the method is a method for identifying images, and can be applied to the development of application software such as computer vision software. This embodiment is based on a computer vision method: the characteristic information of each mesh is automatically extracted from the mesh fabric image, and each mesh is then intelligently classified according to its characteristic information, thereby overcoming the problem of strong subjectivity of the manual detection method and improving the accuracy of classifying fabric meshes.
An embodiment of a fabric mesh intelligent classification and identification system:
the intelligent fabric mesh classifying and identifying system of the embodiment comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the intelligent fabric mesh classifying and identifying method of the embodiment of the intelligent fabric mesh classifying and identifying method.
Since the fabric mesh intelligent classification and identification method has been described in the fabric mesh intelligent classification and identification method embodiment, the fabric mesh intelligent classification and identification method is not described in detail in this embodiment.
It should be noted that: the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. An intelligent classification and identification method for fabric meshes is characterized by comprising the following steps:
acquiring a target mesh in a mesh fabric image to be detected;
calculating the ratio of the length to the width of the minimum circumscribed rectangle of the connected domain corresponding to the target mesh as the length-width ratio corresponding to the target mesh; calculating the length of the edge of the target mesh to obtain the corresponding perimeter of the target mesh;
processing the image corresponding to the target mesh by utilizing a gray projection method to obtain a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh; calculating the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target meshes according to the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target meshes; the curve similarity degree is the similarity degree of a transverse projection curve graph and a longitudinal projection curve graph corresponding to the meshes;
and obtaining the category corresponding to the target mesh according to the length-width ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh.
2. The intelligent classification and identification method for fabric meshes according to claim 1, wherein calculating the transverse-longitudinal distance ratio corresponding to a target mesh according to the transverse projection curve chart and the longitudinal projection curve chart corresponding to the target mesh comprises:
selecting the maximum value in the transverse projection curve graph corresponding to the target mesh and recording as the maximum transverse distance;
selecting the minimum value in the longitudinal projection curve graph corresponding to the target mesh, and recording the minimum value as the minimum longitudinal distance;
and calculating the ratio of the maximum transverse distance and the minimum longitudinal distance corresponding to the target meshes as the transverse-longitudinal distance ratio corresponding to the target meshes.
3. The intelligent classification and identification method for fabric meshes according to claim 1, wherein the formula for calculating the curve fluctuation degree corresponding to the target mesh is as follows:
$$F=\frac{1}{R}\sum_{i=1}^{R}\left(x_i-\bar{x}\right)^2+\frac{1}{N}\sum_{j=1}^{N}\left(y_j-\bar{y}\right)^2$$
wherein $F$ is the curve fluctuation degree corresponding to the target mesh, $R$ is the total number of rows of the binary image corresponding to the target mesh, $x_i$ is the value corresponding to abscissa i in the transverse projection curve graph corresponding to the target mesh, $\bar{x}$ is the mean of the values corresponding to each abscissa in the transverse projection curve graph corresponding to the target mesh, $N$ is the total number of columns of the binary image corresponding to the target mesh, $y_j$ is the value corresponding to abscissa j in the longitudinal projection curve graph corresponding to the target mesh, and $\bar{y}$ is the mean of the values corresponding to each abscissa in the longitudinal projection curve graph corresponding to the target mesh.
4. The intelligent classification and identification method for fabric meshes according to claim 1, wherein the formula for calculating the degree of similarity of curves corresponding to target meshes is as follows:
$$S=\frac{1}{1+d_{DTW}}$$
wherein $S$ is the curve similarity degree corresponding to the target mesh and $d_{DTW}$ is the DTW distance between the transverse projection curve graph and the longitudinal projection curve graph corresponding to the target mesh.
5. The intelligent fabric mesh classifying and identifying method according to claim 1, wherein calculating the length of the edge of the target mesh to obtain the corresponding circumference of the target mesh comprises:
acquiring the corresponding edge of the target mesh by using an edge detection algorithm;
numbering edge pixel points corresponding to the target meshes in sequence, wherein the edge pixel points are pixel points on the edges;
calculating the Euclidean distance between two edge pixel points with adjacent serial numbers in each edge pixel point, and recording the Euclidean distance as a first sub-distance; calculating the Euclidean distance between the first edge pixel point and the last edge pixel point, and recording as a second sub-distance;
and calculating the sum of each first sub-distance and each second sub-distance as the perimeter of the target mesh.
6. The intelligent classification and identification method for fabric meshes according to claim 1, wherein the obtaining of the category corresponding to the target mesh according to the aspect ratio, the perimeter, the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target mesh comprises:
calculating the characteristic description indexes corresponding to the target meshes according to the transverse-longitudinal distance ratio, the curve fluctuation degree and the curve similarity degree corresponding to the target meshes;
processing a transverse projection curve graph and a longitudinal projection curve graph corresponding to the target mesh by using a differential identification algorithm, and counting the total number of peaks and troughs in the transverse projection curve graph and the longitudinal projection curve graph;
inputting the length-width ratio, the perimeter, the characteristic description indexes and the total number of wave crests and wave troughs corresponding to the target meshes into the trained mesh classification network to obtain the category corresponding to the target meshes.
7. The intelligent classification and identification method for fabric meshes according to claim 6, wherein the formula for calculating the feature description index corresponding to the target mesh is:
$$C=\alpha K+\beta F+\frac{\gamma}{S}$$
wherein $C$ is the feature description index corresponding to the target mesh, $K$ is the transverse-longitudinal distance ratio corresponding to the target mesh, $F$ is the curve fluctuation degree corresponding to the target mesh, $S$ is the curve similarity degree corresponding to the target mesh, $\alpha$ is the first adjustment parameter, $\beta$ is the second adjustment parameter, and $\gamma$ is the third adjustment parameter.
8. The intelligent classification and identification method of fabric meshes according to claim 6, wherein the process of training the mesh classification network comprises:
acquiring images corresponding to a plurality of sample meshes of different types;
obtaining the length-width ratio, the perimeter, the feature description index and the total number of wave crests and wave troughs corresponding to each sample mesh according to the image corresponding to each sample mesh;
labeling the category of each sample mesh, and training the mesh classification network by taking the length-width ratio, the perimeter, the feature description indexes and the total number of wave crests and wave troughs corresponding to each sample mesh as the input of the mesh classification network to obtain the trained mesh classification network; the loss function of the training mesh classification network is a cross entropy loss function.
9. An intelligent classification and identification system for fabric meshes, comprising a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the intelligent classification and identification method for fabric meshes according to any one of claims 1 to 8.
CN202210989599.0A 2022-08-18 2022-08-18 Intelligent fabric mesh classifying and identifying method and system Withdrawn CN115082710A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210989599.0A CN115082710A (en) 2022-08-18 2022-08-18 Intelligent fabric mesh classifying and identifying method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210989599.0A CN115082710A (en) 2022-08-18 2022-08-18 Intelligent fabric mesh classifying and identifying method and system

Publications (1)

Publication Number Publication Date
CN115082710A true CN115082710A (en) 2022-09-20

Family

ID=83244134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210989599.0A Withdrawn CN115082710A (en) 2022-08-18 2022-08-18 Intelligent fabric mesh classifying and identifying method and system

Country Status (1)

Country Link
CN (1) CN115082710A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150423A (en) * 2020-09-16 2020-12-29 江南大学 Longitude and latitude sparse mesh defect identification method
CN114429544A (en) * 2022-01-05 2022-05-03 山东工大中能科技有限公司 Method, system and device for detecting damage of screen of vibrating screen based on computer vision
CN114782416A (en) * 2022-06-16 2022-07-22 启东市固德防水布有限公司 Textile quality detection method and system based on image recognition



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220920