CN115082741A - Waste textile classifying method based on image processing - Google Patents

Waste textile classifying method based on image processing

Info

Publication number
CN115082741A
Authority
CN
China
Prior art keywords
image
color
sub
variance
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210858579.XA
Other languages
Chinese (zh)
Other versions
CN115082741B (en)
Inventor
Zhu Peng (朱鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Tongzhou Zhongyi Textile Machine Co ltd
Original Assignee
Nantong Tongzhou Zhongyi Textile Machine Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Tongzhou Zhongyi Textile Machine Co ltd filed Critical Nantong Tongzhou Zhongyi Textile Machine Co ltd
Priority to CN202210858579.XA priority Critical patent/CN115082741B/en
Publication of CN115082741A publication Critical patent/CN115082741A/en
Application granted granted Critical
Publication of CN115082741B publication Critical patent/CN115082741B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/40: Extraction of image or video features
    • G06V10/54: Extraction of image or video features relating to texture
    • G06V10/56: Extraction of image or video features relating to colour
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a waste textile classification method based on image processing. The method can support functions such as an artificial-intelligence-optimized operating system, artificial intelligence middleware and function libraries, and can also be used in developing application software such as computer vision software. The method comprises: obtaining a plurality of sub-region images of the waste textile to be classified; calculating the color texture feature of each sub-region image and the gradient value of each pixel to obtain the corresponding color influence degree; selecting a plurality of preferred region images according to the color influence degree; obtaining the color distribution index of each preferred region image to determine the optimal region image; and analyzing the feature matrix of the optimal region image to obtain the category of the waste textile. The method effectively reduces the influence of interfering colors on the textile's own texture, realizes automatic classification of waste textiles, and offers high classification speed and high accuracy.

Description

Waste textile classifying method based on image processing
Technical Field
The invention relates to the technical field of image processing, in particular to a waste textile classifying method based on image processing.
Background
At present, with the continuous development of the textile industry, waste textiles such as discarded clothing are accumulating in ever greater quantities and at an increasing rate. Recycling waste textiles is steadily becoming a growth point for the sustainable development of the textile industry. To make better use of limited social resources, textile recycling is an important means of promoting a circular economy and reducing waste. Because different fabric types require different treatment processes during recycling, sorting waste textiles by category is a key step for efficient reuse.
At present, most textile classification relies on manual inspection or detection instruments. Waste textiles, however, often carry pathogenic bacteria, so manual inspection puts workers' health at risk, and its detection efficiency is low. Classification with detection instruments improves efficiency, but such instruments cannot classify and identify the texture of a textile; when the surface of a waste textile is heavily soiled, the detection result may contain large errors and the textile's type cannot be identified accurately.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a waste textile classifying method based on image processing, which comprises the following steps:
obtaining an initial image of waste textiles to be classified, wherein the initial image comprises a surface area of the waste textiles; dividing the initial image into a plurality of sub-regions according to a super-pixel segmentation method, wherein each sub-region corresponds to a sub-region image;
forming hue binary groups by any two adjacent pixel points in the subregion image, and obtaining the color texture characteristics corresponding to the subregion image according to the occurrence frequency of each hue binary group; obtaining a gradient value of each pixel point in the sub-region image, and obtaining the color influence degree of the sub-region image according to the gradient value and the color texture feature; the sub-region image with the color influence degree within a preset range is a preferred region image;
acquiring the variance of pixel values in each preferred region image, and obtaining the color distribution index of each preferred region image according to the variance, wherein the preferred region image corresponding to the smallest color distribution index is the optimal region image;
acquiring a characteristic matrix corresponding to the optimal area image, and calculating the confidence coefficient between the characteristic matrix and each characteristic matrix in a standard characteristic matrix library, wherein the standard characteristic matrix library comprises characteristic matrices corresponding to different types of textiles; and the characteristic matrix in the standard characteristic matrix library corresponding to the maximum confidence coefficient is an optimal matrix, and the category of the textile corresponding to the optimal matrix is the category of the waste textile.
Preferably, the step of forming a hue binary group by any two adjacent pixel points in the sub-region image includes:
obtaining a tone image corresponding to the subregion image in an HSV color space, wherein the pixel value of each pixel point in the tone image is a tone value, and a binary group is formed according to the tone values corresponding to any two adjacent pixel points in the subregion image and is a tone binary group.
Preferably, the step of obtaining the color texture features of the sub-region image according to the number of times of occurrence of each color tone binary group includes:
the color texture feature calculation method comprises the following steps:
(Formula available only as an image in the original document.) Here, $Z_i$ denotes the color texture feature corresponding to the $i$-th sub-region image; $N_i$ denotes the number of all hue doublets in the $i$-th sub-region image; and $n_j$ denotes the number of occurrences of the $j$-th hue doublet.
Preferably, the step of obtaining the color influence degree of the sub-region image according to the gradient value and the color texture feature includes:
the method for calculating the color influence degree comprises the following steps:
(Formula available only as an image in the original document.) Here, $Y_i$ denotes the color influence degree corresponding to the $i$-th sub-region image; $g_k$ denotes the gradient value corresponding to the $k$-th pixel in the $i$-th sub-region image; $M_i$ denotes the number of all pixels in the $i$-th sub-region image; and $Z_i$ denotes the color texture feature corresponding to the $i$-th sub-region image.
Preferably, the step of obtaining the variance of the pixel values in each of the images of the preferred region includes:
acquiring a red channel image, a green channel image and a blue channel image which respectively correspond to the three channels of RGB of each preferred region image; respectively calculating the red variance of pixel values of all pixels in the red channel image, the green variance of pixel values of all pixels in the green channel image and the blue variance of pixel values of all pixels in the blue channel image; the variance of the pixel value in each of the preferred region images includes a red variance, a green variance, and a blue variance.
Preferably, the step of obtaining the color distribution index of each preferred region image according to the variance includes:
the color distribution index calculation method comprises the following steps:
(Formula available only as an image in the original document.) Here, $F$ denotes the color distribution index of a preferred region image; $\sigma_R^2$ denotes the red variance corresponding to the preferred region image; $\sigma_G^2$ denotes the green variance; and $\sigma_B^2$ denotes the blue variance.
Preferably, the step of obtaining the feature matrix corresponding to the optimal region image includes:
and performing multiple filtering processing on the optimal region image to obtain multiple filtering images, wherein pixel values of pixels in each filtering image form a one-dimensional characteristic vector, and constructing a matrix according to the characteristic vectors corresponding to the multiple filtering images to obtain characteristic matrices corresponding to all the optimal region images.
Preferably, the step of calculating the confidence between the feature matrix and each feature matrix in the standard feature matrix library includes:
the confidence coefficient calculation method comprises the following steps:
(Formula available only as an image in the original document.) Here, $R_c$ denotes the confidence that the waste textile belongs to category $c$; $G_c$ denotes the feature matrix corresponding to textiles of the $c$-th category; and $P$ denotes the feature matrix corresponding to the waste textile.
The invention has the following beneficial effects: the embodiment of the invention can support functions such as an artificial-intelligence-optimized operating system, artificial intelligence middleware and function libraries, and can also be used in developing application software such as computer vision software. By analyzing the color characteristics of the waste textile's surface image, extracting the optimal region image containing the textile's own texture information through the color texture feature, the color influence degree and the color distribution index, and classifying the waste textile based on the data in the optimal region image, the influence of interference such as stains or surface colors on the textile's own texture can be effectively reduced. Automatic classification of waste textiles is realized with high classification speed and high accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a method for sorting waste textiles based on image processing according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its intended objects and their effects, the following describes in detail the image-processing-based waste textile classification method, its specific implementation, structure, features and effects, with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The method is suitable for classifying waste textiles. An initial image of the waste textile is obtained and segmented; the feature information of each sub-region image is calculated; the optimal region image that best represents the surface texture of the waste textile is selected; the corresponding feature matrix is derived from the optimal region image; and confidence values are computed between this feature matrix and the matrices in a standard feature matrix library to determine the most suitable textile category. The method has advantages such as high classification accuracy and high detection speed.
The following describes a specific scheme of the waste textile classifying method based on image processing in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for sorting waste textiles based on image processing according to an embodiment of the present invention is shown, where the method includes the following steps:
step S100, obtaining an initial image of the waste textiles to be classified, wherein the initial image comprises the surface area of the waste textiles; and dividing the initial image into a plurality of sub-regions according to a super-pixel segmentation method, wherein each sub-region corresponds to one sub-region image.
Specifically, image acquisition equipment is arranged to collect initial image data of the surface of the waste textile to be classified. The equipment comprises at least a camera and a light source; the shooting range and angle of the camera are set by the implementer. In the embodiment of the invention, the camera and the light source are arranged above the waste textile to be classified, the position of the light source does not obstruct the camera's field of view, and the camera collects the initial surface image of the waste textile from a top-down viewing angle.
Furthermore, considering that the surface of a waste textile may contain texture features, color information, and interfering textures caused by stains, the initial image is processed region by region. In the embodiment of the invention, a superpixel segmentation algorithm divides the initial image into a plurality of sub-regions, and the sub-region image corresponding to each sub-region is analyzed subsequently; superpixel segmentation is prior art and is not described in detail. A minimal sketch of this step follows.
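The sketch below illustrates step S100 in Python, assuming scikit-image's SLIC as the superpixel algorithm (the patent does not name a specific implementation) and a hypothetical input path "textile.png"; the segment count is illustrative, since the patent leaves segmentation parameters to the implementer.

    import cv2
    import numpy as np
    from skimage.segmentation import slic

    image_bgr = cv2.imread("textile.png")                   # hypothetical path
    image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

    # Label map: pixels sharing a label belong to the same sub-region.
    labels = slic(image_rgb, n_segments=100, compactness=10, start_label=0)

    # One boolean mask (one "sub-region") per superpixel label.
    sub_region_masks = [labels == k for k in np.unique(labels)]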
Step S200, forming hue binary groups by any two adjacent pixel points in the subregion image, and obtaining the color texture characteristics of the corresponding subregion image according to the occurrence frequency of each hue binary group; obtaining the gradient value of each pixel point in the subregion image, and obtaining the color influence degree of the subregion image according to the gradient value and the color texture characteristics; the subregion image having a color influence degree within a preset range is a preferred region image.
Specifically, in step S100, the initial image corresponding to the waste textile to be classified is divided into a plurality of sub-area images, and in order to facilitate analyzing the color of each sub-area image, in the embodiment of the present invention, each initial image is subjected to color space conversion to obtain a corresponding HSV image, so as to obtain an HSV image corresponding to each sub-area image.
Further, obtaining a tone image corresponding to the subregion image in the HSV color space, wherein the pixel value of each pixel point in the tone image is a tone value, and forming a binary group according to the tone values corresponding to any two adjacent pixel points in the subregion image, wherein the binary group is a tone binary group. Namely, acquiring a component image of each converted HSV image in an H channel, and marking the component image as a tone image, wherein tone values of two left and right adjacent pixel points in the tone image form a tone binary group as follows:
(The expression is reproduced only as an image in the original document; from the definitions it is the pair $(H_u, H_v)$.) Here, $(H_u, H_v)$ denotes the hue doublet formed by the $u$-th pixel and the adjacent $v$-th pixel; $H_u$ denotes the hue value corresponding to the $u$-th pixel, and $H_v$ denotes the hue value corresponding to the $v$-th pixel.
By analogy, all hue binary groups in each hue image are obtained, the frequency of occurrence of each hue binary group in the hue image is counted, and the color texture characteristics of the sub-region image corresponding to the hue image are obtained as follows:
(Formula available only as an image in the original document.) Here, $Z_i$ denotes the color texture feature corresponding to the $i$-th sub-region image; $N_i$ denotes the number of all hue doublets in the $i$-th sub-region image; and $n_j$ denotes the number of occurrences of the $j$-th hue doublet.
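A sketch of the doublet statistics for one sub-region is given below. The doublet construction follows the text (hue values of horizontally adjacent pixels), while the reduction to a single scalar is shown as a frequency entropy, which is an assumption: the exact formula appears only as an image in the source.

    from collections import Counter
    import cv2
    import numpy as np

    def color_texture_feature(region_bgr: np.ndarray) -> float:
        hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
        hue = hsv[:, :, 0]                                # H-channel "tone image"

        # Hue doublets: each pixel paired with its right-hand neighbour.
        doublets = list(zip(hue[:, :-1].ravel(), hue[:, 1:].ravel()))
        counts = Counter(doublets)
        total = sum(counts.values())                      # N_i in the text

        # Assumed reduction: Shannon entropy of the doublet frequencies
        # (larger value <=> more complicated color texture, per the text).
        freqs = np.array(list(counts.values())) / total   # n_j / N_i
        return float(-np.sum(freqs * np.log(freqs)))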
By analogy, the color texture feature corresponding to each sub-region image is obtained; the larger the value, the more complicated the color texture information in that sub-region image. To screen out the sub-region images that best show the textile's own surface texture, the embodiment of the invention processes each tone image with the Canny operator to obtain a corresponding gradient image, in which the pixel value of each point is the gradient value of the corresponding point in the tone image. A larger gradient value indicates a more pronounced color texture feature at that pixel, so the color texture feature and the gradient values are combined to obtain the color influence degree of each sub-region image as follows:
(Formula available only as an image in the original document.) Here, $Y_i$ denotes the color influence degree corresponding to the $i$-th sub-region image; $g_k$ denotes the gradient value corresponding to the $k$-th pixel in the tone image of the $i$-th sub-region image; $M_i$ denotes the number of all pixels in the tone image of the $i$-th sub-region image; and $Z_i$ denotes the color texture feature corresponding to the $i$-th sub-region image.
By analogy, the color influence degree corresponding to each sub-region image is obtained; the larger the color influence degree, the greater the influence on the textile's own texture. Therefore, in the embodiment of the invention, the sub-region images are sorted by color influence degree in ascending order, and the sub-region images with the smallest color influence degrees are selected as the preferred region images.
Preferably, in the embodiment of the present invention, the preset screening value is set to be 5, that is, the sub-region images corresponding to the first 5 color influence degrees in the ascending order are marked as the preferred region images.
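A sketch of this screening step follows. Combining the mean gradient with the texture feature by multiplication is an assumption (the patent's formula is only reproduced as an image), and Sobel magnitudes stand in for the per-pixel gradient values (the text names the Canny operator, whose output is an edge map rather than raw gradients). `color_texture_feature` is the helper sketched above, and `regions` is assumed to be a list of BGR sub-region images.

    import cv2
    import numpy as np

    def color_influence_degree(region_bgr: np.ndarray) -> float:
        hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
        hue = hsv[:, :, 0].astype(np.float32)

        # Per-pixel gradient magnitude of the tone image: g_k in the text.
        gx = cv2.Sobel(hue, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(hue, cv2.CV_32F, 0, 1)
        mean_gradient = float(np.mean(np.hypot(gx, gy)))  # (1/M_i) * sum of g_k

        return mean_gradient * color_texture_feature(region_bgr)

    def preferred_regions(regions):
        # Ascending sort; the 5 smallest influence degrees are kept.
        return sorted(regions, key=color_influence_degree)[:5]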
Step S300, acquiring the variance of the pixel values in each preferred area image, and obtaining the color distribution index of each preferred area image according to the variance, wherein the preferred area image corresponding to the smallest color distribution index is the optimal area image.
After a plurality of preferred region images have been screened out based on the color influence degree by the method in step S200, the color distribution within each preferred region image is calculated. A red channel image, a green channel image and a blue channel image corresponding to the three RGB channels of each preferred region image are acquired; the red variance of the pixel values of all pixels in the red channel image, the green variance in the green channel image and the blue variance in the blue channel image are calculated respectively. The variance of the pixel values in each preferred region image comprises the red variance, the green variance and the blue variance.
Specifically, in the embodiment of the present invention, the red, green and blue channel images corresponding to each preferred region image in the R, G and B channels are obtained first; then the red variance over all pixels of the red channel image, the green variance over all pixels of the green channel image and the blue variance over all pixels of the blue channel image are calculated respectively (variance calculation is a common mathematical technique and is not described again). Finally, the color distribution index corresponding to each preferred region image is obtained from its red, green and blue variances as follows:
(Formula available only as an image in the original document.) Here, $F$ denotes the color distribution index of the preferred region image; $\sigma_R^2$ denotes the red variance corresponding to the preferred region image; $\sigma_G^2$ denotes the green variance; and $\sigma_B^2$ denotes the blue variance.
By analogy, the color distribution index corresponding to each preferred region image is obtained, namely the color distribution indexes of the 5 preferred region images. The larger the color distribution index, the more disordered the color distribution within the preferred region image. Therefore, in the embodiment of the invention, the preferred region image with the smallest color distribution index is selected as the optimal region image, and the texture information of the waste textile is extracted from it, ensuring the accuracy of the subsequent category identification. A sketch of this selection follows.
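Summing the three channel variances into one index is an assumption: the patent's formula is only reproduced as an image, and any monotone combination of the variances would fit the surrounding description.

    import cv2
    import numpy as np

    def color_distribution_index(region_bgr: np.ndarray) -> float:
        blue, green, red = cv2.split(region_bgr)          # OpenCV stores B, G, R
        # Assumed combination: sum of the per-channel pixel-value variances.
        return float(np.var(red) + np.var(green) + np.var(blue))

    def optimal_region(preferred):
        # The preferred region with the smallest index is the optimal region.
        return min(preferred, key=color_distribution_index)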
S400, acquiring a feature matrix corresponding to the optimal area image, and calculating confidence between the feature matrix and each feature matrix in a standard feature matrix library, wherein the standard feature matrix library comprises feature matrices corresponding to different types of textiles; and when the confidence coefficient is maximum, the corresponding characteristic matrix in the standard characteristic matrix library is the optimal matrix, and the class of the textile corresponding to the optimal matrix is the class of the waste textile.
The optimal region image with the most pronounced surface texture information of the waste textile was obtained in step S300, so the textile's feature parameters can be extracted from it and its category judged from those parameters. The optimal region image is filtered multiple times to obtain a plurality of filtered images; the pixel values of each filtered image form a one-dimensional feature vector, and a matrix is constructed from the feature vectors of the filtered images to obtain the feature matrix corresponding to the optimal region image.
Specifically, the method for obtaining the feature matrix corresponding to the optimal region image is as follows:
First, the obtained optimal region image is filtered to extract its texture trend, yielding a corresponding filtered image. In the embodiment of the invention, Gabor filters are used; to ensure accurate extraction of the texture information of the optimal region image, several Gabor filters with different parameters are applied, producing a plurality of filtered images. The specific filter kernel size is set by the implementer.
Preferably, in the embodiment of the present invention, the filter kernel size is set to a fixed value (given only as an image in the original document), and 8 filtered images are acquired for the optimal region image.
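A sketch of such a filter bank follows, assuming the 8 filters differ in orientation; the kernel size and the varied parameter are left to the implementer by the patent, so the values below are illustrative.

    import cv2
    import numpy as np

    def gabor_responses(optimal_gray: np.ndarray, n_filters: int = 8):
        responses = []
        for i in range(n_filters):
            theta = i * np.pi / n_filters        # evenly spaced orientations
            kernel = cv2.getGaborKernel(ksize=(17, 17), sigma=4.0, theta=theta,
                                        lambd=10.0, gamma=0.5, psi=0.0)
            responses.append(cv2.filter2D(optimal_gray, cv2.CV_32F, kernel))
        return responses                         # the 8 filtered images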
Then, each of the obtained filtered images is flattened: its pixel values are extracted row by row (or column by column) and converted into a one-dimensional feature vector; by analogy, the feature vectors corresponding to the 8 filtered images are obtained. A feature matrix characterizing the texture style of the optimal region image is then constructed from these feature vectors, specifically:
(Matrix expression available only as an image in the original document.) Here, $P$ denotes the feature matrix; $x_{K,1}$ denotes the element in row $K$, column 1 of the feature matrix; and $K$ denotes the number of feature vectors, with $K = 8$ in the embodiment of the invention.
The calculation method of each element in the feature matrix is as follows:
(Formula available only as an image in the original document.) Here, $x_{a,b}$ denotes the element in row $a$, column $b$ of the feature matrix (each row of the feature matrix in the embodiment of the invention is a feature vector); $v_{a,j}$ denotes the $j$-th element of row $a$, i.e. the $j$-th element of the $a$-th feature vector; $v_{b,j}$ denotes the $j$-th element of row $b$, i.e. the $j$-th element of the $b$-th feature vector; and $L$ denotes the number of elements in each feature vector.
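A sketch of this construction follows. Stacking the flattened responses as rows matches the text, while computing each entry as a normalized inner product of two rows (a Gram matrix) is an assumption, since the element formula appears only as an image in the source.

    import numpy as np

    def feature_matrix(responses) -> np.ndarray:
        # One 1-D feature vector per filtered image, stacked as rows (K x L).
        vectors = np.stack([r.ravel().astype(np.float64) for r in responses])
        length = vectors.shape[1]                # L: elements per vector
        # Assumed entry: x[a, b] = (1/L) * sum over j of v[a, j] * v[b, j].
        return vectors @ vectors.T / length      # K x K feature matrix P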
Furthermore, since the feature matrix reflects the texture information of the optimal region image, the waste textile can be classified according to it. Before classification, a standard feature matrix library is constructed in advance; it comprises the feature matrices corresponding to textiles of different categories and is used to match and classify the waste textiles to be classified.
The standard feature matrix library in the embodiment of the invention is constructed as follows: a number of textile categories are obtained from big-data analysis or experience; each category is sampled to obtain corresponding image data; and the feature matrix of each category is computed by the same procedure used for the waste textile's feature matrix, so that the library is formed from the feature matrices of the different categories. In practice, the choice and number of textile categories are set by the implementer.
The obtained standard feature matrix library is expressed as follows: (expression available only as an image in the original document; from the definitions it collects one matrix per category, $\{G_1, G_2, \ldots\}$.) Here, $G_c$ denotes the feature matrix corresponding to the $c$-th textile category.
Matching is carried out by calculating confidence coefficients between the feature matrix of the waste textile and the feature matrix in the standard feature matrix library, and the specific method for calculating the confidence coefficient comprises the following steps:
(Formula available only as an image in the original document.) Here, $R_c$ denotes the confidence that the waste textile belongs to category $c$; $G_c$ denotes the feature matrix corresponding to the $c$-th textile category; and $P$ denotes the feature matrix corresponding to the waste textile.
The confidence is calculated between the feature matrix corresponding to the waste textile and every feature matrix in the standard feature matrix library; the library matrix with the highest confidence is marked as the optimal matrix, and the textile category corresponding to the optimal matrix is the category of the waste textile to be classified. A sketch of this matching follows.
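The confidence formula is only reproduced as an image in the source, so the sketch below assumes an inverse Frobenius-distance similarity; `library` is a hypothetical mapping from category name to its standard matrix $G_c$.

    import numpy as np

    def classify(p: np.ndarray, library: dict) -> str:
        # Assumed confidence: 1 / (1 + ||P - G_c||_F); equals 1 when P == G_c.
        def confidence(g: np.ndarray) -> float:
            return 1.0 / (1.0 + np.linalg.norm(p - g))
        # The category with the highest confidence is returned.
        return max(library, key=lambda cat: confidence(library[cat]))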
In summary, in the embodiment of the present invention, an initial image of the waste textile to be classified is first obtained and divided into a plurality of sub-region images by superpixel segmentation, and each sub-region image is analyzed to obtain its color texture feature. The gradient value of each pixel in each sub-region image is then obtained to form a gradient image, and the color influence degree of each sub-region image is computed from its gradient image and color texture feature; a smaller color influence degree indicates that the sub-region image contains more distinct texture information of the textile itself, so several sub-region images with the smallest color influence degrees are selected as preferred region images for further analysis. Next, the red, green and blue channel images of each preferred region image are computed, the pixel-value variance of each channel is calculated, the color distribution index is obtained from the three variances, and the preferred region image with the smallest color distribution index is taken as the optimal region image, which most clearly embodies the surface texture information of the waste textile. Finally, the feature matrix of the optimal region image is extracted and matched against all feature matrices in the empirically constructed standard feature matrix library, and the textile category corresponding to the matrix with the highest confidence is selected as the final category of the waste textile. In this way, the influence of other surface information on the textile's own texture is reduced, and the waste textile is classified by the features of the optimal region image, with advantages such as high classification accuracy and high detection speed.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A waste textile classifying method based on image processing is characterized by comprising the following steps:
obtaining an initial image of waste textiles to be classified, wherein the initial image comprises a surface area of the waste textiles; dividing the initial image into a plurality of sub-regions according to a superpixel segmentation method, wherein each sub-region corresponds to a sub-region image;
forming hue binary groups by any two adjacent pixel points in the subregion image, and obtaining the color texture characteristics corresponding to the subregion image according to the occurrence frequency of each hue binary group; obtaining a gradient value of each pixel point in the sub-region image, and obtaining the color influence degree of the sub-region image according to the gradient value and the color texture feature; the sub-region image with the color influence degree within a preset range is a preferred region image;
acquiring the variance of pixel values in each preferred region image, and obtaining the color distribution index of each preferred region image according to the variance, wherein the preferred region image corresponding to the smallest color distribution index is the optimal region image;
acquiring a characteristic matrix corresponding to the optimal area image, and calculating the confidence coefficient between the characteristic matrix and each characteristic matrix in a standard characteristic matrix library, wherein the standard characteristic matrix library comprises characteristic matrices corresponding to different types of textiles; and the characteristic matrix in the standard characteristic matrix library corresponding to the maximum confidence coefficient is an optimal matrix, and the category of the textile corresponding to the optimal matrix is the category of the waste textile.
2. The method of claim 1, wherein the step of forming a color tone binary group by any two adjacent pixels in the sub-region image comprises:
obtaining a tone image corresponding to the subregion image in an HSV color space, wherein the pixel value of each pixel point in the tone image is a tone value, and a binary group is formed according to the tone values corresponding to any two adjacent pixel points in the subregion image and is a tone binary group.
3. The method according to claim 1, wherein the step of deriving the color texture feature corresponding to the sub-region image according to the number of occurrences of each hue binary group comprises:
the color texture feature calculation method comprises the following steps:
(Formula available only as an image in the original document.) Here, $Z_i$ denotes the color texture feature corresponding to the $i$-th sub-region image; $N_i$ denotes the number of all hue doublets in the $i$-th sub-region image; and $n_j$ denotes the number of occurrences of the $j$-th hue doublet.
4. The method according to claim 1, wherein the step of obtaining the color influence degree of the sub-region image according to the gradient values and the color texture features comprises:
the method for calculating the color influence degree comprises the following steps:
(Formula available only as an image in the original document.) Here, $Y_i$ denotes the color influence degree corresponding to the $i$-th sub-region image; $g_k$ denotes the gradient value corresponding to the $k$-th pixel in the $i$-th sub-region image; $M_i$ denotes the number of all pixels in the $i$-th sub-region image; and $Z_i$ denotes the color texture feature corresponding to the $i$-th sub-region image.
5. The method of claim 1, wherein the step of obtaining the variance of pixel values in each of the preferred region images comprises:
acquiring a red channel image, a green channel image and a blue channel image which respectively correspond to the three channels of RGB of each preferred region image; respectively calculating the red variance of pixel values of all pixels in the red channel image, the green variance of pixel values of all pixels in the green channel image and the blue variance of pixel values of all pixels in the blue channel image; the variance of the pixel value in each of the preferred region images includes a red variance, a green variance, and a blue variance.
6. The method according to claim 5, wherein the step of obtaining the color distribution index of each of the preferred region images according to the variance comprises:
the color distribution index calculation method comprises the following steps:
(Formula available only as an image in the original document.) Here, $F$ denotes the color distribution index of a preferred region image; $\sigma_R^2$ denotes the red variance corresponding to the preferred region image; $\sigma_G^2$ denotes the green variance; and $\sigma_B^2$ denotes the blue variance.
7. The method according to claim 1, wherein the step of obtaining the feature matrix corresponding to the optimal region image comprises:
and carrying out multiple filtering processing on the optimal region image to obtain multiple filtering images, wherein pixel values of pixel points in each filtering image form a one-dimensional characteristic vector, and constructing a matrix according to the characteristic vectors corresponding to the multiple filtering images to obtain characteristic matrices corresponding to all the optimal region images.
8. The method of claim 1, wherein the step of calculating a confidence level between the feature matrix and each feature matrix in a library of standard feature matrices comprises:
the confidence coefficient calculation method comprises the following steps:
(Formula available only as an image in the original document.) Here, $R_c$ denotes the confidence that the waste textile belongs to category $c$; $G_c$ denotes the feature matrix corresponding to textiles of the $c$-th category; and $P$ denotes the feature matrix corresponding to the waste textile.
CN202210858579.XA 2022-07-21 2022-07-21 Waste textile classification method based on image processing Active CN115082741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210858579.XA CN115082741B (en) 2022-07-21 2022-07-21 Waste textile classification method based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210858579.XA CN115082741B (en) 2022-07-21 2022-07-21 Waste textile classification method based on image processing

Publications (2)

Publication Number Publication Date
CN115082741A 2022-09-20
CN115082741B 2023-04-14

Family

ID=83243699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210858579.XA Active CN115082741B (en) 2022-07-21 2022-07-21 Waste textile classification method based on image processing

Country Status (1)

Country Link
CN (1) CN115082741B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218458A (en) * 2023-11-08 2023-12-12 海门市缔绣家用纺织品有限公司 Automatic classification method for decorative textiles based on artificial intelligence


Also Published As

Publication number Publication date
CN115082741B (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN113808138B (en) Artificial intelligence-based wire and cable surface defect detection method
CN114549522A (en) Textile quality detection method based on target detection
CN109829914A (en) The method and apparatus of testing product defect
Almogdady et al. A flower recognition system based on image processing and neural networks
Liu et al. Fabric defect detection based on information entropy and frequency domain saliency
CN109145964B (en) Method and system for realizing image color clustering
CN106934794A (en) Information processor, information processing method and inspection system
CN111899274B (en) Particle size analysis method based on deep learning TEM image segmentation
CN114170418B (en) Multi-feature fusion image retrieval method for automobile harness connector by means of graph searching
CN113888536B (en) Printed matter double image detection method and system based on computer vision
CN114782329A (en) Bearing defect damage degree evaluation method and system based on image processing
CN115082741B (en) Waste textile classification method based on image processing
KR101813223B1 (en) Method and apparatus for detecting and classifying surface defect of image
CN115294116A (en) Method, device and system for evaluating dyeing quality of textile material based on artificial intelligence
CN115272838A (en) Information fusion technology-based marine plankton automatic identification method and system
CN115272350A (en) Method for detecting production quality of computer PCB mainboard
KR100858681B1 (en) Image filter combination generating method for fingerprint image generation
CN116842210B (en) Textile printing texture intelligent retrieval method based on texture features
CN108171683B (en) Cell counting method adopting software for automatic identification
CN108765426A (en) automatic image segmentation method and device
CN116543414A (en) Tongue color classification and tongue redness and purple quantification method based on multi-model fusion
CN111401485A (en) Practical texture classification method
CN115937075A (en) Texture fabric flaw detection method and medium based on unsupervised mode
CN104156696B (en) Bi-directional-image-based construction method for quick local changeless feature descriptor
CN111476253B (en) Clothing image classification method, device and equipment and image classification method and device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant