CN116842210A - Textile printing texture intelligent retrieval method based on texture features - Google Patents


Info

Publication number
CN116842210A
CN116842210A (application CN202311124395.1A)
Authority
CN
China
Prior art keywords
feature text
texture
image
feature
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311124395.1A
Other languages
Chinese (zh)
Other versions
CN116842210B (en)
Inventor
曾国云 (Zeng Guoyun)
赵慧利 (Zhao Huili)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Yusen Home Textile Technology Co ltd
Original Assignee
Nantong Yusen Home Textile Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Yusen Home Textile Technology Co ltd
Priority to CN202311124395.1A
Publication of CN116842210A
Application granted
Publication of CN116842210B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/53: Querying
    • G06F16/535: Filtering based on additional data, e.g. user or group profiles
    • G06F16/58: Retrieval characterised by using metadata
    • G06F16/583: Retrieval using metadata automatically derived from the content
    • G06F16/5846: Retrieval using extracted text
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/40: Extraction of image or video features
    • G06V10/70: Recognition using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent retrieval method for textile printing textures based on texture features, which comprises the following steps: collecting sample printing texture images of textiles for preprocessing; performing texture image separation; extracting features of the separated texture images by using the gray level co-occurrence matrix method, and constructing a feature text library; obtaining the printing texture of the textile to be retrieved; and performing similarity matching between the obtained feature text to be retrieved and the feature text library, and selecting the sample printing texture image of the textile with the highest similarity as the retrieval result. By combining preprocessing, texture image separation, feature extraction and similarity matching on collected sample printing texture images, the method can quickly retrieve sample images similar to the printing texture of the textile to be retrieved, improving the efficiency of intelligent retrieval and realizing efficient and accurate retrieval of textile printing textures.

Description

Textile printing texture intelligent retrieval method based on texture features
Technical Field
The invention relates to the technical field of image processing analysis, in particular to an intelligent retrieval method for textile printing textures based on texture features.
Background
With the continuous progress of science, technology and society, people's demands on clothing continue to grow. Textiles are an important component of clothing, and their styles and pattern designs are constantly changing and being renewed. Many textile enterprises hold thousands to tens of thousands of patterns, and thousands of new patterns may be produced every day. However, the conventional sample retrieval method is inefficient, consumes substantial manpower and material resources, and cannot meet demand. To improve efficiency and accuracy, textile manufacturers have begun to use computer and image retrieval technology to build automated textile pattern retrieval systems. With such a system, a specific textile pattern can be retrieved quickly and efficiently: computer image retrieval performs feature extraction and similarity calculation on textile patterns, realizing automatic sample retrieval.
With the continuous development of computer technology and image processing, image retrieval is widely applied in many fields, including image library management, medical image analysis, intelligent monitoring and commodity recommendation, and will continue to advance, providing more accurate, rapid and intelligent image search and recommendation services. Textile printing texture images, however, are relatively irregular, and conventional feature extraction methods typically focus on local information of the image, such as texture and shape. For more complex and irregular printing texture images, the extraction of local texture information may be affected by image noise, interference, deformation and other factors, so that the extracted features are inaccurate or unstable, reducing the accuracy of textile printing texture recognition.
For the problems in the related art, no effective solution has been proposed at present.
Disclosure of Invention
Aiming at the problems in the related art, the invention provides an intelligent retrieval method for textile printing textures based on texture features, which aims to overcome the technical problems in the prior related art.
For this purpose, the invention adopts the following specific technical scheme:
an intelligent retrieval method for textile printing textures based on texture features comprises the following steps:
S1, collecting sample printing texture images of textiles for preprocessing;
s2, performing texture image separation on the preprocessed printing texture image by using an improved difference box dimension algorithm;
S3, extracting features of the separated texture images by using the gray level co-occurrence matrix method, and constructing a feature text library;
s4, acquiring printing textures of textiles to be searched, sequentially preprocessing, separating texture images and extracting features, and representing the extracted features in a text form to obtain a text of the features to be searched;
and S5, performing similarity matching on the obtained feature text to be searched and a feature text library, and selecting a sample printing texture image of the textile with the highest similarity as a search result.
As a preferred embodiment, the collecting of sample printing texture images of the textile for preprocessing comprises the following steps:
s11, collecting printing texture images of textiles through a camera shooting technology;
s12, performing image cleaning treatment on the acquired printing texture image, including noise removal, image artifact elimination and image brightness balance treatment;
S13, converting the cleaned printing texture image from the RGB color space to grayscale to obtain a gray image;
s14, performing image enhancement processing on the obtained gray image through histogram equalization to obtain an enhanced printing texture gray image.
As a preferred embodiment, the texture image separation of the preprocessed printed texture image using the modified differential box dimension algorithm comprises the steps of:
s21, selecting one coordinate point in the enhanced printing texture gray level image, and defining a window area image with the size of M multiplied by M by taking the coordinate point as a midpoint;
S22, defining the grid size s, expanding it into a window area image of (s×M) × (s×M), with 1 < s < M/2 and s = 2, 4, 8, …, and calculating the scale value r = s/M;
S23, dividing the expanded window area image into s×s overlapping grids, and calculating the differential box dimension n_r(i, j) of each grid;
S24, calculating the number of boxes N(r) at the scale r in the M×M window area, fitting the straight line log N(r) versus log(1/r) by the least squares method, and taking the slope of the straight line as the fractal dimension D;
s25, according to the fractal dimension D, the printing texture image area is subjected to area segmentation by using a threshold segmentation method.
As a preferred embodiment, the expanded window is divided into s×s overlapping grids, and calculating the differential box dimension n_r(i, j) of each grid comprises the following steps:
s231, taking the expanded window area image as a three-dimensional space or a curved surface, enabling an x axis and a y axis to represent plane positions, and enabling a z axis to represent gray values;
S232, dividing the xy plane into a plurality of s×s stacked grids;
s233, calculating gray value differences between adjacent pixel points in each S multiplied by S grid to obtain a differential sequence, and converting the differential sequence into a binary sequence;
S234, calculating the box dimension of the binary sequence by using an autocorrelation function method, and taking the calculation result as the differential box dimension n_r(i, j) of each grid, wherein n_r(i, j) = h − k + 1, i denotes the row index of the grid, j denotes the column index of the grid, h denotes the box position where the highest gray value in the (i, j)-th grid of the expanded window area image falls, and k denotes the box position where the lowest gray value in that grid falls.
As a preferred embodiment, the number of boxes N(r) at the scale r in the M×M window area is calculated as:

N(r) = Σ_(i,j) n_r(i, j)

wherein N(r) represents the number of boxes at the scale r within the M×M window area;
n_r(i, j) represents the differential box dimension of each grid;
and the sum runs over all S grids, S being the total number of grids within the M×M window area.
As a preferred embodiment, the feature extraction of the separated texture image by using the gray level co-occurrence matrix method, and the construction of the feature text library comprise the following steps:
s31, defining parameters of a gray level co-occurrence matrix, wherein the parameters comprise gray level numbers, distances and angles, the gray level numbers represent the number of gray levels in an image, the distances represent the distances among pixels considered in calculating the co-occurrence matrix, and the angles represent the relative position relations among pixel pairs used for calculating the co-occurrence matrix;
s32, for each pixel in the separated texture image, calculating a gray level co-occurrence matrix between each pixel and adjacent pixels;
s33, extracting texture image features from the gray level co-occurrence matrix;
s34, representing the extracted texture features in the form of texts, enabling each text to correspond to a feature vector of a texture image, and constructing a feature text library.
In a preferred embodiment, the matching of the similarity between the obtained feature text to be searched and the feature text library, and selecting the sample print texture image of the textile with the highest similarity as the search result includes the following steps:
s51, performing word segmentation on each feature text in the feature text to be retrieved and the feature text library respectively;
s52, calculating TF-IDF values of phrases in the feature text to be searched and each feature text in the feature text library respectively;
s53, calculating origin distances of phrases in each feature text of the feature text to be searched and the feature text library respectively based on the TF-IDF value obtained by calculation;
s54, calculating the similarity of the word group vectors in each feature text of the feature text to be searched and each feature text of the feature text library through a cosine similarity function.
As a preferred embodiment, calculating the TF-IDF value of each phrase in the feature text to be retrieved and in each feature text of the feature text library comprises the following steps:
S521, counting the total number M of feature texts, comprising the feature text to be retrieved and the feature texts of the feature text library;
S522, counting the total number L of phrases in the feature text to be retrieved and in each feature text of the feature text library;
s523, counting the occurrence times of each phrase in each feature text in the feature text to be searched and the feature text library;
s524, calculating the word frequency TF and the inverse frequency IDF of each phrase respectively, and calculating the TF-IDF value of each phrase according to the word frequency TF and the inverse frequency IDF.
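Steps S521 to S524 can be sketched with the standard TF and IDF definitions (a minimal illustration; the phrase lists below are hypothetical, and the patent's exact counting scheme may differ):

```python
import math

def tf_idf(texts):
    """Per-phrase TF-IDF for a list of segmented feature texts (steps S521-S524).
    TF = phrase count / total phrases in the text;
    IDF = log(number of texts / number of texts containing the phrase)."""
    doc_freq = {}
    for text in texts:
        for phrase in set(text):
            doc_freq[phrase] = doc_freq.get(phrase, 0) + 1
    scores = []
    for text in texts:
        tfidf = {}
        for phrase in set(text):
            tf = text.count(phrase) / len(text)
            idf = math.log(len(texts) / doc_freq[phrase])
            tfidf[phrase] = tf * idf
        scores.append(tfidf)
    return scores

# Illustrative segmented feature texts (phrase names are hypothetical).
library = [["contrast", "high", "direction", "45deg"],
           ["contrast", "low", "frequency", "high"]]
scores = tf_idf(library)
```

A phrase that occurs in every text gets IDF 0 and is thus discounted, while a phrase unique to one text scores highest.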
As a preferred embodiment, the origin distance of the phrases in the feature text to be retrieved and in each feature text of the feature text library is calculated from the computed TF-IDF values using a formula in which:
R represents the origin distance of the phrase;
n represents the number of times the phrase appears in its feature text;
TF represents the term frequency of the phrase in its text;
IDF represents the inverse frequency of the phrase in its text;
k represents a phrase-frequency parameter, whose value is related to the total number L of phrases and is set to 100;
h represents a phrase inverse-frequency parameter, whose value is related to the weight value of the phrase and is set to 1.
As a preferred embodiment, the calculating the similarity of the word group vector in each feature text of the feature text to be searched and the feature text library through the cosine similarity function includes the following steps:
S541, sorting the phrases in each feature text of the feature text to be retrieved and the feature text library in descending order of origin distance, and selecting the phrases in the first half as key phrases;
s542, carrying out normalization processing according to the feature text to be searched and the feature vector corresponding to the keyword group in each feature text of the feature text library;
s543, calculating an inner product between the feature vector of the feature text to be searched and the feature vector of each feature text in the feature text library;
s544, calculating the modular length of the feature vector of the feature text to be searched and the modular length of the feature vector of each feature text in the feature text library;
s545, calculating the similarity between the feature text to be searched and each feature text in the feature text library by using a cosine similarity formula.
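Steps S543 to S545 can be sketched as follows (a minimal illustration; the feature vectors and sample names are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Inner product divided by the product of the vector norms (steps S543-S545)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative normalized feature vectors for the query and two library texts.
query = [0.6, 0.8, 0.0]
library = {"sample_A": [0.6, 0.8, 0.0], "sample_B": [0.0, 0.6, 0.8]}

# The library entry with the highest cosine similarity is the retrieval result.
best = max(library, key=lambda name: cosine_similarity(query, library[name]))
```

Identical vectors give similarity 1, orthogonal vectors give 0, so the maximum directly yields the most similar sample.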
The beneficial effects of the invention are as follows:
1. By combining preprocessing, texture image separation, feature extraction and similarity matching on collected sample printing texture images, the method can quickly retrieve sample images similar to the printing texture of the textile to be retrieved, improving the efficiency of intelligent retrieval and realizing efficient and accurate retrieval of textile printing textures.
2. Through noise removal, image artifact elimination and image brightness balance processing, the invention reduces interference and noise in the image, making the texture pattern clearer and more visible. Converting the printing texture image into a gray image reduces storage and computational complexity and makes the image better suited to texture feature extraction. Histogram equalization improves the contrast and detail rendering of the image, making the texture pattern more prominent, which facilitates subsequent feature extraction and similarity calculation and provides a more accurate and reliable data basis for subsequent texture retrieval and analysis.
3. By improving the differential box dimension algorithm, the preprocessed printing texture image can be separated into texture and background, better highlighting the texture characteristics. Defining the grid size s and calculating the scale value r allows a suitable scale to be selected for texture feature extraction; extraction at different scales captures different details and structural features of the texture, yielding a more comprehensive and accurate texture description. Calculating the differential box dimension of each grid quantifies the complexity and fractal character of the texture: it reflects the local gray-level variation and spatial structure of the texture image and can serve as a measure of texture features, providing useful information and guidance for subsequent texture analysis and applications.
4. Through word segmentation and TF-IDF calculation on the features stored in text form, text features can be effectively extracted and the similarity between texts computed, so the textile sample most similar to the feature text to be retrieved can be found quickly and accurately. Calculating the origin distance of each phrase and normalizing converts the texts into feature-vector representations, which makes similarity calculation, sorting and screening convenient. The cosine similarity formula measures the similarity between text vectors and identifies the most similar textile sample as the retrieval result, improving the retrieval of textile sample printing texture images, reducing the workload of manual screening and matching, and improving working efficiency and accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a texture feature based intelligent retrieval method for textile printing textures according to an embodiment of the present invention.
Detailed Description
For the purpose of further illustrating the various embodiments, the present invention provides the accompanying drawings, which are a part of the disclosure of the present invention, and which are mainly used to illustrate the embodiments and, together with the description, serve to explain the principles of the embodiments, and with reference to these descriptions, one skilled in the art will recognize other possible implementations and advantages of the present invention, wherein elements are not drawn to scale, and like reference numerals are generally used to designate like elements.
According to the embodiment of the invention, an intelligent retrieval method for textile printing textures based on texture features is provided.
The invention will be further described with reference to the accompanying drawings and detailed description, as shown in fig. 1, a textile printing texture intelligent retrieval method based on texture features according to an embodiment of the invention, the method comprises the following steps:
S1, collecting sample printing texture images of textiles for preprocessing.
It should be noted that various noises, such as high frequency noise, low frequency noise, or color noise, may be introduced into the image capturing apparatus during the capturing process, and an artifact or a problem of uneven illumination may occur in the image, which may interfere with the observation and analysis of the texture pattern. The quality and the accuracy of the image can be improved by carrying out image cleaning treatment on the acquired printing texture image, so that the subsequent texture analysis and treatment are more reliable and effective. This is very important in the quality inspection, design and production fields of the textile industry.
Specifically, collecting sample printing texture images of the textile for preprocessing comprises the following steps:
s11, acquiring a printing texture image of the textile by a camera shooting technology.
S12, performing image cleaning processing on the acquired printing texture image, including noise removal, image artifact elimination and image brightness balance processing.
It should be noted that, noise removal is to use image processing algorithms, such as mean filtering, median filtering, gaussian filtering, and the like, to reduce noise in an image. These filters can smooth the image and remove noise so that the grain pattern is more clearly visible.
Image artifact cancellation is the elimination of artifacts or ghosts in an image if such artifacts are present, using deconvolution algorithms or image restoration techniques. These techniques can restore details and texture in the image, making the image more accurately reflect the printed texture of the textile.
The image brightness balance processing is to adjust the brightness and contrast of the image so that the texture pattern is more consistent and clear under different illumination conditions. This may be achieved using techniques such as histogram equalization, adaptive enhancement, etc.
S13, converting the cleaned printing texture image from the RGB color space to grayscale to obtain a gray image.
It should be noted that the RGB color space represents the red, green and blue channel components of an image, and RGB color space conversion is the process of converting an RGB image into another color space, here a single-channel gray image.
S14, performing image enhancement processing on the obtained gray image through histogram equalization to obtain an enhanced printing texture gray image.
The gray image can be enhanced through histogram equalization to improve its contrast and detail rendering. The steps for histogram equalization of a gray image are as follows:
calculating a histogram of the original gray image: the number of pixels per gray level is counted and a cumulative distribution function of the number of pixels is calculated.
Each gray level is mapped to a new gray level according to the cumulative distribution function. This mapping function will cause the gray levels to be evenly distributed throughout the image.
A mapping function is applied to replace each pixel value in the original gray scale image with a corresponding new gray scale level.
Through histogram equalization, the gray scale range of the original gray scale image can be expanded, so that the brightness is more uniformly distributed, and the details and the contrast in the image are enhanced.
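The equalization steps above can be sketched in Python (a minimal sketch using NumPy; the image values are illustrative):

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram-equalize an 8-bit gray image, following the steps above."""
    # 1. Histogram: number of pixels per gray level, and its cumulative sum.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # 2. Map each gray level so the levels spread evenly over [0, 255].
    mapping = np.clip(
        np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    # 3. Replace every pixel with its new gray level.
    return mapping[gray]

# A low-contrast image: values clustered in [100, 103].
img = np.array([[100, 100, 101, 101],
                [102, 102, 103, 103]], dtype=np.uint8)
out = equalize_histogram(img)
```

After equalization the four clustered levels are stretched across the full 0 to 255 range.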
S2, performing texture image separation on the preprocessed printing texture image by using an improved difference box dimension algorithm.
Specifically, the texture image separation of the preprocessed printing texture image by using the improved difference box dimension algorithm comprises the following steps:
s21, selecting one coordinate point in the enhanced printing texture gray level image, and defining a window area image with the size of M multiplied by M by taking the coordinate point as a midpoint.
One coordinate point is selected as the center point. This coordinate point may be a pixel location in the image. The width and height of the window are set to M with the coordinate point as the center. And (3) ensuring that the window area does not exceed the boundary of the image, and extracting a window area image with a defined size of M multiplied by M from the enhanced printing texture gray level image by using a method for extracting the window area.
S22, defining the grid size s, expanding it into a window area image of (s×M) × (s×M), with 1 < s < M/2 and s = 2, 4, 8, …, and calculating the scale value r = s/M.
It should be noted that, given the window size M, a grid size s meeting the condition is selected. The grid size s should be a power of 2 and satisfy 1 < s < M/2; for example, s = 2, 4 or 8 may be chosen. The scale value r represents the ratio between the size of each grid in the window area image and the window size M.
S23, dividing the expanded window area image into s×s overlapping grids, and calculating the differential box dimension n_r(i, j) of each grid.
Specifically, the expanded window is divided into s×s overlapping grids, and calculating the differential box dimension n_r(i, j) of each grid comprises the following steps:
s231, taking the expanded window area image as a three-dimensional space or a curved surface, enabling an x axis and a y axis to represent plane positions and enabling a z axis to represent gray values.
S232, dividing the xy plane into a plurality of S multiplied by S stacked grids.
S233, calculating gray value differences between adjacent pixel points in each S multiplied by S grid to obtain a differential sequence, and converting the differential sequence into a binary sequence.
It should be noted that the steps for converting the differential sequence into the binary sequence are as follows:
a threshold is selected for classifying the values in the differential sequence into two classes: 1 greater than or equal to the threshold value, and 0 less than the threshold value.
For each value in the differential sequence, its magnitude relation to the threshold is compared and the result is converted into a binary sequence.
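A minimal sketch of this binarization step (the differential values and the threshold are illustrative):

```python
def to_binary(diff_seq, threshold):
    """Classify each difference: 1 if greater than or equal to the threshold, else 0."""
    return [1 if d >= threshold else 0 for d in diff_seq]

# Illustrative differential sequence and threshold.
bits = to_binary([3, -1, 7, 0, 5], 2)
```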
S234, calculating the box dimension of the binary sequence by using an autocorrelation function method, and taking the calculation result as the differential box dimension n_r(i, j) of each grid, wherein n_r(i, j) = h − k + 1, i denotes the row index of the grid, j denotes the column index of the grid, h denotes the box position where the highest gray value in the (i, j)-th grid of the expanded window area image falls, and k denotes the box position where the lowest gray value in that grid falls.
It should be noted that the autocorrelation function method is a method for measuring the similarity and periodicity of the sequences themselves. It evaluates the characteristics of the sequence by comparing the correlation between the sequence and its own different delayed versions based on the calculation of the autocorrelation function. The autocorrelation function is a function that measures the correlation between the sequence and its own delayed version.
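The per-grid computation n_r(i, j) = h − k + 1 and its summation into N(r) can be sketched as follows. This is a simplified illustration of the standard differential box-counting scheme, which the patent's improved algorithm refines: non-overlapping grids are assumed here, the box height follows the common choice s·G/M for G gray levels, and the window values are hypothetical:

```python
import numpy as np

def box_count(window: np.ndarray, s: int, gray_levels: int = 256) -> int:
    """N(r) for an M x M window at grid size s (scale r = s/M).
    For each s x s grid, n_r(i, j) = h - k + 1, where h and k are the
    vertical box indices holding the grid's highest and lowest gray value.
    Non-overlapping grids are assumed here for simplicity."""
    M = window.shape[0]
    box_height = s * gray_levels / M  # gray axis partitioned like the plane
    total = 0
    for i in range(0, M, s):
        for j in range(0, M, s):
            block = window[i:i + s, j:j + s]
            h = int(block.max() // box_height)  # box of the highest gray value
            k = int(block.min() // box_height)  # box of the lowest gray value
            total += h - k + 1                  # n_r(i, j)
    return total

# 4x4 window, s = 2: four 2x2 grids, box height = 2*256/4 = 128.
win = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200],
                [50, 50, 255, 0],
                [50, 50, 0, 255]], dtype=np.uint8)
n_boxes = box_count(win, 2)
```

Three flat grids contribute 1 box each, while the high-range bottom-right grid spans two boxes, so the count reflects local gray-level variation.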
S24, calculating the number of boxes N(r) at the scale r within the M×M window area, fitting the straight line log N(r) versus log(1/r) by the least squares method, and taking the slope of the straight line as the fractal dimension D.
Specifically, the number of boxes N(r) at the scale r in the M×M window area is calculated as N(r) = Σ_(i,j) n_r(i, j), wherein N(r) represents the number of boxes at the scale r within the M×M window area, n_r(i, j) represents the differential box dimension of each grid, and the sum runs over all S grids, S being the total number of grids within the M×M window area.
It should be noted that the least squares method is a common fitting method; the best fit is obtained by minimizing the error between the observed values and the fitted values.
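The least squares fit of log N(r) against log(1/r) can be sketched as follows (the box counts below are illustrative, not measured values):

```python
import math

# Illustrative box counts N(r) at scales r = s/M for M = 32.
scales = [2 / 32, 4 / 32, 8 / 32]
counts = [900, 240, 65]

xs = [math.log(1 / r) for r in scales]
ys = [math.log(c) for c in counts]

# Least-squares slope of log N(r) against log(1/r) is the fractal dimension D.
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
D = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
```

For a gray-level surface, D typically falls between 2 and 3; rougher, more complex textures give larger slopes.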
S25, according to the fractal dimension D, the printing texture image area is subjected to area segmentation by using a threshold segmentation method.
It should be noted that threshold segmentation is a simple and commonly used image segmentation method, which determines the region to which a pixel belongs by comparing the pixel's gray value with a preset threshold. The steps for region segmentation using threshold segmentation are as follows:
an appropriate threshold is selected. According to the characteristic of the fractal dimension D, a proper threshold value can be selected for segmentation by combining the gray distribution characteristics of the image.
Each pixel in the image is traversed and the gray value is compared to a threshold value.
If the gray value of a pixel is greater than the threshold value, it is marked as the object region. If the gray value is equal to or less than the threshold value, it is marked as a background area.
Based on the result of the marking, a segmented image is obtained, wherein the object area represents the area of the printed texture image.
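A minimal sketch of the thresholding steps above (the threshold and image values are illustrative):

```python
import numpy as np

def threshold_segment(image: np.ndarray, threshold: int) -> np.ndarray:
    """True where the pixel is above the threshold (object/texture region),
    False for the background region."""
    return image > threshold

# Illustrative image and threshold.
img = np.array([[30, 200],
                [180, 20]], dtype=np.uint8)
mask = threshold_segment(img, 128)
```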
And S3, extracting features of the separated texture images by using the gray level co-occurrence matrix method, and constructing a feature text library.
Specifically, the feature extraction of the separated texture image by using the gray level co-occurrence matrix method, and the construction of the feature text library comprise the following steps:
s31, defining parameters of a gray level co-occurrence matrix, wherein the parameters comprise gray level numbers, distances and angles, the gray level numbers represent the number of gray levels in an image, the distances represent the distances among pixels considered in calculating the co-occurrence matrix, and the angles represent the relative position relations among pixel pairs used for calculating the co-occurrence matrix.
The number of gray levels is selected according to the gray-level range of the image; in general, 256 levels can be used. Determining the distance and angle: according to the characteristics and requirements of the image, the inter-pixel distance and the angle used in calculating the co-occurrence matrix are selected. Common distances are 1 or 2, and common angles are 0°, 45°, 90° and 135°.
S32, for each pixel in the separated texture image, calculating a gray level co-occurrence matrix between each pixel and adjacent pixels.
It should be noted that the co-occurrence matrix represents statistical information of the frequency of occurrence of the pixel pairs of different gray levels.
S33, extracting texture image features from the gray level co-occurrence matrix.
Textile printing texture features include: Texture direction: textile printing patterns generally have pronounced directionality. A directional filter (e.g., a Gabor filter) may be used to extract texture direction information.
Texture frequency: the textures in textile printed patterns typically have different frequency components. The texture frequency information may be extracted using a Discrete Wavelet Transform (DWT) or fourier transform or the like.
Texture shape: the texture shape of the textile print can be varied, such as dots, lines, networks, petals, etc. Shape descriptors (e.g., boundary descriptors, region descriptors) may be used to describe texture shapes.
S34, representing the extracted texture features in the form of texts, enabling each text to correspond to a feature vector of a texture image, and constructing a feature text library.
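As an illustrative sketch of S31 to S34, a co-occurrence matrix and a few common statistics derived from it can be computed as below; the quantization to 8 gray levels and the choice of contrast, energy, homogeneity, and entropy as features are assumptions, since S33 above names direction, frequency, and shape features instead:

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for offset (dy, dx),
    i.e. distance 1 at angle 0 degrees by default."""
    q = gray.astype(int) * levels // 256    # quantize to `levels` gray levels
    H, W = q.shape
    mat = np.zeros((levels, levels), dtype=float)
    for y in range(H - dy):
        for x in range(W - dx):
            mat[q[y, x], q[y + dy, x + dx]] += 1
    mat /= mat.sum()                        # normalize to joint probabilities
    return mat

def glcm_features(mat):
    """Common scalar statistics of a normalized co-occurrence matrix."""
    i, j = np.indices(mat.shape)
    nz = mat[mat > 0]
    return {
        "contrast": float(((i - j) ** 2 * mat).sum()),
        "energy": float((mat ** 2).sum()),
        "homogeneity": float((mat / (1.0 + np.abs(i - j))).sum()),
        "entropy": float(-(nz * np.log(nz)).sum()),
    }
```

A perfectly uniform image yields zero contrast and unit energy, since all pixel pairs fall into a single matrix cell.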
S4, acquiring printing textures of the textiles to be searched, sequentially carrying out pretreatment, texture image separation and feature extraction, and representing the extracted features in a text form to obtain the text of the features to be searched.
And S5, performing similarity matching on the obtained feature text to be searched and a feature text library, and selecting a sample printing texture image of the textile with the highest similarity as a search result.
Specifically, the step of performing similarity matching on the obtained feature text to be searched and a feature text library, and selecting a sample printing texture image of the textile with the highest similarity as a search result comprises the following steps:
s51, performing word segmentation on the feature text to be searched and each feature text in the feature text library.
S52, calculating TF-IDF values of phrases in the feature text to be searched and each feature text in the feature text library respectively.
It should be noted that TF-IDF (Term Frequency-Inverse Document Frequency) is a statistical method for measuring the importance of words in text, commonly used in information retrieval and text mining. TF-IDF calculates the weight of a word by combining its term frequency (TF) and inverse document frequency (IDF).
Specifically, the calculating of the phrase TF-IDF of each feature text in the feature text to be searched and the feature text library respectively includes the following steps:
s521, counting the total number M of feature texts, namely the feature text to be retrieved together with the feature texts in the feature text library.
S522, counting the total number L of phrases in each feature text in the feature text to be searched and the feature text library respectively.
S523, counting the occurrence times of each phrase in the feature text to be searched and each feature text in the feature text library.
S524, calculating the word frequency TF and the inverse frequency IDF of each phrase respectively, and calculating the TF-IDF value of each phrase according to the word frequency TF and the inverse frequency IDF.
Specifically, the word frequency TF of each phrase is calculated by the formula TF = (number of occurrences of the phrase in the feature text) / (total number L of phrases in the feature text).
The inverse frequency IDF of a phrase is calculated as IDF = log(M / (1 + the number of feature texts in which the phrase appears)).
The TF-IDF value of each phrase is calculated as TF-IDF = TF × IDF.
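A minimal sketch of S521 to S524, assuming M counts the feature texts (which is consistent with the IDF formula above) and that each text is already a list of phrases produced by the word-segmentation step:

```python
import math
from collections import Counter

def tf_idf(text, library):
    """TF = occurrences / total number L of phrases in the text;
    IDF = log(M / (1 + number of feature texts containing the phrase)),
    with M the number of feature texts; TF-IDF = TF * IDF."""
    M = len(library)
    L = len(text)
    counts = Counter(text)
    scores = {}
    for phrase, c in counts.items():
        tf = c / L
        df = sum(1 for t in library if phrase in t)   # texts containing the phrase
        scores[phrase] = tf * math.log(M / (1 + df))
    return scores
```

A phrase that is frequent in the query text but rare in the library scores higher than one that appears in every feature text.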
S53, calculating the original point distance of the phrase in each feature text of the feature text to be searched and the feature text library based on the TF-IDF value obtained by calculation.
Specifically, the calculation formula for calculating the original point distance of the phrase in each feature text of the feature text to be searched and the feature text library based on the TF-IDF value obtained by calculation is as follows:
wherein R represents the origin distance of the phrase; n represents the number of occurrences of each phrase in the feature text; TF represents the word frequency of each phrase in the feature text; IDF represents the inverse frequency of each phrase in the feature text; K represents a phrase-frequency correlation parameter, whose value is related to the total number L of phrases and is taken as 100; and H represents a phrase inverse-frequency correlation parameter, whose value is related to the weight of the phrase and is taken as 1.
S54, calculating the similarity of the word group vectors in each feature text of the feature text to be searched and each feature text of the feature text library through a cosine similarity function.
Specifically, the calculating the similarity of the word group vector in each feature text of the feature text to be searched and the feature text library through the cosine similarity function includes the following steps:
s541, sorting the phrases in each of the feature text to be retrieved and the feature texts in the feature text library in descending order of origin distance, and selecting the phrases in the first half as key phrases.
S542, carrying out normalization processing according to the feature vectors corresponding to the keyword groups in the feature texts to be searched and each feature text in the feature text library.
It should be noted that normalization brings the numerical range of the feature vector to between 0 and 1, which facilitates subsequent similarity calculation. Normalization eliminates the influence of feature-vector length differences between feature texts on the similarity calculation, allowing similarities between feature texts to be compared more accurately.
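A sketch of the normalization in S542; L2 (unit-length) normalization is assumed here, matching the later use of modular lengths:

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit modular length so that length
    differences between feature texts do not skew the similarity."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)
```

After normalization the modular length of every feature vector is 1, so cosine similarity reduces to a plain inner product.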
S543, calculating an inner product between the feature vector of the feature text to be searched and the feature vector of each feature text in the feature text library.
It should be noted that the feature vector of the feature text to be retrieved and the feature vector of each feature text in the feature text library are multiplied element by element at corresponding positions, and the products are summed to obtain the value of the inner product.
S544, calculating the modular length of the feature vector of the feature text to be searched and the modular length of the feature vector of each feature text in the feature text library.
It should be noted that the modular length of the feature vector of the feature text to be retrieved and that of each feature text in the feature text library may be calculated using the Euclidean norm (also called the 2-norm).
S545, calculating the similarity between the feature text to be searched and each feature text in the feature text library by using a cosine similarity formula.
It should be noted that cosine similarity measures how close the directions of two vectors are. The formula is: cosine similarity = inner product / (modular length 1 × modular length 2), where the inner product is taken between the feature vector of the feature text to be retrieved and the feature vector of a feature text in the feature text library, modular length 1 is the modular length of the former, and modular length 2 is the modular length of the latter. The cosine similarity ranges from -1 to 1; values closer to 1 indicate higher similarity, and values closer to -1 indicate lower similarity.
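The matching in S543 to S545 can be sketched as follows; the feature text library is assumed here to map sample names to their feature vectors:

```python
import math

def cosine_similarity(v1, v2):
    """cosine = inner product / (modular length 1 * modular length 2)."""
    inner = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return inner / (n1 * n2)

def best_match(query_vec, library):
    # S5: select the sample with the highest similarity as the retrieval result
    return max(library, key=lambda name: cosine_similarity(query_vec, library[name]))
```

For example, a query vector pointing nearly along a library vector yields a similarity near 1 and that sample is returned.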
In summary, by means of the technical scheme, sample printing texture images of textiles are collected and subjected to preprocessing, texture image separation, feature extraction, and similarity matching, so that sample images similar to the printing texture of the textile to be retrieved can be found quickly; this improves the efficiency of intelligent retrieval of textile printing textures and realizes efficient, accurate retrieval.

Through noise removal, image artifact elimination, and image brightness balancing, interference and noise in the image are reduced, making the texture pattern clearer and more visible. Converting the printing texture image to a gray image reduces the storage and computation complexity of the data, and the gray image is better suited to texture feature extraction. Histogram equalization improves the contrast and detail display of the image, making the texture pattern more prominent, which facilitates subsequent feature extraction and similarity calculation and provides a more accurate and reliable data basis for subsequent texture retrieval and analysis.

By improving the differential box dimension algorithm, the preprocessed printing texture image can be separated into texture and background, better highlighting the texture characteristics; defining the grid size s and calculating the scale value r allows a suitable scale to be selected for texture feature extraction. Extracting at different scales captures different details and structural features of the texture, yielding a more comprehensive and accurate texture description. Calculating the differential box dimension of each grid quantifies the complexity and fractal character of the texture: the differential box dimension reflects the local gray-level variation and spatial structure of the texture image and can serve as a measure of texture features, providing useful information and guidance for subsequent texture analysis and application.

Through word segmentation and TF-IDF calculation on the features stored in text form, text features can be extracted effectively and the similarity between texts can be computed, so the textile sample most similar to the feature text to be retrieved can be found quickly and accurately. Calculating the origin distances of the phrases and applying normalization converts the text into a feature vector representation, making it convenient to compute, sort, and screen similarities between texts. The cosine similarity formula measures the similarity between text vectors, and the most similar textile sample is selected as the retrieval result; this improves the retrieval of textile sample printing texture images, reduces the workload of manual screening and matching, and improves working efficiency and accuracy.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (10)

1. An intelligent retrieval method for textile printing textures based on texture features is characterized by comprising the following steps:
s1, collecting sample printing texture images of textiles for pretreatment;
s2, performing texture image separation on the preprocessed printing texture image by using an improved difference box dimension algorithm;
s3, extracting features of the separated texture images by using a gray level co-occurrence matrix method, and constructing a feature text library;
s4, acquiring printing textures of textiles to be searched, sequentially preprocessing, separating texture images and extracting features, and representing the extracted features in a text form to obtain a text of the features to be searched;
and S5, performing similarity matching on the obtained feature text to be searched and a feature text library, and selecting a sample printing texture image of the textile with the highest similarity as a search result.
2. The intelligent retrieval method for textile printing textures based on texture features according to claim 1, wherein the pretreatment of collecting sample printing texture images of textiles comprises the following steps:
s11, collecting printing texture images of textiles through a camera shooting technology;
s12, performing image cleaning treatment on the acquired printing texture image, including noise removal, image artifact elimination and image brightness balance treatment;
s13, performing RGB color space conversion on the print texture image after the cleaning treatment to obtain a gray image;
s14, performing image enhancement processing on the obtained gray image through histogram equalization to obtain an enhanced printing texture gray image.
3. The intelligent retrieval method for textile printing textures based on texture features according to claim 1, wherein the texture image separation of the preprocessed printing texture image using the improved differential box dimension algorithm comprises the following steps:
s21, selecting one coordinate point in the enhanced printing texture gray level image, and defining a window area image with the size of M multiplied by M by taking the coordinate point as a midpoint;
s22, defining a grid size s and expanding the window area image into an image of (s×M)×(s×M), with 1 < s < M/2 and s = 2, 4, 8, …, and calculating a scale value r = s/M;
s23, dividing the expanded window area image into s×s overlapping grids, and calculating the differential box dimension n_r(i,j) of each grid;
s24, calculating the number of boxes N(r) under the scale r in the M×M window area, fitting a straight line of log N(r) against log(1/r) by the least square method, and taking the slope of the straight line as the fractal dimension D;
s25, according to the fractal dimension D, the printing texture image area is subjected to area segmentation by using a threshold segmentation method.
4. The intelligent retrieval method for textile printing textures based on texture features according to claim 3, wherein dividing the expanded window area image into s×s overlapping grids and calculating the differential box dimension n_r(i,j) of each grid comprises the following steps:
s231, taking the expanded window area image as a three-dimensional space or a curved surface, enabling an x axis and a y axis to represent plane positions, and enabling a z axis to represent gray values;
s232, dividing an xy plane into a plurality of sxs stacked grids;
s233, calculating gray value differences between adjacent pixel points in each S multiplied by S grid to obtain a differential sequence, and converting the differential sequence into a binary sequence;
s234, calculating the box dimension of the binary sequence by using an autocorrelation function method, and taking the calculation result as the differential box dimension n_r(i,j) of each grid, wherein n_r(i,j) = h - k + 1, i denotes the row index of the grid, j denotes the column index of the grid, h denotes the position where the highest gray value in the (i,j)-th grid of the expanded window area image falls, and k denotes the position where the lowest gray value in the (i,j)-th grid of the expanded window area image falls.
5. The intelligent retrieval method for textile printing textures based on texture features according to claim 1, wherein the calculation formula for calculating the number of boxes N(r) under the scale r in the M×M window area is:
N(r) = Σ n_r(i,j), the sum being taken over all S grids,
wherein N(r) represents the number of boxes at the scale r within the M×M window area;
n_r(i,j) represents the differential box dimension of each grid;
S represents the total number of grids within the M×M window area.
6. The intelligent retrieval method for textile printing textures based on texture features according to claim 1, wherein extracting features of the separated texture images by using a gray level co-occurrence matrix method and constructing a feature text library comprise the following steps:
s31, defining parameters of a gray level co-occurrence matrix, wherein the parameters comprise gray level numbers, distances and angles, the gray level numbers represent the number of gray levels in an image, the distances represent the distances among pixels considered in calculating the co-occurrence matrix, and the angles represent the relative position relations among pixel pairs used for calculating the co-occurrence matrix;
s32, for each pixel in the separated texture image, calculating a gray level co-occurrence matrix between each pixel and adjacent pixels;
s33, extracting texture image features from the gray level co-occurrence matrix;
s34, representing the extracted texture features in the form of texts, enabling each text to correspond to a feature vector of a texture image, and constructing a feature text library.
7. The intelligent retrieval method for textile printing textures based on texture features according to claim 1, wherein the steps of performing similarity matching on the obtained feature text to be retrieved and a feature text library, and selecting a sample printing texture image of the textile with the highest similarity as a retrieval result comprise the following steps:
s51, performing word segmentation on each feature text in the feature text to be retrieved and the feature text library respectively;
s52, calculating TF-IDF values of phrases in the feature text to be searched and each feature text in the feature text library respectively;
s53, calculating origin distances of phrases in each feature text of the feature text to be searched and the feature text library respectively based on the TF-IDF value obtained by calculation;
s54, calculating the similarity of the word group vectors in each feature text of the feature text to be searched and each feature text of the feature text library through a cosine similarity function.
8. The intelligent retrieval method for textile printing textures based on texture features according to claim 7, wherein the calculation of the phrase TF-IDF of each feature text in the feature text to be retrieved and the feature text library respectively comprises the following steps:
s521, counting the total number M of feature texts, namely the feature text to be retrieved together with the feature texts in the feature text library;
s522, counting the total number L of phrases in each feature text in the feature text to be searched and the feature text library respectively;
s523, counting the occurrence times of each phrase in each feature text in the feature text to be searched and the feature text library;
s524, calculating the word frequency TF and the inverse frequency IDF of each phrase respectively, and calculating the TF-IDF value of each phrase according to the word frequency TF and the inverse frequency IDF.
9. The intelligent retrieval method of textile printing textures based on texture features according to claim 8, wherein the calculation formula for calculating the original point distance of the phrase in each feature text of the feature text to be retrieved and the feature text library based on the calculated TF-IDF value is:
wherein R represents the origin distance of the phrase;
n represents the number of occurrences of each phrase in its feature text;
TF represents the word frequency of each phrase in the feature text;
IDF represents the inverse frequency of each phrase in the feature text;
K represents a phrase-frequency correlation parameter, whose value is related to the total number L of phrases and is taken as 100;
H represents a phrase inverse-frequency correlation parameter, whose value is related to the weight of the phrase and is taken as 1.
10. The intelligent retrieval method for textile printing textures based on texture features according to claim 9, wherein the calculation of the similarity of the word group vectors in each feature text of the feature text to be retrieved and the feature text library through cosine similarity function comprises the following steps:
s541, sorting according to the sequence from big to small according to the original point distance of the phrase in each feature text of the feature text to be searched and the feature text library, and selecting the phrase of the first half section as a key phrase;
s542, carrying out normalization processing according to the feature text to be searched and the feature vector corresponding to the keyword group in each feature text of the feature text library;
s543, calculating an inner product between the feature vector of the feature text to be searched and the feature vector of each feature text in the feature text library;
s544, calculating the modular length of the feature vector of the feature text to be searched and the modular length of the feature vector of each feature text in the feature text library;
s545, calculating the similarity between the feature text to be searched and each feature text in the feature text library by using a cosine similarity formula.
CN202311124395.1A 2023-09-01 2023-09-01 Textile printing texture intelligent retrieval method based on texture features Active CN116842210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311124395.1A CN116842210B (en) 2023-09-01 2023-09-01 Textile printing texture intelligent retrieval method based on texture features

Publications (2)

Publication Number Publication Date
CN116842210A true CN116842210A (en) 2023-10-03
CN116842210B CN116842210B (en) 2023-12-26

Family

ID=88162079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311124395.1A Active CN116842210B (en) 2023-09-01 2023-09-01 Textile printing texture intelligent retrieval method based on texture features

Country Status (1)

Country Link
CN (1) CN116842210B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218458A (en) * 2023-11-08 2023-12-12 海门市缔绣家用纺织品有限公司 Automatic classification method for decorative textiles based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090324026A1 (en) * 2008-06-27 2009-12-31 Palo Alto Research Center Incorporated System and method for finding a picture image in an image collection using localized two-dimensional visual fingerprints
CN105354276A (en) * 2015-10-29 2016-02-24 江南大学 Content-based lace retrieval system
CN107256246A (en) * 2017-06-06 2017-10-17 西安工程大学 PRINTED FABRIC image search method based on convolutional neural networks



Also Published As

Publication number Publication date
CN116842210B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
Gavhale et al. An overview of the research on plant leaves disease detection using image processing techniques
CN108363970B (en) Method and system for identifying fish species
CN110866896B (en) Image saliency target detection method based on k-means and level set super-pixel segmentation
US20110158535A1 (en) Image processing apparatus and image processing method
CN116842210B (en) Textile printing texture intelligent retrieval method based on texture features
CN101059425A (en) Method and device for identifying different variety green tea based on multiple spectrum image texture analysis
CN110245593A (en) A kind of images of gestures extraction method of key frame based on image similarity
CN108829711B (en) Image retrieval method based on multi-feature fusion
Tavoli et al. Weighted PCA for improving Document Image Retrieval System based on keyword spotting accuracy
CN107886539B (en) High-precision gear visual detection method in industrial scene
CN109543723A (en) A kind of image clustering method of robust
CN114170418B (en) Multi-feature fusion image retrieval method for automobile harness connector by means of graph searching
CN109213886B (en) Image retrieval method and system based on image segmentation and fuzzy pattern recognition
CN112633202B (en) Hyperspectral image classification algorithm based on dual denoising combined multi-scale superpixel dimension reduction
CN116246174B (en) Sweet potato variety identification method based on image processing
CN115512123A (en) Multi-period key growth characteristic extraction and time period classification method for hypsizygus marmoreus
CN113033602A (en) Image clustering method based on tensor low-rank sparse representation
CN110533083B (en) Casting defect recognition method based on Adaboost model of SVM
CN116912674A (en) Target detection method and system based on improved YOLOv5s network model under complex water environment
CN109299295B (en) Blue printing layout database searching method
Ghosh et al. A filter ensemble feature selection method for handwritten numeral recognition
CN105844299B (en) A kind of image classification method based on bag of words
CN111127407A (en) Fourier transform-based style migration counterfeit image detection device and method
Rajesh et al. Automatic data acquisition and spot disease identification system in plants pathology domain: agricultural intelligence system in plant pathology domain
CN115082741A (en) Waste textile classifying method based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant