CN115330771B - Cloth texture detection method - Google Patents

Cloth texture detection method Download PDF

Info

Publication number
CN115330771B
CN115330771B · Application CN202211249247.8A
Authority
CN
China
Prior art keywords
texture
sampling
cloth
pixels
pixel
Prior art date
Legal status
Active
Application number
CN202211249247.8A
Other languages
Chinese (zh)
Other versions
CN115330771A (en)
Inventor
邹玲玲
Current Assignee
Nantong Juran Textile Co ltd
Original Assignee
Nantong Juran Textile Co ltd
Priority date
Filing date
Publication date
Application filed by Nantong Juran Textile Co ltd filed Critical Nantong Juran Textile Co ltd
Priority to CN202211249247.8A priority Critical patent/CN115330771B/en
Publication of CN115330771A publication Critical patent/CN115330771A/en
Application granted granted Critical
Publication of CN115330771B publication Critical patent/CN115330771B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0004 — Industrial image inspection
    • G06T 5/10 — Image enhancement or restoration by non-spatial domain filtering
    • G06T 5/80; G06T 5/90
    • G06T 7/13 — Edge detection
    • G06V 10/46 — Descriptors for shape, contour or point-related features, e.g. scale-invariant feature transform (SIFT) or bags of words (BoW)
    • G06V 10/75 — Organisation of image or video matching processes
    • G06V 10/764 — Recognition or understanding using machine-learning classification
    • G06T 2207/20056 — Discrete and fast Fourier transform (DFT, FFT)
    • G06T 2207/20132 — Image cropping
    • G06T 2207/30124 — Fabrics; textile; paper
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Abstract

The invention relates to the field of data processing, in particular to a cloth texture detection method, comprising the following steps: acquiring image information data of the cloth and preprocessing it to obtain a gray-level image; acquiring the global distribution characteristics of the gray-level image and constructing an adaptive down-sampling Gaussian pyramid; completing key point matching based on the Gaussian pyramid and obtaining the qualification rate of the cloth according to the number of parallel lines. The scheme of the invention can thus accurately evaluate the quality of the cloth.

Description

Cloth texture detection method
Technical Field
The invention relates to the field of data processing, in particular to a cloth texture detection method.
Background
With economic development, worldwide demand for high-end cloth is increasing. What makes cloth "high-end" is mainly the complexity of its texture and the degree to which it matches the template. Manual cloth texture inspection is inefficient and its results are influenced by subjective factors; in the prior art, cloth texture flaw detection is performed by computer vision through convolutional neural networks. However, because cloth textures are rich and complex, such models have many parameters, train slowly, generalize weakly and detect poorly, so the technology needs further optimization.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a cloth texture detection method, which adopts the following technical solutions:
the invention provides a cloth texture detection method, which comprises the following steps:
acquiring nested texture images of the cloth, wherein the nested texture images comprise a plurality of production images to be detected, aligning the production images to be detected and preprocessing the production images to obtain gray level images;
acquiring global distribution characteristics of the gray level image, and constructing a self-adaptive downsampling Gaussian pyramid;
matching key points based on the Gaussian pyramid, and obtaining the qualification rate of the cloth according to the number of parallel lines;
the specific process of constructing the self-adaptive downsampling Gaussian pyramid comprises the following steps:
performing two-dimensional discrete Fourier transform on the gray-level image to obtain a spectrogram, finding the minimum period part in the spectrogram to obtain the single-period pattern, obtaining the minimum circumscribed rectangle of the single-period pattern, and translating the minimum circumscribed rectangle to obtain new rectangular regions;
performing edge detection on the gray-level image corresponding to the minimum circumscribed rectangle to obtain the texture contour, calculating the distance between any two edges and classifying the pixels into background pixels and texture pixels, thereby obtaining a preliminary two-class labelling of the new rectangular region;
and, based on the preliminary two-class labelling of the new rectangular region, performing adaptive down-sampling after giving different pixels weights, to obtain the Gaussian pyramid.
Preferably, the adaptive down-sampling after the weights are given to the different pixels, and the specific process of obtaining the gaussian pyramid is as follows:
determining the mapping relation between the pixel point in the next sampling layer and the pixel point in the previous sampling layer; in the first sampling layer, for each 2 x 2 region, four pixels are given corresponding weights; the region of 2 x 2 was enlarged one turn outward to give a region of 4 x 4: if no newly added texture pixel point appears, the texture pixel in the original region is in an isolated or marginal position, and the given weight is twice of the initial weight; based on the weights of the pixel points, traversing all 2 x 2 areas during the first down-sampling, and reserving the newly sampled layer with the maximum weight among the four pixel points; after one-time down-sampling is completed, updating the pixel weight according to the same principle to prepare for next-time sampling; and stopping down-sampling when the texture information exists in the cloth image.
Preferably, a threshold T₀ is set for the texture information richness; when the texture information richness F is greater than or equal to T₀, down-sampling is stopped and the construction of the adaptive down-sampling Gaussian pyramid is completed. The texture information richness is

F = (Σ_{m=1}^{M} x_m) / (M × N)

where M represents the total number of pixel rows in the image, N represents the total number of pixel columns, the denominator M × N represents the total number of pixels, and x_m is the total number of texture pixels in the m-th row.
Preferably, difference-of-Gaussian computation and extremum detection are performed based on the adaptively down-sampled pyramid;
gradient information of the pixels in the neighbourhood of each key point is counted;
the key point descriptor sets of the template texture and of the texture image under test are obtained; for each template texture key point, the best-matching key point of its descriptor in the other descriptor set is computed using the Euclidean distance, and the two successfully matched points are connected; the qualification rate of the cloth is then calculated according to the number of parallel lines among the connecting lines.
The invention has the beneficial effects that:
according to the method, from the aspect of image feature matching, based on the SIFT algorithm, under the scene of high-end cloth texture detection, key points of a template image and a product image are matched, the qualification rate is obtained according to the parallel relation of the key points, and quantitative cloth texture detection is completed. Considering that the texture of high-end cloth has complex nested patterns, the integrity of information can be damaged by a universal down-sampling mode, and the wrong matching of key points is caused; too many pyramid groups bring a large amount of calculation, and a termination condition for downsampling needs to be set. Therefore, the invention constructs the self-adaptive down-sampling Gaussian pyramid according to the global distribution characteristics of the texture, avoids the mistaken deletion of the key points with strong invariance of the texture part and realizes more accurate characteristic point matching.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a block diagram of a method for detecting a texture of a piece of cloth according to the present invention;
fig. 2 is a grayscale image of a high-end piece of cloth.
Detailed Description
To further explain the technical means adopted by the present invention to achieve the predetermined objects and their effects, the following detailed description of the embodiments, structures, characteristics and effects of the present invention is made with reference to the accompanying drawings and preferred embodiments. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The method mainly aims to construct a self-adaptive down-sampling Gaussian pyramid according to the global distribution characteristics of nested textures, complete key point matching and count the number of parallel lines to obtain the cloth qualification rate.
Specifically, the method for detecting a cloth texture provided by the present invention, as shown in fig. 1, includes the following steps:
step 1, acquiring nested texture images of the cloth, wherein the nested texture images comprise template images and a plurality of production images to be detected, aligning the production images to be detected and carrying out gray level processing to obtain gray level images.
In this embodiment, the nested texture image of the cloth is collected, and the specific process of correcting the angle of the image under test and preprocessing it is as follows: (1) Because shooting angles vary, the collected texture images under test are tilted and offset; the images are aligned through perspective transformation. (2) After alignment, the texture image of the high-end cloth still includes some irrelevant background, so the image is cropped to remove the uninteresting non-texture part and converted to grayscale; an example result is shown in fig. 2.
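As a minimal illustration of the graying step, the RGB-to-gray conversion can be sketched with NumPy. The BT.601 luma weights are an assumption: the embodiment only states that the cropped image is grayed, not which formula is used.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to a gray-level image using the
    common ITU-R BT.601 luma weights (an assumption; the patent does not
    specify the graying formula)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# Flat test image: every pixel is RGB (100, 150, 200).
img = np.ones((4, 4, 3)) * np.array([100.0, 150.0, 200.0])
gray = to_grayscale(img)
```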
And 2, constructing a self-adaptive down-sampling Gaussian pyramid according to the global distribution characteristics of the cloth nested texture.
It should be noted that the construction of the gaussian pyramid in the conventional SIFT includes two parts: firstly, the original image is subjected to downsampling processing of deleting even rows and columns, so that the size of the image is changed into one fourth of the original size, and different groups are obtained; secondly, the images with the same size in a group are subjected to Gaussian blur by using different parameters, so that the images have multiple scales of transformation. The Gaussian pyramid constructed in this way has two parameters of group number and layer number. In a scene of high-end cloth nested textures, key points with strong partial invariance can be deleted by mistake in a mode of simply deleting partial rows and columns of an image in a general technology, and in order to ensure the quality of the key points and improve the effect of feature matching, the invention constructs a self-adaptive down-sampling Gaussian pyramid by the characteristics of texture distribution, and the method specifically comprises the following steps:
a) By converting between the spatial domain and the frequency domain, the global distribution rule of the texture is determined from the single-period pattern in the nested texture of the high-end cloth.
b) And classifying the pixels by combining the global distribution rule of the texture to construct a targeted self-adaptive down-sampling pyramid.
In step a), since complex periodic geometric nested textures frequently appear in high-end cloth, the image can be converted from the spatial domain to the frequency domain with the Fourier transform to find the single periodic part; the inverse Fourier transform then converts back to the spatial domain to locate the single-period pattern of the texture. The texture of high-end cloth is usually obtained by splicing translated copies of this single-period pattern, which allows the distribution of the global nested texture to be analysed.
The specific process is as follows:
1) Obtain a spectrogram through the two-dimensional discrete Fourier transform of the preprocessed image. The gray level at texture boundaries in the high-end cloth image changes sharply and appears as a high-frequency signal; the gray level of the untextured solid-color background is almost constant and appears as a low-frequency signal; the periodicity of the frequency domain reflects the periodicity of the distribution of the cloth's globally nested texture. The smallest periodic part is found in the spectrogram and located, via the inverse Fourier transform, as the single-period pattern of the geometrically nested texture.
2) Obtain the minimum bounding rectangle R(l, w, (x₀, y₀)) of the single periodic pattern, where l and w represent the length and width of the rectangle and (x₀, y₀) represents the coordinates of its geometric center. The global texture image of the high-end cloth is obtained by translating and splicing this rectangle; the coordinates of the geometric center change, but the relative position of each pixel with respect to the geometric center inside its rectangular area does not. Performing the k-th translation on rectangle R yields a new rectangular region R_k(l, w, (x₀ + Δx_k, y₀ + Δy_k)), where Δx_k represents the change of the abscissa of the geometric center in the k-th translation and Δy_k represents the change of its ordinate. The global distribution rule of the texture is thus determined by the translations of the single-period pattern.
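The period location and translation rule above can be sketched in Python. The 1-D FFT period estimate and the modulo coordinate mapping are simplifying assumptions: the method uses the 2-D transform, and the translations need not be whole periods.

```python
import numpy as np

def estimate_period(signal):
    """Estimate the repeat period of a 1-D periodic signal via the DFT:
    the dominant non-DC frequency index k gives period = len(signal) / k.
    (1-D sketch; the patent's 2-D DFT case works analogously.)"""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC component
    return len(signal) / k

def map_to_base_rect(r, c, length, width):
    """Map a pixel (r, c) of the tiled global texture back into the base
    rectangle (hypothetical simplification: translation by whole periods)."""
    return r % width, c % length

row = np.tile([0.0, 0.0, 1.0, 1.0], 16)   # synthetic texture row, period 4
tile = np.arange(6).reshape(2, 3)         # 2 x 3 single-period pattern
full = np.tile(tile, (2, 2))              # spliced 4 x 6 global texture
```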
In step b), once the global distribution rule of the texture is obtained, only the pixels inside rectangle R need to be examined; applying the translations given by the global distribution rule then covers every pixel in the image. Combined with an edge detection result, the texture and background pixels inside R are classified, giving a two-class labelling of the pixels in the whole image. Targeted down-sampling can then be performed, which avoids massive loss of texture feature pixels, effectively preserves the integrity of the complex texture information of the high-end cloth and allows key points to be matched more accurately. Meanwhile, when the texture information in the image becomes rich to a certain degree, down-sampling is stopped to avoid the mistaken deletion of strongly invariant key points of the texture part; limiting the number of layers also reduces the computation of subsequent key point localization.
Specifically, the process of weighting the pixels and adaptively down-sampling is as follows:
1) Perform edge detection on the image of rectangle R; because the cloth texture has a certain width, edge detection yields only the contour of the texture. Number the detected edges from outside to inside; record the distance between the first and second edges as d₁ and the distance between the third and fourth edges as d₂. The pixels in the two band regions between these edge pairs are taken as texture pixels and labelled 1; the remaining pixels are regarded as background pixels and labelled 0. This completes the classification of the pixels inside R.
2) Based on the classification of the pixels in R and the invariance of each pixel's position relative to the geometric center, the pixels of every translated rectangle R_k can be classified after translation, completing the preliminary two-class labelling of texture and background pixels in the global image.
3) Part of the pixels must be deleted at every down-sampling. The general technique directly deletes even rows and columns, which easily loses texture key points in a high-end cloth texture detection scene and degrades key point matching and the detection of the cloth qualification rate. To retain the texture pixels to a greater extent, the specific steps of adaptive down-sampling after weighting pixels by importance, based on the preliminary two-class labelling, are as follows:
(1) Determine the mapping relation between a pixel in the next sampling layer and the pixels in the previous sampling layer. Since the area of the image becomes one quarter of the original after each down-sampling, four pixels of the previous sampling layer map to one pixel of the next.
(2) In the first sampling layer, for each 2 × 2 region, the four pixels are each given a weight according to their importance for texture detection. A region may contain both texture and background pixels. Since more texture key points are desired, texture pixels are given the larger weight; the weight ratio can be set to 2:1, i.e. one texture pixel counts as much as two background pixels.
(3) The weight of a pixel depends not only on its own type but also on the types of its neighbourhood pixels. Enlarge the 2 × 2 region outward by one ring to obtain a 4 × 4 region: if no new texture pixel appears, the texture pixels in the original region are in an isolated or marginal position and would easily be lost or disappear after repeated sampling, so they need a larger weight. The weight of such a texture pixel becomes twice its initial weight.
(4) Based on the pixel weights, traverse all 2 × 2 regions during the first down-sampling and retain, in the new sampling layer, the pixel with the largest weight among the four. After one down-sampling is completed, update the pixel weights by the same principle to prepare for the next sampling.
(5) Performing every down-sampling with the same method keeps the loss of cloth texture information to a minimum. Texture pixels are retained to a greater extent, so more key points are located in the texture part, which benefits the detection of the cloth texture by key point matching.
(6) To reduce redundant computation, the down-sampling of the Gaussian pyramid needs a termination condition. Under the above rule, texture pixels are retained to a greater extent while background pixels are deleted. When most of the information remaining in the cloth image is texture information, further down-sampling may mistakenly delete strongly invariant key points of the texture part, so down-sampling must stop. The down-sampling process is judged by computing the texture information richness F, whose formula is given below. A threshold T₀ is set for the texture information richness; when F is greater than or equal to T₀, down-sampling is stopped in order to retain more strongly invariant key points of the texture part, and the construction of the adaptive down-sampling Gaussian pyramid is complete. The suggested value of T₀ is 0.85.
F = (Σ_{m=1}^{M} x_m) / (M × N)

where M represents the total number of pixel rows in the image, N the total number of pixel columns, the denominator M × N the total number of pixels, and the numerator the total number of texture pixels labelled 1 (x_m being the count in the m-th row). The greater the proportion of texture pixels in the image, the larger the richness F and the closer down-sampling is to termination. The pixels are thus classified in combination with the global distribution rule, and the construction of the adaptive down-sampling pyramid is complete.
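The weighted down-sampling of steps (1) through (6) can be sketched as follows. This is a minimal sketch under assumptions: the neighbourhood weight-doubling of step (3) is omitted for brevity, and the weights use the 2:1 texture-to-background ratio suggested in the text.

```python
import numpy as np

def richness(labels):
    """Texture information richness F = texture pixels / total pixels."""
    return labels.sum() / labels.size

def downsample_once(img, labels, weights):
    """Keep, in each 2 x 2 block, the pixel with the largest weight."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    out = np.empty((h, w))
    out_lab = np.empty((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            blk_w = weights[2*i:2*i+2, 2*j:2*j+2]
            k = int(np.argmax(blk_w))                      # flat index of max weight
            out[i, j] = img[2*i:2*i+2, 2*j:2*j+2].ravel()[k]
            out_lab[i, j] = labels[2*i:2*i+2, 2*j:2*j+2].ravel()[k]
    return out, out_lab

def adaptive_pyramid(img, labels, t0=0.85):
    """Build pyramid layers until richness F >= t0 (termination condition)."""
    layers = [img]
    while min(img.shape) >= 2 and richness(labels) < t0:
        weights = np.where(labels == 1, 2.0, 1.0)          # texture : background = 2 : 1
        img, labels = downsample_once(img, labels, weights)
        layers.append(img)
    return layers

# Toy 8 x 8 image with one texture pixel (top-left) in each 2 x 2 block.
img8 = np.arange(64, dtype=float).reshape(8, 8)
labels8 = np.zeros((8, 8), dtype=int)
labels8[::2, ::2] = 1
layers = adaptive_pyramid(img8, labels8)
```

After one pass every surviving pixel is a texture pixel, so F reaches 1.0 and the pyramid stops at two layers.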
And 3, matching key points based on the Gaussian pyramid and obtaining the qualification rate of the cloth according to the number of parallel lines.
In this embodiment, the pass percent of the high-end cloth texture is detected based on the SIFT algorithm, which is specifically as follows:
and carrying out difference and extremum detection based on the self-adaptive down-sampling pyramid. And performing curve fitting on the scale space Gaussian difference function on the basis of the discrete extreme points, calculating the real extreme points, and realizing the accurate positioning of the key points.
Gradient information of the pixels in the neighbourhood of each key point is counted; to guarantee rotation invariance of the feature vector, the main orientation is determined and the coordinate axes are rotated. After rotation, a window around the key point is taken, weighted with a Gaussian window, and 8-direction gradient histograms are drawn on its sub-blocks to form seed points. Each feature consists of 16 seed points, each with vector information in 8 directions, generating a 128-dimensional feature descriptor for the key point.
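The 8-direction gradient histogram that underlies the 128-dimensional descriptor (16 seed points × 8 orientations) can be sketched for a single seed-point patch; Gaussian weighting and rotation to the main orientation are omitted for brevity.

```python
import numpy as np

def orientation_histogram(patch):
    """Accumulate gradient magnitudes of a patch into 8 orientation bins,
    as one seed point of a SIFT-style descriptor (simplified sketch)."""
    gy, gx = np.gradient(patch.astype(float))       # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # angle in [0, 2*pi)
    bins = (ang / (2 * np.pi / 8)).astype(int) % 8  # 8 directions
    hist = np.zeros(8)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist

patch = np.tile(np.arange(4.0), (4, 1))   # gradient purely in the +x direction
hist = orientation_histogram(patch)
```

For a full descriptor, 16 such histograms over a 4 × 4 grid of sub-blocks would be concatenated into the 128-dimensional vector.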
The key point descriptor sets of the template texture and of the texture image under test are obtained; for each template texture key point, the best-matching key point of its descriptor in the other descriptor set is computed using the Euclidean distance, and the two successfully matched points are connected. The qualification rate of the high-end cloth depends on the number of parallel lines among the connecting lines: the more parallel lines, the better the produced cloth meets the template standard and the higher its quality.
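The matching and qualification-rate steps can be sketched as follows. Reading the rule as "the share of match lines in the dominant parallel direction" is an assumption, since the text only states that the rate depends on the number of parallel lines.

```python
import math
from collections import Counter
import numpy as np

def match_keypoints(desc_a, desc_b):
    """For each template descriptor, find the best-matching descriptor in the
    other set by Euclidean distance (nearest neighbour)."""
    return [(i, int(np.argmin(np.linalg.norm(desc_b - d, axis=1))))
            for i, d in enumerate(desc_a)]

def qualification_rate(pts_a, pts_b, pairs):
    """Hypothetical reading of the rule: the share of connecting lines whose
    direction agrees with the dominant (parallel) direction."""
    angles = [round(math.atan2(pts_b[j][1] - pts_a[i][1],
                               pts_b[j][0] - pts_a[i][0]), 6)
              for i, j in pairs]
    return Counter(angles).most_common(1)[0][1] / len(angles)

desc_a = np.eye(3)                 # toy descriptors: template key points
desc_b = np.eye(3)                 # toy descriptors: image under test
pairs = match_keypoints(desc_a, desc_b)
pts_a = [(0, 0), (0, 1), (0, 2)]
pts_b = [(1, 0), (1, 1), (1, 3)]   # third key point drifted: non-parallel line
rate = qualification_rate(pts_a, pts_b, pairs)
```

Here two of the three connecting lines share the dominant direction, so the sketch reports a qualification rate of 2/3.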
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (2)

1. A cloth texture detection method is characterized by comprising the following steps:
acquiring nested texture images of the cloth, wherein the nested texture images comprise a plurality of production images to be detected, aligning the production images to be detected and preprocessing the production images to obtain gray level images;
acquiring global distribution characteristics of the gray level image, and constructing a self-adaptive downsampling Gaussian pyramid;
matching of key points is completed based on a Gaussian pyramid, and the qualification rate of the cloth is obtained according to the number of parallel lines;
the specific process of constructing the self-adaptive downsampling Gaussian pyramid comprises the following steps:
performing two-dimensional discrete Fourier transform on the gray-level image to obtain a spectrogram, finding the minimum period part in the spectrogram to obtain the single-period pattern, obtaining the minimum circumscribed rectangle of the single-period pattern, and translating the minimum circumscribed rectangle to obtain new rectangular regions;
performing edge detection on the gray level image corresponding to the minimum circumscribed rectangle to obtain a texture contour, calculating the distance between any two edges, and classifying pixel points to obtain background pixels and texture pixels; further obtaining a preliminary second classification of the new rectangular area;
based on the preliminary secondary classification of the new rectangular region, different pixels are weighted and then are subjected to adaptive downsampling to obtain a Gaussian pyramid;
the specific process of obtaining the Gaussian pyramid by self-adaptive down-sampling after weighting different pixels is as follows:
determining the mapping relation between a pixel in the next sampling layer and the pixels in the previous sampling layer; in the first sampling layer, for each 2 × 2 region, giving the four pixels corresponding weights; enlarging the 2 × 2 region outward by one ring to obtain a 4 × 4 region: if no new texture pixel appears, the texture pixels in the original region are in an isolated or marginal position, and the given weight is twice the initial weight; based on the pixel weights, traversing all 2 × 2 regions during the first down-sampling and retaining, in the new sampling layer, the pixel with the maximum weight among the four; after one down-sampling is completed, updating the pixel weights according to the same principle to prepare for the next sampling; stopping down-sampling when most of the remaining information in the cloth image is texture information;
performing difference and extremum detection based on the adaptively down-sampled pyramid;
counting the gradient information of the pixels in the neighborhood of each key point;
obtaining the key-point descriptor sets of the template texture and of the texture image to be detected; for the key points of each template texture, using the Euclidean distance to find, in the other descriptor set, the key point that best matches the current key-point descriptor, and connecting each successfully matched pair of points; and calculating the pass rate of the cloth according to the number of parallel lines among the connecting lines.
2. The method as claimed in claim 1, wherein a threshold S₀ is set for the texture information richness; when the texture information richness S is greater than or equal to S₀, down-sampling stops and the construction of the adaptively down-sampled Gaussian pyramid is complete; wherein the texture information richness is

S = ( Σ_{m=1}^{M} T_m ) / (M × N)

where M is the number of pixel rows in the image, N is the total number of pixel columns (so the denominator M × N is the total number of pixels), and T_m is the total number of texture pixels in the m-th row.
CN202211249247.8A 2022-10-12 2022-10-12 Cloth texture detection method Active CN115330771B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211249247.8A CN115330771B (en) 2022-10-12 2022-10-12 Cloth texture detection method

Publications (2)

Publication Number Publication Date
CN115330771A CN115330771A (en) 2022-11-11
CN115330771B true CN115330771B (en) 2023-04-14

Family

ID=83913632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211249247.8A Active CN115330771B (en) 2022-10-12 2022-10-12 Cloth texture detection method

Country Status (1)

Country Link
CN (1) CN115330771B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729654A (en) * 2014-01-22 2014-04-16 青岛新比特电子科技有限公司 Image matching retrieval system on account of improving Scale Invariant Feature Transform (SIFT) algorithm

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11847184B2 (en) * 2020-01-14 2023-12-19 Texas Instruments Incorporated Two-way descriptor matching on deep learning accelerator
CN115115637B (en) * 2022-08-30 2022-12-06 南通市昊逸阁纺织品有限公司 Cloth defect detection method based on image pyramid thought

Similar Documents

Publication Publication Date Title
CN110473231B (en) Target tracking method of twin full convolution network with prejudging type learning updating strategy
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN110992263B (en) Image stitching method and system
CN104217459B (en) A kind of spheroid character extracting method
WO2022179002A1 (en) Image matching method and apparatus, electronic device, and storage medium
CN106127258B (en) A kind of target matching method
CN112163990B (en) Significance prediction method and system for 360-degree image
CN109903379A (en) A kind of three-dimensional rebuilding method based on spots cloud optimization sampling
CN111199245A (en) Rape pest identification method
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN115471682A (en) Image matching method based on SIFT fusion ResNet50
CN113409332B (en) Building plane segmentation method based on three-dimensional point cloud
CN113706591B (en) Point cloud-based three-dimensional reconstruction method for surface weak texture satellite
CN106934395B (en) Rigid body target tracking method adopting combination of SURF (speeded Up robust features) and color features
CN113159103B (en) Image matching method, device, electronic equipment and storage medium
CN115330771B (en) Cloth texture detection method
CN108447038A (en) A kind of mesh denoising method based on non local full variation operator
CN111951162A (en) Image splicing method based on improved SURF algorithm
CN112991395B (en) Vision tracking method based on foreground condition probability optimization scale and angle
CN114964206A (en) Monocular vision odometer target pose detection method
Dai et al. An Improved ORB Feature Extraction Algorithm Based on Enhanced Image and Truncated Adaptive Threshold
CN114943891A (en) Prediction frame matching method based on feature descriptors
CN111754402A (en) Image splicing method based on improved SURF algorithm
CN115393355B (en) Nut internal thread detection method with self-adaptive scale space
CN115717887B (en) Star point rapid extraction method based on gray distribution histogram

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant