CN118411369A - Sewing stitch defect detection method and system based on machine vision


Info

Publication number
CN118411369A
CN118411369A
Authority
CN
China
Prior art keywords
image
network
feature
student
features
Prior art date
Legal status
Pending
Application number
CN202410889988.5A
Other languages
Chinese (zh)
Inventor
Liu Bing (刘冰)
Current Assignee
Hangzhou Orange Weaving Data Technology Co ltd
Original Assignee
Hangzhou Orange Weaving Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Orange Weaving Data Technology Co ltd filed Critical Hangzhou Orange Weaving Data Technology Co ltd
Priority to CN202410889988.5A
Publication of CN118411369A

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a sewing stitch defect detection method and system based on machine vision, wherein the method comprises the following steps: collecting a sewing image and generating a corresponding motion-blurred image based on a random trajectory; extracting image features through a feature extraction module, carrying out co-scale feature fusion on the image features based on an AFF module, determining the image loss after feature fusion, and training to obtain a clear image; acquiring a standard stitch image, determining the student features corresponding to the stitch image, and selecting a pre-trained teacher network; acquiring a first feature map of the teacher network and a second feature map of the student network, mapping them to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis; and inputting the clear image into the trained student network and teacher network, and judging whether the sewing image has a stitch defect.

Description

Sewing stitch defect detection method and system based on machine vision
Technical Field
The invention relates to the technical field of sewing detection, in particular to a sewing stitch defect detection method and system based on machine vision.
Background
In garment manufacturing, sewing is an important process step, and the quality of the final garment product depends on the sewing process, the sewing conditions and the skill of the worker. However, stitch problems are unavoidable during sewing; common stitch problems include skipped stitches, broken threads, loose threads and over-tight seams, so finding and correcting stitch problems in time is an important goal in the garment production process.
However, existing garment production lines still rely mainly on manual operation, with workers inspecting sewing stitches by eye. This increases labor cost and workload, and visual fatigue easily leads to subjective judgment deviations and missed or false detections.
Disclosure of Invention
Aiming at the problems in the prior art, the embodiment of the invention provides a sewing stitch defect detection method and system based on machine vision.
The embodiment of the invention provides a sewing stitch defect detection method based on machine vision, which comprises the following steps:
Collecting a sewing image, generating a corresponding motion-blurred image based on a random trajectory, and detecting whether the motion-blurred image is a low-score blurred image;
When the motion-blurred image is a low-score blurred image, extracting image features through a feature extraction module, carrying out co-scale feature fusion on the image features based on an AFF module, determining the image loss after feature fusion, and taking the loss function as the training basis to obtain a trained clear image;
Acquiring a standard stitch image, determining the student features corresponding to the stitch image, constructing an untrained student network based on the student features, and selecting a pre-trained teacher network associated with the student features;
Inputting the standard stitch image into the teacher network and the student network, acquiring a first feature map of the teacher network and a second feature map of the student network, mapping them to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis;
And inputting the clear image into the trained student network and teacher network to obtain an output feature difference score, comparing it with a score threshold, and judging whether the sewing image has a stitch defect.
In one embodiment, the method further comprises:
Defining a state space of a Markov process, and constructing a corresponding transition probability matrix;
Setting an initial state, generating corresponding trajectory points by combining the transition probability matrix, and inserting additional trajectory points by a sub-pixel interpolation technique while the trajectory points are generated;
and carrying out blur adjustment on the pixel values at the corresponding positions in the sewing image based on the distribution of the trajectory points, generating a corresponding motion-blurred image.
In one embodiment, the method further comprises:
Integrating a feature pyramid network, extracting image features at multiple scales through the feature extraction module, up-sampling or down-sampling the image features through the AFF module, and bringing the multi-scale image features to the same scale for feature fusion.
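As an illustration of this co-scale fusion, the following is a minimal PyTorch sketch and not the patent's exact module: the channel counts, the number of input scales and the 1×1 fusion convolution are all assumptions, and target_size is the common scale to which the features are brought.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AFFBlock(nn.Module):
    """Sketch of asymmetric feature fusion: resample multi-scale
    features to one common scale, then fuse them with a 1x1 conv.
    Channel counts and the fusion layer are illustrative choices."""
    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        self.fuse = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feats, target_size):
        # Up-sample or down-sample every feature map to target_size.
        resized = [F.interpolate(f, size=target_size, mode="bilinear",
                                 align_corners=False) for f in feats]
        # Concatenate along channels and fuse at the common scale.
        return self.fuse(torch.cat(resized, dim=1))
```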
In one embodiment, the method further comprises:
Detecting the image loss corresponding to the fused features, wherein the loss function of the image loss is:

L = L_P + L_X + L_{adv}

where L_P is the pixel-space loss, L_X is the content loss, and L_{adv} is the combined global and local discriminator loss.
In one embodiment, the method further comprises:
integrating a feature pyramid network, collecting first features of a teacher network and second features of a student network at different levels through a feature extraction module, and normalizing the first features and the second features;
subtracting the first feature map and the second feature map of the corresponding level after up-sampling them to the same scale, to determine the anomaly score of that level;
and adding the anomaly scores of the different levels to obtain per-pixel scores, and selecting the maximum among the per-pixel scores as the image anomaly score.
In one embodiment, the method further comprises:
Wherein the normalization is

\hat{F}_t^l(I)_{ij} = F_t^l(I)_{ij} / \lVert F_t^l(I)_{ij} \rVert_2, \quad \hat{F}_s^l(I)_{ij} = F_s^l(I)_{ij} / \lVert F_s^l(I)_{ij} \rVert_2

where \hat{F}_t^l(I)_{ij} is the first feature, \hat{F}_s^l(I)_{ij} is the second feature, I is the input image, F_t^l(I)_{ij} is the (i, j)-th element of the l-th layer feature map extracted by the teacher network t from the standard stitch image, \lVert \cdot \rVert_2 is the L2 norm measuring the length of a vector, and F_s^l(I)_{ij} is the (i, j)-th element of the l-th layer feature map extracted by the student network s.
In one embodiment, the student features include:
Texture features, edge features, color features.
The embodiment of the invention provides a sewing stitch defect detection system based on machine vision, which comprises the following steps:
The motion blurring module is used for collecting a sewing image, generating a corresponding motion-blurred image based on a random trajectory, and detecting whether the motion-blurred image is a low-score blurred image;
The deblurring module is used for extracting image features through the feature extraction module when the motion-blurred image is a low-score blurred image, carrying out co-scale feature fusion on the image features based on the AFF module, determining the image loss after feature fusion, and obtaining a trained clear image by taking the loss function as the training basis;
The teacher-student network module is used for acquiring a standard stitch image, determining the student features corresponding to the stitch image, constructing an untrained student network based on the student features, and selecting a pre-trained teacher network associated with the student features;
The student network training module is used for inputting the standard stitch image into the teacher network and the student network, acquiring a first feature map of the teacher network and a second feature map of the student network, mapping them to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis;
And the detection module is used for inputting the clear image into the trained student network and teacher network to obtain an output feature difference score, comparing it with a score threshold, and judging whether the sewing image has a stitch defect.
The embodiment of the invention provides electronic equipment, which comprises a processor and a memory;
the processor is connected with the memory;
The memory is used for storing executable program codes;
The processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the method of one or more of the above embodiments.
Embodiments of the present invention provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the machine vision-based sewing stitch defect detection method described above.
In view of the above, in one or more embodiments of the present specification, a sewing image is collected, a corresponding motion-blurred image is generated based on a random trajectory, and whether the motion-blurred image is a low-score blurred image is detected; when the motion-blurred image is a low-score blurred image, image features are extracted by a feature extraction module, co-scale feature fusion is carried out on the image features based on an AFF module, the image loss after feature fusion is determined, and the loss function is taken as the training basis to obtain a trained clear image; a standard stitch image is acquired, the student features corresponding to the stitch image are determined, an untrained student network is constructed based on the student features, and a pre-trained teacher network associated with the student features is selected; the standard stitch image is input into the teacher network and the student network, a first feature map of the teacher network and a second feature map of the student network are acquired and mapped to each other, abnormal regions and corresponding anomaly scores are determined based on the mapping result, and the student network is trained with the anomaly scores as the training basis; and the clear image is input into the trained student network and teacher network to obtain an output feature difference score, which is compared with a score threshold to judge whether the sewing image has a stitch defect. In this way, the method completes rapid detection of sewing stitches through the optimized algorithm and network structure, saves human resources, improves the accuracy and efficiency of the detection result, and at the same time solves the problem of image blurring through the deblurring module.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a machine vision-based sewing stitch defect detection method according to an embodiment of the present disclosure.
FIG. 2 is a block diagram of a deblurring algorithm provided in one embodiment of the present disclosure.
Fig. 3 is a flow chart of a teacher-student network model training provided in one embodiment of the present description.
Fig. 4 is a frame diagram of a teacher-student network model provided in one embodiment of the present description.
Fig. 5 is a schematic structural diagram of a sewing stitch defect detecting system based on machine vision according to an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It should be appreciated that these embodiments are discussed only to enable a person skilled in the art to better understand and thereby practice the subject matter described herein, and are not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure as set forth in the specification. Various examples may omit, replace, or add various procedures or components as desired. For example, the described methods may be performed in a different order than described, and various steps may be added, omitted, or combined. In addition, features described with respect to some examples may be combined in other examples as well.
As used herein, the term "comprising" and variations thereof are open-ended terms, meaning "including, but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first", "second", and the like may refer to different or the same objects. Other definitions, whether explicit or implicit, may be included below. Unless the context clearly indicates otherwise, the definition of a term is consistent throughout this specification.
As shown in fig. 1, an embodiment of the present invention provides a machine vision-based sewing stitch defect detection method, including:
step S101, collecting a sewing image, generating a corresponding motion blur image based on a random track, and detecting whether the motion blur image is a low-fraction blur image.
Specifically, a fabric sewing image requiring sewing stitch detection is collected, and a corresponding motion-blurred image is generated for the target sewing image. The motion-blurred image can be generated with a blur-data generation method based on random trajectories: a series of trajectory points is generated by a Markov process, additional points are inserted between the trajectory points by a sub-pixel interpolation technique, and the generated random trajectory is then applied to the original sewing image to simulate a motion blur effect. The specific steps can be divided into:
Defining the Markov process: first define the state space of the Markov process, where each state represents a possible pixel offset direction or intensity change; then construct a transition probability matrix describing the probability of transitioning from one state to another, ensuring that the matrix satisfies the Markov property.
Generating the random trajectory: select an initial state, usually a random point or the center point of the image; randomly select the next state according to the transition probability matrix; update the position of the trajectory point accordingly; and repeat these steps to generate a series of trajectory points.
Sub-pixel interpolation: insert additional points between the generated trajectory points using a sub-pixel interpolation technique (such as bilinear or bicubic interpolation) to obtain a smoother and finer blur effect.
Applying the blur: for each pixel in the sewing image, determine its corresponding position on, or the region of influence of, the generated random trajectory. Because the trajectory is generated randomly, some pixels fall directly on the trajectory while others must be interpolated from the nearest trajectory points: a pixel located directly on the trajectory is blurred according to the intensity of the trajectory point at its location, while for a pixel not on the trajectory, its blurred color value is estimated with an interpolation technique (such as bilinear or nearest-neighbor interpolation) from its distance to the nearest trajectory points. In addition, each pixel can be assigned a weight reflecting its relative position on the trajectory and the density of the surrounding trajectory points; all trajectory points affecting the pixel are weighted and averaged according to these weights to determine the final blurred pixel value. After the resampling and weighted-averaging processes are completed, each pixel in the image has a new value representing its color after blurring, and the new values of all pixels combine to form the whole blurred image.
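The following is a simplified sketch of this trajectory-based blur generation. The 8-direction state space, a transition matrix that favours keeping the current direction, the bilinear sub-pixel splat, and the kernel size, step count and step length are all illustrative assumptions, not the patent's exact parameters.

```python
import numpy as np
import cv2

def random_trajectory_kernel(size=31, steps=60, seed=None):
    """Generate a Markov-chain trajectory over 8 move directions and
    rasterize it into a normalized motion-blur kernel."""
    rng = np.random.default_rng(seed)
    dirs = np.array([(dx, dy) for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1) if (dx, dy) != (0, 0)], float)
    # Transition matrix: stay in the current direction with prob. 0.7,
    # otherwise switch uniformly (rows sum to 1, Markov property).
    P = np.full((8, 8), 0.3 / 7)
    np.fill_diagonal(P, 0.7)
    kernel = np.zeros((size, size), np.float32)
    pos = np.array([size / 2.0, size / 2.0])       # (x, y), sub-pixel
    state = rng.integers(8)
    for _ in range(steps):
        state = rng.choice(8, p=P[state])
        pos = np.clip(pos + 0.5 * dirs[state], 0, size - 1 - 1e-6)
        # Bilinear splat of the sub-pixel trajectory point onto the grid.
        x0, y0 = int(pos[0]), int(pos[1])
        fx, fy = pos[0] - x0, pos[1] - y0
        kernel[y0, x0] += (1 - fx) * (1 - fy)
        kernel[y0, x0 + 1] += fx * (1 - fy)
        kernel[y0 + 1, x0] += (1 - fx) * fy
        kernel[y0 + 1, x0 + 1] += fx * fy
    return kernel / kernel.sum()

sewing_image = cv2.imread("sewing.jpg")            # placeholder path
blurred = cv2.filter2D(sewing_image, -1, random_trajectory_kernel(seed=0))
```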
After the motion-blurred image is generated, it is detected whether it is a low-score blurred image: a sharpness score is computed for the motion-blurred image and used to judge whether the image is clear or blurred, and subsequent deblurring is applied only to low-score blurred images. The degree of blur/sharpness of the motion-blurred image can be judged by the Laplacian-variance method, which scores the edge information of the image: the image is converted from RGB color space to grayscale, the grayscale image is convolved with the Laplacian operator, the result of applying the Laplacian operator is treated as an edge-intensity map, and the variance of all pixel values of the edge-intensity map is computed. A large variance indicates strong edge-intensity variation, meaning the image contains rich detail and high sharpness; conversely, a small variance indicates a flat and possibly blurred image.
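A minimal OpenCV sketch of this Laplacian-variance scoring follows; the threshold value is an assumption to be tuned for the specific camera and fabric.

```python
import cv2

def sharpness_score(image_bgr):
    """Variance of the Laplacian response: low variance means weak
    edge intensity, i.e. a flat, likely blurred image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def is_low_score_blurred(image_bgr, threshold=100.0):
    # threshold is an assumed value; images scoring below it are
    # routed to the deblurring module.
    return sharpness_score(image_bgr) < threshold
```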
Step S102, when the motion-blurred image is a low-score blurred image, extracting image features through a feature extraction module, carrying out co-scale feature fusion on the image features based on an AFF module, and obtaining a trained clear image by taking the loss function corresponding to the image loss after feature fusion as the training basis.
Specifically, when the motion-blurred image is determined to be a low-score blurred image, a deblurring operation is required. The deblurring operation may include image feature extraction, image feature fusion (based on an AFF (Asymmetric Feature Fusion) module) and an adversarial training part, summarized as follows:
For the deblurring operation, image features can be extracted at multiple scales of the low-score blurred image by DeblurGAN combined with a Feature Pyramid Network (FPN). To improve detection accuracy, feature extraction is followed by a layer of Asymmetric Feature Fusion (AFF) modules, which connect features of different scales through up-sampling and down-sampling. This feature fusion layer performs a new fusion of the first group of multi-scale features extracted by the feature extraction module in the backbone network, rescaling images of different scales to the same scale before fusion; concretely, a new feature fusion can be performed on 4 groups of multi-scale features. For example, as shown in fig. 2, the pipeline comprises features extracted by the feature extraction module whose sizes differ pairwise by a factor of 2, followed by feature fusion, an up-sampling module and a convolution module. Finally, the loss function is designed through a discriminator; the discriminator adopts a dual-scale design, which captures image details more comprehensively and thereby improves the discrimination ability of the model and the image reconstruction quality. The loss function formula is:
L = L_P + L_X + L_{adv}

where L_P is the pixel-space loss, L_X is the content loss, and L_{adv} is the combined global and local discriminator loss. In addition, an orthogonality constraint can be added through the loss function to reduce the correlation between parameters during training: a regularization loss is added to the original adversarial loss and content loss. To prevent model overfitting, the following term is added:
L_{reg} = \alpha \lVert W W^T - I \rVert_F^2

wherein the weights in the deblurring model are collectively expressed as a matrix W, \alpha is the regularization-strength hyper-parameter, W W^T is the product of the weight matrix and its transpose, I is the identity matrix, and \lVert \cdot \rVert_F denotes the Frobenius norm, which measures the size of a matrix.
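A sketch of this orthogonality regularizer in PyTorch follows; applying the \alpha \lVert W W^T - I \rVert_F^2 penalty per weight tensor, and the value of alpha, are our assumptions rather than the patent's stated choices.

```python
import torch

def orthogonal_regularization(model, alpha=1e-4):
    """alpha * ||W W^T - I||_F^2 summed over 2D+ weight tensors."""
    penalty = 0.0
    for w in model.parameters():
        if w.dim() < 2:
            continue                     # skip biases and norm scales
        w2d = w.flatten(1)               # (out_channels, fan_in)
        gram = w2d @ w2d.t()
        eye = torch.eye(gram.size(0), device=w.device)
        penalty = penalty + ((gram - eye) ** 2).sum()   # squared Frobenius norm
    return alpha * penalty

# total_loss = l_pixel + l_content + l_adv + orthogonal_regularization(generator)
```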
Step S103, a standard stitch image is acquired, student characteristics corresponding to the stitch image are determined, an untrained student network is built based on the student characteristics, and a pre-trained teacher network associated with the student characteristics is selected.
Specifically, after the motion-blurred image is deblurred, stitch defect detection is performed on the clear image. Rapid detection of surface flaws on the sewn fabric is realized through a student-teacher feature pyramid matching method (ST-FPN) for unsupervised anomaly detection. First, the student-teacher feature pyramid is trained: the training objects of the student network, i.e. the high-resolution features in the stitch image such as texture, edges and color, are determined as the student (network) features; then a pre-trained teacher network associated with the student features is selected, such as an image-domain teacher network. The parameters of the teacher network can come from the corresponding values of a ResNet-34 pre-trained on ImageNet, i.e. the teacher model is pre-trained on the large-scale ImageNet dataset and has learned to extract rich features from images, while the parameters of the student network are randomly initialized.
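A short sketch of this teacher/student setup, assuming a recent torchvision that exposes the ResNet34_Weights API; freezing the teacher during training is our assumption, consistent with the scheme described below.

```python
from torchvision.models import resnet34, ResNet34_Weights

# Teacher: ResNet-34 pre-trained on ImageNet, frozen during training.
teacher = resnet34(weights=ResNet34_Weights.IMAGENET1K_V1).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Student: same architecture with randomly initialized parameters.
student = resnet34(weights=None)
```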
Step S104, inputting the standard stitch image into the teacher network and the student network, mapping the first features of the teacher network and the second features of the student network to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis.
Specifically, in the teacher-student network framework, a feature pyramid (FPN) may be combined: a pyramid feature extractor collects the first features of the teacher layers and the second features of the student layers at different scales; after the first and second features are normalized, the features are mapped to each other; and after the mutually mapped features are up-sampled to the same scale, the corresponding feature anomalies after mapping are determined so as to determine the anomaly scores. The general process of training the student network through the anomaly scores may include:
s301, selecting a teacher model and constructing a student model;
A pre-trained deep network that performs well on the detection task at hand, such as an image detection task, is selected as the teacher model; such models generally have strong feature expression capabilities, for example the ResNet, DenseNet or EfficientNet series. Then a student model with a simpler structure but an architecture similar to the teacher model is designed; specifically, high-resolution features are used as model features, and the numbers of layers and channels are kept reduced so as to suit resource-constrained environments.
S302, constructing a feature pyramid based on the teacher model and the student model;
For both networks, a Feature Pyramid Network (FPN) structure may be used to fuse the layers of features through upsampling and downsampling operations to form a multi-scale feature map. This ensures that the features of each scale are available.
S303, collecting the first features of the teacher network and the second features of the student network based on the feature pyramid, and mapping the first features of the teacher network to the second features of the student network;
For example, when the teacher model is a ResNet, the first three feature-collection stages of the ResNet (i.e. conv2_x, conv3_x, conv4_x) may be selected as the pyramid feature extractor to collect the corresponding features, gathering the feature maps of the teacher layers and the student layers at different scales. The feature maps are then L2-normalized so that each feature vector has unit length in Euclidean space, where the calculation formula for normalizing the first feature and the second feature is:
\hat{F}_t^l(I)_{ij} = F_t^l(I)_{ij} / \lVert F_t^l(I)_{ij} \rVert_2, \quad \hat{F}_s^l(I)_{ij} = F_s^l(I)_{ij} / \lVert F_s^l(I)_{ij} \rVert_2

wherein \hat{F}_t^l(I)_{ij} is the first feature, \hat{F}_s^l(I)_{ij} is the second feature, I is the input image, F_t^l(I)_{ij} is the (i, j)-th element of the l-th layer feature map extracted by the teacher network t from the standard stitch image, \lVert \cdot \rVert_2 is the L2 norm measuring the length of a vector, and F_s^l(I)_{ij} is the (i, j)-th element of the l-th layer feature map extracted by the student network s. During testing, the three middle-layer feature maps of the two networks s and t are subtracted correspondingly.
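Continuing the teacher/student sketch above, the pyramid features can be collected with torchvision's feature-extraction utility and normalized per the formula; mapping conv2_x/conv3_x/conv4_x to torchvision's layer1–layer3 is our reading of the ResNet stage names.

```python
import torch.nn.functional as F
from torchvision.models.feature_extraction import create_feature_extractor

# conv2_x, conv3_x and conv4_x correspond to layer1..layer3 in
# torchvision's ResNet naming.
nodes = {"layer1": "l1", "layer2": "l2", "layer3": "l3"}
teacher_fx = create_feature_extractor(teacher, return_nodes=nodes)
student_fx = create_feature_extractor(student, return_nodes=nodes)

def l2_normalize(feat):
    # Unit-length channel vector at every spatial position (i, j).
    return F.normalize(feat, p=2, dim=1)
```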
S304, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis;
When the features are mapped and correspondingly subtracted, they must first be up-sampled to the same size, because the features collected at different levels (such as those extracted by conv2_x, conv3_x and conv4_x in the previous steps) have different sizes. The anomaly scores in the mapping result are determined after up-sampling to the same size: the anomaly scores of the levels in the up-sampled result can be determined and added, yielding an anomaly score at every pixel of the image, and the maximum value is selected as the anomaly score of the image. The specific anomaly score formula can be:
Score(J) = \max_{i,j} \sum_l \Omega^l(J)_{ij}, \quad \Omega^l(J)_{ij} = \tfrac{1}{2} \lVert \hat{F}_t^l(J)_{ij} - \hat{F}_s^l(J)_{ij} \rVert_2^2

wherein Score(J) represents the anomaly score of image J: the anomaly scores of all levels are added to obtain the final score at each pixel point, and the maximum over pixels is taken as the image score. Feedback is given according to this final score; the network weights of the student network are trained with the iteration goal of making the anomaly score as small as possible, and the finally trained student network is thereby determined.
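Putting the pieces together, the following is a hedged sketch of the per-pixel anomaly map and the student training step, building on teacher_fx/student_fx above; the half-squared-distance form of the per-level score and the optimizer handling are assumptions.

```python
import torch
import torch.nn.functional as F

def anomaly_map(t_feats, s_feats, out_size):
    """Sum over levels of 0.5 * ||t_hat - s_hat||^2 per pixel,
    after up-sampling every level to the same size."""
    total = 0.0
    for key in t_feats:
        t = F.normalize(t_feats[key], p=2, dim=1)
        s = F.normalize(s_feats[key], p=2, dim=1)
        diff = 0.5 * (t - s).pow(2).sum(dim=1, keepdim=True)
        total = total + F.interpolate(diff, size=out_size,
                                      mode="bilinear", align_corners=False)
    return total                              # shape (N, 1, H, W)

def train_step(batch, optimizer):
    # Minimize the mean anomaly on defect-free standard stitch images.
    with torch.no_grad():
        t_feats = teacher_fx(batch)
    s_feats = student_fx(batch)
    loss = anomaly_map(t_feats, s_feats, batch.shape[-2:]).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The image-level score is then the spatial maximum of the summed map, as in the formula above.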
Step S105, inputting the clear image into the trained student network and teacher network to obtain an output feature difference score, comparing it with a score threshold, and judging whether the sewing image has a stitch defect.
Specifically, the deblurred clear image is input into the trained student network and teacher network; the output feature difference score can be obtained and compared with a preset score threshold, where the score threshold can be set to 0 or a small value greater than 0. When the feature difference score is greater than the score threshold, the sewing image has a stitch defect and needs to be reprocessed.
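Reusing the sketches above, the detection step might look as follows; the threshold value is an assumed placeholder.

```python
import torch

def has_stitch_defect(image_tensor, score_threshold=0.1):
    # Image-level score = maximum per-pixel anomaly over the map.
    with torch.no_grad():
        t_feats = teacher_fx(image_tensor)
        s_feats = student_fx(image_tensor)
        scores = anomaly_map(t_feats, s_feats,
                             image_tensor.shape[-2:]).amax(dim=(-2, -1))
    return scores > score_threshold           # boolean per image
```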
In addition, the process by which the student network is trained through the teacher network and image detection is performed through the teacher network and the student network may be as shown in fig. 4, which includes the training process of the student network and the detection process of the test image (the deblurred image).
The embodiment of the invention provides a machine-vision-based sewing stitch defect detection method: a sewing image is collected, a corresponding motion-blurred image is generated based on a random trajectory, and whether the motion-blurred image is a low-score blurred image is detected; when the motion-blurred image is a low-score blurred image, image features are extracted by a feature extraction module, co-scale feature fusion is carried out on the image features based on an AFF module, the image loss after feature fusion is determined, and the loss function is taken as the training basis to obtain a trained clear image; a standard stitch image is acquired, the student features corresponding to the stitch image are determined, an untrained student network is constructed based on the student features, and a pre-trained teacher network associated with the student features is selected; the standard stitch image is input into the teacher network and the student network, a first feature map of the teacher network and a second feature map of the student network are acquired and mapped to each other, abnormal regions and corresponding anomaly scores are determined based on the mapping result, and the student network is trained with the anomaly scores as the training basis; and the clear image is input into the trained student network and teacher network to obtain an output feature difference score, which is compared with a score threshold to judge whether the sewing image has a stitch defect. In this way, the method completes rapid detection of sewing stitches through the optimized algorithm and network structure, saves human resources, improves the accuracy and efficiency of the detection result, and at the same time solves the problem of image blurring through the deblurring module.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a sewing stitch defect detecting system based on machine vision according to an embodiment of the present application. As shown in fig. 5, the system includes:
The motion blurring module S501 is used for collecting a sewing image, generating a corresponding motion-blurred image based on a random trajectory, and detecting whether the motion-blurred image is a low-score blurred image;
The deblurring module S502 is used for extracting image features through the feature extraction module when the motion-blurred image is a low-score blurred image, carrying out co-scale feature fusion on the image features based on the AFF module, determining the image loss after feature fusion, and obtaining a trained clear image by taking the loss function as the training basis;
The teacher-student network module S503 is used for acquiring a standard stitch image, determining the student features corresponding to the stitch image, constructing an untrained student network based on the student features, and selecting a pre-trained teacher network associated with the student features;
The student network training module S504 is used for inputting the standard stitch image into the teacher network and the student network, acquiring a first feature map of the teacher network and a second feature map of the student network, mapping them to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis;
The detection module S505 is used for inputting the clear image into the trained student network and teacher network to obtain an output feature difference score, comparing it with a score threshold, and determining whether the sewing image has a stitch defect.
It will be clear to those skilled in the art that the technical solutions of the embodiments of the present application may be implemented by means of software and/or hardware. "Unit" and "module" in this specification refer to software and/or hardware capable of performing a particular function, either alone or in combination with other components, such as field-programmable gate arrays (FPGA), integrated circuits (IC), and the like.
The processing units and/or modules of the embodiments of the present application may be implemented by an analog circuit that implements the functions described in the embodiments of the present application, or may be implemented by software that executes the functions described in the embodiments of the present application.
Referring to fig. 6, a schematic structural diagram of an electronic device according to an embodiment of the present application is shown; the electronic device may be used to implement the method in the embodiment shown in fig. 1. As shown in fig. 6, the electronic device 600 may include: at least one processor 601, at least one network interface 604, a user interface 603, a memory 605, and at least one communication bus 602.
Wherein the communication bus 602 is used to enable connected communications between these components.
The user interface 603 may include a display screen (Display) and a camera (Camera); optionally, the user interface 603 may further include a standard wired interface and a wireless interface.
The network interface 604 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 601 may include one or more processing cores. The processor 601 connects various parts within the overall electronic device 600 using various interfaces and lines, and performs various functions of the device 600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 605 and invoking data stored in the memory 605. Optionally, the processor 601 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 601 may integrate one of, or a combination of several of, a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 601 and may instead be implemented by a separate chip.
The memory 605 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 605 includes a non-transitory computer-readable storage medium. The memory 605 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 605 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data involved in the above method embodiments. The memory 605 may optionally also be at least one storage device located remotely from the processor 601. As shown in fig. 6, the memory 605, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and program instructions.
In the electronic device 600 shown in fig. 6, the user interface 603 is mainly used for providing an input interface for the user and acquiring data input by the user, and the processor 601 may be configured to invoke the image-based interactive application stored in the memory 605 and specifically perform the following operations: collecting a sewing image, generating a corresponding motion-blurred image based on a random trajectory, and detecting whether the motion-blurred image is a low-score blurred image; when the motion-blurred image is a low-score blurred image, extracting image features by a feature extraction module, carrying out co-scale feature fusion on the image features based on an AFF module, determining the image loss after feature fusion, and taking the loss function as the training basis to obtain a trained clear image; acquiring a standard stitch image, determining the student features corresponding to the stitch image, constructing an untrained student network based on the student features, and selecting a pre-trained teacher network associated with the student features; inputting the standard stitch image into the teacher network and the student network, acquiring a first feature map of the teacher network and a second feature map of the student network, mapping them to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis; and inputting the clear image into the trained student network and teacher network to obtain an output feature difference score, comparing it with a score threshold, and judging whether the sewing image has a stitch defect.
The present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method. The computer-readable storage medium may include, among other things, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, micro-drives, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some service interfaces, devices or units, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on this understanding, the technical solution of the present application may be embodied essentially or partly in the form of a software product, or all or part of the technical solution, which is stored in a memory, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be performed by hardware associated with a program that is stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.

Claims (10)

1. The machine vision-based sewing stitch defect detection method is characterized by comprising the following steps of:
Collecting a sewing image, generating a corresponding motion-blurred image based on a random trajectory, and detecting whether the motion-blurred image is a low-score blurred image;
When the motion-blurred image is a low-score blurred image, extracting image features through a feature extraction module, carrying out co-scale feature fusion on the image features based on an AFF module, determining the image loss after feature fusion, and taking the loss function as the training basis to obtain a trained clear image;
Acquiring a standard stitch image, determining the student features corresponding to the stitch image, constructing an untrained student network based on the student features, and selecting a pre-trained teacher network associated with the student features;
Inputting the standard stitch image into the teacher network and the student network, acquiring a first feature map of the teacher network and a second feature map of the student network, mapping them to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis;
And inputting the clear image into the trained student network and teacher network to obtain an output feature difference score, comparing it with a score threshold, and judging whether the sewing image has a stitch defect.
2. The machine vision-based sewing stitch defect detection method as recited in claim 1, wherein the generating a corresponding motion-blurred image based on a random trajectory comprises:
Defining a state space of a Markov process, and constructing a corresponding transition probability matrix;
Setting an initial state, generating corresponding trajectory points by combining the transition probability matrix, and inserting additional trajectory points by a sub-pixel interpolation technique while the trajectory points are generated;
and carrying out blur adjustment on the pixel values at the corresponding positions in the sewing image based on the distribution of the trajectory points, generating a corresponding motion-blurred image.
3. The machine vision-based sewing stitch defect detection method of claim 1, wherein the extracting image features through a feature extraction module and carrying out co-scale feature fusion on the image features based on an AFF module comprises:
Integrating a feature pyramid network, extracting image features at multiple scales through the feature extraction module, up-sampling or down-sampling the image features through the AFF module, and bringing the multi-scale image features to the same scale for feature fusion.
4. The machine vision-based sewing stitch defect detection method as claimed in claim 3, wherein the obtaining the trained clear image by using the loss function corresponding to the image loss after feature fusion as a training basis comprises:
Detecting the image loss corresponding to the fused features, wherein the loss function of the image loss is:

L = L_P + L_X + L_{adv}

where L_P is the pixel-space loss, L_X is the content loss, and L_{adv} is the combined global and local discriminator loss.
5. The machine vision-based sewing stitch defect detection method as claimed in claim 1, wherein the acquiring a first feature map of the teacher network and a second feature map of the student network and mapping them to each other, and the determining abnormal regions and corresponding anomaly scores based on the mapping result, comprise:
integrating a feature pyramid network, collecting first features of a teacher network and second features of a student network at different levels through a feature extraction module, and normalizing the first features and the second features;
subtracting the first feature map and the second feature map of the corresponding level after up-sampling them to the same scale, to determine the anomaly score of that level;
and adding the anomaly scores of the different levels to obtain per-pixel scores, and selecting the maximum among the per-pixel scores as the image anomaly score.
6. The machine vision-based sewing stitch defect detection method of claim 5, wherein the calculation formula for normalizing the first feature and the second feature comprises:

\hat{F}_t^l(I)_{ij} = F_t^l(I)_{ij} / \lVert F_t^l(I)_{ij} \rVert_2, \quad \hat{F}_s^l(I)_{ij} = F_s^l(I)_{ij} / \lVert F_s^l(I)_{ij} \rVert_2

wherein \hat{F}_t^l(I)_{ij} is the first feature, \hat{F}_s^l(I)_{ij} is the second feature, I is the input image, F_t^l(I)_{ij} is the (i, j)-th element of the l-th layer feature map extracted by the teacher network t from the standard stitch image, \lVert \cdot \rVert_2 is the L2 norm measuring the length of a vector, and F_s^l(I)_{ij} is the (i, j)-th element of the l-th layer feature map extracted by the student network s.
7. The machine vision-based sewing stitch defect detection method of claim 1, wherein the student features comprise:
Texture features, edge features, color features.
8. A machine vision-based sewing stitch defect detection system, the system comprising:
The motion blurring module is used for collecting a sewing image, generating a corresponding motion-blurred image based on a random trajectory, and detecting whether the motion-blurred image is a low-score blurred image;
The deblurring module is used for extracting image features through the feature extraction module when the motion-blurred image is a low-score blurred image, carrying out co-scale feature fusion on the image features based on the AFF module, determining the image loss after feature fusion, and obtaining a trained clear image by taking the loss function as the training basis;
The teacher-student network module is used for acquiring a standard stitch image, determining the student features corresponding to the stitch image, constructing an untrained student network based on the student features, and selecting a pre-trained teacher network associated with the student features;
The student network training module is used for inputting the standard stitch image into the teacher network and the student network, acquiring a first feature map of the teacher network and a second feature map of the student network, mapping them to each other, determining abnormal regions and corresponding anomaly scores based on the mapping result, and training the student network with the anomaly scores as the training basis;
And the detection module is used for inputting the clear image into the trained student network and teacher network to obtain an output feature difference score, comparing it with a score threshold, and judging whether the sewing image has a stitch defect.
9. An electronic device, comprising a processor and a memory;
the processor is connected with the memory;
The memory is used for storing executable program codes;
The processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the method according to any one of claims 1-7.
10. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1-7.
CN202410889988.5A 2024-07-04 2024-07-04 Sewing stitch defect detection method and system based on machine vision Pending CN118411369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410889988.5A CN118411369A (en) 2024-07-04 2024-07-04 Sewing stitch defect detection method and system based on machine vision


Publications (1)

Publication Number Publication Date
CN118411369A 2024-07-30

Family

ID=92032728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410889988.5A Pending CN118411369A (en) 2024-07-04 2024-07-04 Sewing stitch defect detection method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN118411369A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230368372A1 (en) * 2021-12-03 2023-11-16 Contemporary Amperex Technology Co., Limited Fast anomaly detection method and system based on contrastive representation distillation
US20240119571A1 (en) * 2022-09-27 2024-04-11 Korea University Research And Business Foundation Knowledge distillation-based system for learning of teacher model and student model
CN117173131A (en) * 2023-09-05 2023-12-05 天津大学 Abnormality detection method based on distillation and memory bank guide reconstruction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAOHU ZHENG et al.: "Rapid Detection Technology of Sewing Thread Based on Deblurgan-Bsv3 Defuzzification Algorithm and St-Fpn Detection Algorithm", pages 2-20, retrieved from the Internet <URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4756213> *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination