CN116486116B - Machine vision-based method for detecting abnormality of hanging machine for clothing processing

Machine vision-based method for detecting abnormality of hanging machine for clothing processing

Info

Publication number
CN116486116B
CN116486116B (application CN202310714740.0A)
Authority
CN
China
Prior art keywords
pixel point
feature
current
central pixel
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310714740.0A
Other languages
Chinese (zh)
Other versions
CN116486116A (en)
Inventor
王宜明
王艳玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jining Daai Garment Co ltd
Original Assignee
Jining Daai Garment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jining Daai Garment Co ltd filed Critical Jining Daai Garment Co ltd
Priority to CN202310714740.0A priority Critical patent/CN116486116B/en
Publication of CN116486116A publication Critical patent/CN116486116A/en
Application granted granted Critical
Publication of CN116486116B publication Critical patent/CN116486116B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The application relates to the technical field of machine vision, and in particular to a machine vision-based method for detecting abnormalities of a hanging machine for clothing processing. The method specifically comprises: photographing the clothing hung on the hanging machine with a camera arranged on the hanging machine, and converting the captured image into a gray image; extracting current feature information of the gray image, where the feature information comprises a current texture feature and a current spatial position feature; calculating a first similarity between the current feature information and reference feature information; and, if the first similarity is smaller than a first preset value, issuing an early warning so that the currently hung clothing can be handled. The method improves the matching accuracy for clothing and monitors abnormalities in the hanging-machine workflow for clothing processing more accurately.

Description

Machine vision-based method for detecting abnormality of hanging machine for clothing processing
Technical Field
The application relates to the technical field of machine vision, in particular to a machine vision-based method for detecting abnormality of a hanger for clothing processing.
Background
The clothing industry in China is enormous in scale; China is the world's largest producer and seller of textiles and clothing. With the continuous development of society, the traditional clothing production mode has exposed many shortcomings, such as continuously rising labor costs, low labor productivity, and uneven quality of finished products.
Correspondingly, the transformation and upgrading of the clothing industry in China is accelerating: more and more enterprises are moving toward intelligent production and aligning with international practice to cope with the future development of the clothing market. Clothing production and processing cannot do without intelligent, automated production lines; a reliable and well-designed line makes the whole workflow faster and more efficient, saves cost, and improves finished-product quality. At present, the hanging machine for clothing processing operates as a "production line": the hanging mode provides convenience for clothing processing, and the clothing is transported to the next process after one process is finished. During processing, however, workers take the clothing down and hang it back on the hanging machine after finishing a process so that it can be transported onward, and during this handling and the final warehousing, missed hanging, fallen clothing, processing damage, or incorrect hanging positions can occur.
Therefore, the existing methods for detecting hanging-machine abnormalities need improvement.
Disclosure of Invention
The application provides a machine vision-based method for detecting abnormalities of a hanging machine for clothing processing, which improves the matching accuracy for clothing and monitors abnormalities in the hanging-machine workflow for clothing processing more accurately.
In a first aspect, the present application provides a machine vision-based method for detecting abnormalities of a hanger for clothing processing, including:
shooting clothes hung on the hanging machine by using a camera arranged on the hanging machine, and converting the shot image into a gray image;
extracting current characteristic information of the gray image, wherein the characteristic information comprises current texture characteristics and current spatial position characteristics;
calculating a first similarity between the current feature information and the reference feature information;
and if the first similarity is smaller than a first preset value, sending out early warning so as to process the currently hung clothing.
In an alternative implementation, extracting current feature information of the gray scale image includes:
calculating gray scale differences of neighborhood pixel points in the central pixel point area by utilizing a feature detection algorithm BRIEF so as to obtain the current spatial position feature;
comparing gray values of the central pixel point and the neighborhood pixel points in the central pixel point area by utilizing a local binary algorithm LBP so as to obtain the current texture characteristics;
and combining the current spatial position characteristic with the current texture characteristic to obtain the current characteristic information.
In an alternative implementation, the representation of the current texture feature and the representation of the current spatial location feature are both binary strings, so that the current spatial location feature is combined with the current texture feature.
In an optional implementation, the central pixel area is a preset area centered on the central pixel;
the central pixel point area in the characteristic detection algorithm BRIEF is the same as the central pixel point area in the local binary algorithm LBP in size.
In an alternative implementation, the representation of the current texture feature and the representation of the current spatial location feature are both binary strings, and the number of bits of the binary strings is the same.
In an alternative implementation, comparing the gray value of the central pixel point with the gray value of the neighboring pixel points in the central pixel point area by using a local binary algorithm LBP to obtain the current texture feature, including:
calculating a first initial texture feature of the central pixel point and a second initial texture feature of each neighborhood pixel point in the central pixel point area by using the C-T operator matrix; wherein C represents a central pixel point and T represents texture;
calculating a second similarity of the first initial texture feature and each of the second initial texture features;
the current texture feature is determined based on the second similarity.
In an alternative implementation, calculating a first initial texture feature of the center pixel point and a second initial texture feature of each neighboring pixel point in the center pixel point region using the C-T operator matrix includes:
calculating the number of 1's in the C-T operator matrix, and taking the number as a first identifier for representing the texture feature direction of the central pixel point; performing exclusive-or calculation on the C-T operator matrix, counting the number of 1's in the C-T operator matrix after the exclusive-or calculation, and taking the number as a second identifier for identifying the inclination degree of the texture features;
calculating a symbiotic matrix corresponding to the center pixel point based on the first identifier and the second identifier of the center pixel point;
performing class concentration calculation on the co-occurrence matrix to obtain a calculation result, wherein the calculation result respectively comprises an entropy value corresponding to the first identifier and an entropy value corresponding to the second identifier;
and calculating based on the entropy value corresponding to the first identifier and the entropy value corresponding to the second identifier in the calculation result to obtain a cross-correlation coefficient of the central pixel point and a cross-correlation coefficient of each neighborhood pixel point in the central pixel point area, wherein the cross-correlation coefficient of the central pixel point is determined to be a first initial texture feature, and the cross-correlation coefficient of each neighborhood pixel point in the central pixel point area is determined to be a second initial texture feature.
In an optional implementation, combining the current spatial location feature with the current texture feature to obtain the current feature information includes:
determining a spatial location parameter between the current texture feature and the current spatial location feature;
and placing the spatial position parameter before the combination of the current spatial position feature and the current texture feature, so as to obtain the current feature information.
In an alternative implementation, if the first similarity is smaller than a second preset value, an early warning is sent out.
In an alternative implementation, after shooting the garment hung on the hanging machine by using a camera arranged on the hanging machine and converting the shot image into a gray-scale image, the method comprises the following steps:
denoising the gray level image by using a median filtering method;
and performing foreground and background separation on the denoised image by using a threshold segmentation algorithm to extract a clothing region.
Distinct from the prior art, the beneficial effects of the application are as follows. The method comprises: photographing the clothing hung on the hanging machine with a camera arranged on the hanging machine, and converting the captured image into a gray image; extracting current feature information of the gray image, where the feature information comprises a current texture feature and a current spatial position feature; calculating a first similarity between the current feature information and reference feature information; and, if the first similarity is smaller than a first preset value, issuing an early warning so that the currently hung clothing can be handled. In this way, the method improves the matching accuracy for clothing and monitors abnormalities in the hanging-machine workflow for clothing processing more accurately.
Drawings
FIG. 1 is a flow chart of an embodiment of a machine vision based method for detecting anomalies in a hanger for clothing processing according to the present application;
FIG. 2 is a flowchart illustrating an embodiment of step S12 in FIG. 1;
FIG. 3 is a schematic diagram of a pixel window;
FIG. 4 is a flowchart illustrating an embodiment of the step S22 in FIG. 2;
fig. 5 is a schematic flow chart of an embodiment of step S221 in fig. 4.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of the present application.
The present application will be described in detail with reference to the accompanying drawings and examples.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of a machine vision-based method for detecting an abnormality of a hanger for clothing processing according to the present application, the method includes:
step S11: and shooting the clothing hung on the hanging machine by using a camera arranged on the hanging machine, and converting the shot image into a gray image.
Specifically, for the different links of the production line, such as midway transportation or final warehousing, a suitable position on the hanging machine for clothing processing is selected to fix the camera, and an LED floodlight matched with the camera position is adopted so that the light source illuminates the clothing uniformly and stably; a high-definition industrial camera then captures a front view of the clothing on the hanging machine.
Further, the clothing image on the hanging rack captured by the industrial camera is preprocessed, and the image is converted into a gray image by the mean-value method, which yields a softer gray image.
Since the image is captured in a clothing production workshop, where the dominant noise type is Gaussian noise, the gray image is denoised using a median filtering method after the captured image is converted into a gray image. A threshold segmentation algorithm then separates the foreground and background of the denoised image to extract the clothing region.
Specifically, in this embodiment, the OTSU algorithm is used to binarize the image and separate its foreground and background, and the separated foreground area is restored to a gray image, that is, a complete clothing image.
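As an illustration of this preprocessing chain, the following is a minimal sketch using OpenCV and NumPy; the function name and the 3x3 median kernel are assumptions, not specified by the patent.

import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    # Mean-method grayscale conversion: average the three color channels.
    gray = frame_bgr.mean(axis=2).astype(np.uint8)
    # Median filtering to suppress noise (3x3 kernel assumed here).
    denoised = cv2.medianBlur(gray, 3)
    # OTSU thresholding separates foreground (clothing) from background;
    # depending on lighting, the mask may need inverting.
    _, mask = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Restore the separated foreground region to a gray image.
    clothing = cv2.bitwise_and(denoised, denoised, mask=mask)
    return clothing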
Step S12: extracting current characteristic information of the gray image, wherein the characteristic information comprises current texture characteristics and current spatial position characteristics.
Referring to fig. 2, fig. 2 is a schematic flow chart of an embodiment of step S12 in fig. 1. Step S12 includes:
step S21: and calculating the gray scale difference of the neighborhood pixel points in the central pixel point area by using a feature detection algorithm BRIEF so as to obtain the current spatial position feature.
Specifically, the ORB-improved FAST algorithm uses the image pyramid technique to achieve scale invariance. In the actual application scene, however, the camera position is fixed, so the size of the same garment does not change, and the image pyramid's robustness to scale change is not needed. The BRIEF description algorithm used in the ORB algorithm still maintains a relatively high matching rate when the deflection is less than 30°, and for the hangers of a hanging machine for clothing processing the inclination angle does not exceed 30°, so the BRIEF description algorithm can be used in an embodiment of the application.
In another embodiment of the present application, the rBRIEF feature descriptor algorithm in the ORB algorithm may be used instead; rBRIEF modifies the BRIEF feature descriptor algorithm by adding a rotation factor.
The result calculated by the BRIEF algorithm is a binary-string feature descriptor: pixel point pairs in the neighborhood are compared to generate a binary string of length n, where n is generally 128, 256, or 512; considering the real-time computation and accuracy required in this scene, 256 pixel point pairs are selected. The BRIEF algorithm offers five methods for selecting point pairs in the feature point region, which will not be detailed here one by one; in the present application, point pairs (x, y) are selected in the S×S feature point region, with both coordinates drawn from a Gaussian distribution N(0, S²/25), which is sufficient.
In the above step, the feature detection algorithm BRIEF calculates the gray-level differences of neighborhood pixel points in the central pixel point area to obtain the current spatial position feature. The descriptors of the current spatial position feature are binary strings, and matching only requires computing the Hamming distance between feature descriptors; that is, the Hamming distance represents the similarity between descriptors. Thanks to this binary representation, the ORB algorithm maps well onto computer hardware, making it very efficient for image matching. However, the BRIEF descriptor is generated from the gray-level differences of pixels in the area around the feature point, which captures the spatial position feature but is not effective for images with complex texture, because the gray-level differences around a feature point do not represent the texture information of the image well. Therefore, to improve the discrimination of the descriptors, the BRIEF algorithm is supplemented with the local binary pattern algorithm LBP.
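For concreteness, a minimal sketch of BRIEF-style sampling and Hamming matching follows, assuming the Gaussian point-pair scheme described above with S = 31; the names and the clipping of samples to the patch are illustrative.

import numpy as np

rng = np.random.default_rng(0)
S = 31
# Fixed pattern: 256 point pairs, coordinates relative to the patch center,
# drawn from N(0, S^2/25) and clipped to stay inside the 31x31 patch.
pattern = np.clip(rng.normal(0, S / 5, size=(256, 2, 2)), -(S // 2), S // 2).astype(int)

def brief_descriptor(gray: np.ndarray, cx: int, cy: int) -> np.ndarray:
    # bit_i = 1 if intensity at the first point is below that at the second.
    bits = np.empty(256, dtype=np.uint8)
    for i, ((dx1, dy1), (dx2, dy2)) in enumerate(pattern):
        bits[i] = gray[cy + dy1, cx + dx1] < gray[cy + dy2, cx + dx2]
    return bits

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    # Number of differing bits; smaller means more similar descriptors.
    return int(np.count_nonzero(a != b))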
Step S22: and comparing the gray values of the central pixel point and the neighborhood pixel points in the central pixel point area by using a local binary algorithm LBP so as to obtain the current texture characteristics.
In a local gray image of the clothing, texture features can be clearly observed; if a supplementary description of the texture features is added to the descriptor, feature point matching becomes more accurate. Therefore, the application also uses the local binary algorithm LBP to compare the gray values of the central pixel point and the neighborhood pixel points in the central pixel point area, so as to obtain the current texture feature.
Specifically, the local binary pattern LBP is a decimal number determined by comparing the gray value of the central pixel point with the gray values of the neighborhood pixel points; a binary string is generated in the actual comparison process and is normally converted into a single decimal number used as the LBP code. Since this intermediate binary string has the same digital form as the BRIEF descriptor, the binary string is not converted into a decimal LBP code during the LBP calculation here; instead, the binary string directly represents the texture at the point, which facilitates the subsequent combination with the BRIEF descriptor to supplement the feature point description. That is, the representations of the current texture feature and the current spatial position feature are both binary strings, so that the current spatial position feature can be combined with the current texture feature.
Specifically, in the original BRIEF algorithm the value of S is 31, that is, pixel pairs are determined in a 31×31 pixel area centered on the feature point, while the neighborhood range of the center point in the original LBP algorithm is 3×3. To unify and optimize the calculation, the neighborhood of the central pixel point selected in the LBP algorithm is also changed to a 31×31 pixel area. The central pixel point area in the feature detection algorithm BRIEF is thus the same size as the central pixel point area in the local binary algorithm LBP, where the central pixel point area is a preset area centered on the central pixel point.
In the application, the representations of the current texture feature and the current spatial position feature are binary strings with the same number of bits. In a specific embodiment, both the LBP code and the BRIEF descriptor are 256-bit binary strings.
Specifically, in a general scenario, the binary string corresponding to the LBP code is typically 8 to 16 bits; here it is extended to 256 bits. Since matching of the LBP and BRIEF binary sequences only uses Hamming distance calculation, the length of the final binary string does not add significant delay to the matching process, but the procedure for determining the binary string in the LBP algorithm is affected by the extension, so the selection of the sampling points for the 256-bit LBP string is discussed below.
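The extended LBP string can be sketched as follows, assuming the 256 sampling offsets are supplied externally (their selection is the subject of the following paragraphs); names are illustrative.

import numpy as np

def lbp_binary_string(gray: np.ndarray, cx: int, cy: int,
                      offsets: np.ndarray) -> np.ndarray:
    # offsets: (256, 2) array of (dy, dx) sampling positions with |d| <= 15,
    # i.e. inside the 31x31 window centered on (cx, cy).
    center = gray[cy, cx]
    bits = np.empty(len(offsets), dtype=np.uint8)
    for i, (dy, dx) in enumerate(offsets):
        # 1 if the sampled pixel is at least as bright as the center pixel;
        # the string is kept in binary form, never converted to decimal.
        bits[i] = gray[cy + dy, cx + dx] >= center
    return bits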
The corner points in the image, namely points where the curvature changes sharply, are determined by the FAST algorithm; they are located not only at the edges of the clothing but also at pockets, labels, and the like on the clothing.
Referring to fig. 3, a feature point (corner point) on the clothing can be in two positions: at the edge of the clothing, as at point A, or away from the edge, as at point B. For point A, the texture features of the blank background portion need not be considered at all; for point B, the non-shadowed portion does not belong to the texture region of the foreground where point B lies. Hence, in both cases, only the gray values of the shadowed region need to be considered when sampling the LBP binary sequence of the feature point. Observing the texture features around the feature points shows that the edge regions forming a feature point share the same texture feature vector; that is, the sampling-point region of the LBP algorithm can be determined by characterizing the texture vector of the feature point.
Referring to fig. 4, fig. 4 is a schematic flow chart of an embodiment of step S22 in fig. 2; specifically, step S22 includes:
step S221: calculating a first initial texture feature of the central pixel point and a second initial texture feature of each neighborhood pixel point in the central pixel point area by using the C-T operator matrix; where C represents the center pixel and T represents the texture.
First, the LBP algorithm compares every pixel point in the 31×31 neighborhood window of the central pixel point with the central pixel point and records the result as a binary value, so there are (31×31−1) binary numbers in the neighborhood window, and the value of the central feature point of the LBP algorithm is set to 1. Considering that the operating speed of the garment hanging machine requires a small amount of computation, the smallest operator capable of representing the texture vector is used to traverse the LBP binary value of each pixel point in the 31×31 neighborhood window of the central pixel point, so as to obtain the texture vector at each pixel point's position. This operator is denoted the C-T vector operator, where C represents the central pixel point (such as a corner point) and T represents the texture. The operator is a 3×3 binary matrix whose central entry S represents the vector value of the center pixel; the matrix itself is given as a figure in the original publication and is not reproduced here.
In one embodiment, referring to fig. 5, step S221 includes:
step S2211: calculating the number of 1's in the C-T operator matrix, and taking the number as a first identifier for representing the texture feature direction of the central pixel point; and performing exclusive-or calculation on the C-T operator matrix, counting the number of 1's in the C-T operator matrix after the exclusive-or calculation, and taking the number as a second identifier for identifying the inclination degree of the texture features.
Specifically, the C-T operator traverses all pixel points in the neighborhood window of the feature point: for each pixel, a logical AND is computed between the operator and the binary matrix of the pixel's 3×3 neighborhood, and the number of 1s in the result matrix is recorded. If the count equals 4, the texture feature vector of the pixel is likely in the vertical and horizontal directions; if it lies between 2 and 4, the texture direction may be vertical or horizontal. The accumulated count is denoted the first identifier x. The operator is then XORed with the binary matrix of the pixel, and the number of 1s in the resulting matrix is likewise recorded and denoted the second identifier y. The texture vector of the pixel can then be expressed as the pair (x, y). When y takes its maximum value 8, the pixel's binary neighborhood pattern differs from the operator in all eight neighbor positions (the corresponding matrix is shown in the original publication).
that is, the larger the value of y, the more likely the texture vector of the pixel is in the oblique direction. Then according to the value of pixel point SThe characteristic vector direction can be roughly judged, and the non-background area of the clothes edge characteristic points can be screened.
Step S2212: and calculating a symbiotic matrix corresponding to the center pixel point based on the first identifier and the second identifier of the center pixel point.
Specifically, the two-dimensional matrix in the 3×3 neighborhood window is split for analysis: the left matrix after decomposition, denoted X, reflects the vertical and horizontal texture vectors in the 3×3 window; the right matrix, denoted Y, represents the tilt vectors (the decomposed matrices are given as figures in the original publication). Co-occurrence matrices are then computed for each: the size of the X or Y co-occurrence matrix is determined by the maximum value in the X or Y matrix, and each entry of the co-occurrence matrix is the number of adjacent pairs. For example, if the maximum value in the X matrix is 4 and the co-occurrence matrix contains the value 6, this indicates that there are 6 neighbor pairs adjacent to the value 4 in the X matrix (the example co-occurrence matrix is shown in the original publication).
Step S2213: and performing class concentration calculation on the co-occurrence matrix to obtain a calculation result, wherein the calculation result respectively comprises an entropy value corresponding to the first identifier and an entropy value corresponding to the second identifier.
Specifically, for a block region with the same texture feature vector, the data of its co-occurrence matrix should be relatively concentrated and show little dispersion. A class-concentration calculation is therefore performed on the co-occurrence matrix; in this calculation, each value in the co-occurrence matrix is weighted by the minimum distance between its position and the position of the maximum value in the co-occurrence matrix (the exact formula is given in the original publication and is not reproduced here). The co-occurrence matrices of the first identifier X and the second identifier Y each yield an entropy value, denoted E_X and E_Y respectively.
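The class-concentration formula itself is not reproduced in the source; the sketch below encodes one plausible reading of the text, weighting each co-occurrence value by the minimum (Chebyshev) distance of its position from the matrix maximum.

import numpy as np

def class_concentration(C: np.ndarray) -> float:
    # Position of the maximum value in the co-occurrence matrix.
    mr, mc = np.unravel_index(np.argmax(C), C.shape)
    E = 0.0
    for (r, c), v in np.ndenumerate(C):
        d = max(abs(r - mr), abs(c - mc))  # minimum distance to the maximum
        E += float(v) * d
    # Smaller E means the co-occurrence data is more concentrated, less discrete.
    return E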
Step S2214: and calculating based on the entropy value corresponding to the first identifier and the entropy value corresponding to the second identifier in the calculation result to obtain a cross-correlation coefficient of the central pixel point and a cross-correlation coefficient of each neighborhood pixel point in the central pixel point area, wherein the cross-correlation coefficient of the central pixel point is determined to be a first initial texture feature, and the cross-correlation coefficient of each neighborhood pixel point in the central pixel point area is determined to be a second initial texture feature.
According to the texture characteristics of the clothing, if the texture feature vector of a garment is in a vertical or horizontal direction, it is not in an oblique direction; the degree of cross-correlation between the entropy values of the two co-occurrence matrices is therefore calculated (the calculation formula is given in the original publication and is not reproduced here), yielding the cross-correlation coefficient of the pixel.
step S222: a second similarity is calculated for the first initial texture feature and each of the second initial texture features.
The LBP binary values in the whole 31×31 pixel neighborhood window are thus converted into the cross-correlation coefficient of the corresponding region, and this value accurately describes the feature vector value of each pixel point; the feature vector value corresponding to the binary matrix of the feature point's own 3×3 neighborhood window is likewise recorded. At this point, every pixel in the 31×31 neighborhood window of the feature point, including the feature point itself, has a feature vector value with its corresponding calculation result. A similarity calculation is then performed between the feature vector value of each pixel point and that of the feature point, in order to determine the LBP binary-value sampling points whose texture is closest to that of the feature point. For two such one-dimensional values, the similarity, denoted D, is obtained by computing the squared difference: D = (ρp − ρf)², where ρp is the cross-correlation coefficient of the pixel point and ρf is that of the feature point.
step S223: the current texture feature is determined based on the second similarity.
A smaller D indicates a higher similarity between the texture features of the pixel point and those of the feature point.
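As a one-line illustration of this measure (symbols assumed as above):

def texture_similarity_D(rho_pixel: float, rho_feature: float) -> float:
    # Squared difference of the two scalar cross-correlation values;
    # smaller D means the pixel's texture is closer to the feature point's.
    return (rho_pixel - rho_feature) ** 2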
Step S23: and combining the current spatial position characteristic with the current texture characteristic to obtain current characteristic information.
Specifically, a spatial position parameter between the current texture feature and the current spatial position feature is determined, and the spatial position parameter is placed before the combined current spatial position feature and current texture feature, so as to obtain the current feature information.
In this way, 256 LBP binary sampling points are selected from the 31×31 neighborhood window according to the ranking of the D values of the pixel points, giving a 256-bit LBP binary string for the feature point in addition to the 256-bit BRIEF descriptor; connecting the two binary strings end to end yields a 512-bit LBP-BRIEF descriptor. Since the selection of the LBP binary values happens in two-dimensional space, pixels at different positions within the neighborhood window may have the same similarity D, so the spatial state of the LBP sampling points is described as well. For the two points in two-dimensional space, namely the LBP binary-value sampling point and the feature point, the minimum distance between them is denoted d, and the angle between the horizontal direction and the straight line determined by this minimum distance is denoted θ. The maximum of d is reached at a corner of the 31×31 neighborhood window, about 22 pixels, which converts to a 5-bit binary value; the angle θ has a maximum of 360° (0°), which converts to a 9-bit binary value. Splicing the 5-bit and 9-bit binary strings expresses the spatial position between the LBP sampling point and the feature point, denoted R.
The spatial position parameter R is placed in front of the LBP-BRIEF descriptor, and the combined R-LBP-BRIEF descriptor specifically describes the clothing feature points, covering both the texture features and their spatial positions.
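A sketch of assembling this final descriptor layout follows; the bit packing of d and θ and all names are illustrative.

import numpy as np

def spatial_parameter(d: float, theta_deg: float) -> np.ndarray:
    # 5-bit distance (max ~22 < 2^5) followed by 9-bit angle (max 359 < 2^9).
    d_bits = np.array(list(np.binary_repr(int(round(d)), width=5)), dtype=np.uint8)
    a_bits = np.array(list(np.binary_repr(int(round(theta_deg)) % 360, width=9)), dtype=np.uint8)
    return np.concatenate([d_bits, a_bits])

def rlbp_brief(d: float, theta_deg: float,
               lbp_bits: np.ndarray, brief_bits: np.ndarray) -> np.ndarray:
    # Layout: R (14 bits) + LBP (256 bits) + BRIEF (256 bits) = 526 bits.
    return np.concatenate([spatial_parameter(d, theta_deg), lbp_bits, brief_bits])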
Step S13: a first similarity between the current feature information and the reference feature information is calculated.
The feature points and descriptors of the clothing image on the hanging machine hanger, i.e., the current feature information, can thus be obtained; similarity matching is then performed against the descriptors of normally hung clothing, i.e., the reference feature information, using the Hamming distance for the similarity calculation.
Step S14: and if the first similarity is smaller than a first preset value, sending out early warning so as to process the currently hung clothing.
If the first similarity is smaller than a first preset value, an early warning is issued so that the currently hung clothing can be handled. In an embodiment, the first preset value may be set to 80%, and an early warning is issued when the first similarity is below 80%. In another embodiment, a second preset value may also be set, for example 70%, and an early warning is issued when the first similarity is below 70%. In a specific embodiment, if the first similarity is greater than 80%, the hanging is considered normal and no early warning is issued.
It will be appreciated that the threshold for similarity may be set between 70% and 80%.
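Putting the matching step together, a minimal sketch follows, assuming similarity is defined as the fraction of matching descriptor bits (one minus the normalized Hamming distance) and using the example thresholds above.

import numpy as np

def similarity(desc_cur: np.ndarray, desc_ref: np.ndarray) -> float:
    # 1 - normalized Hamming distance over the binary descriptors.
    return 1.0 - np.count_nonzero(desc_cur != desc_ref) / desc_cur.size

def check(desc_cur: np.ndarray, desc_ref: np.ndarray,
          first_preset: float = 0.80) -> str:
    # Issue an early warning when the first similarity falls below the preset.
    s = similarity(desc_cur, desc_ref)
    return "warn" if s < first_preset else "normal"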
The application improves on existing methods for monitoring abnormalities of hanging machines for clothing processing, including traditional manual visual inspection: an industrial camera photographs the clothing during operation of the hanging machine, and a machine vision scheme monitors the clothing on the hanging rack and issues early warnings of abnormalities. The ORB algorithm is used to match the image of clothing in hanging operation against a normal clothing image to screen for anomalies; the ORB algorithm is optimized through scene analysis to remedy the possible shortcomings of its descriptors in describing feature points in this scene. The LBP algorithm supplements the BRIEF descriptor based on the clothing texture features, and the LBP sampling points are determined from the texture feature vectors within the 31×31 neighborhood window of the BRIEF descriptor, so that the description of the feature points determined by the FAST algorithm is more accurate, the clothing matching accuracy is improved, and abnormalities in the hanging-machine workflow for clothing processing are monitored more accurately.
The foregoing describes only embodiments of the present application and does not limit its patent scope; all equivalent structures or equivalent processes derived from the description and the accompanying drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of patent protection of the application.

Claims (7)

1. The machine vision-based method for detecting the abnormality of the hanging machine for clothing processing is characterized by comprising the following steps of:
shooting clothes hung on the hanging machine by using a camera arranged on the hanging machine, and converting the shot image into a gray image;
extracting current characteristic information of the gray image, wherein the characteristic information comprises current texture characteristics and current spatial position characteristics;
calculating a first similarity between the current feature information and the reference feature information;
if the first similarity is smaller than a first preset value, sending out early warning so as to process the currently hung clothing;
extracting current characteristic information of the gray image, including:
calculating gray scale differences of neighborhood pixel points in the central pixel point area by utilizing a feature detection algorithm BRIEF so as to obtain the current spatial position feature;
comparing gray values of the central pixel point and the neighborhood pixel points in the central pixel point area by utilizing a local binary algorithm LBP so as to obtain the current texture characteristics;
combining the current spatial position feature with the current texture feature to obtain the current feature information;
comparing the gray value of the central pixel point with the gray value of the neighborhood pixel point in the central pixel point area by using a local binary algorithm LBP to obtain the current texture feature, wherein the method comprises the following steps:
calculating a first initial texture feature of the central pixel point and a second initial texture feature of each neighborhood pixel point in the central pixel point area by using the C-T operator matrix; wherein C represents a central pixel point and T represents texture;
calculating a second similarity of the first initial texture feature and each of the second initial texture features;
determining the current texture feature based on the second similarity;
calculating a first initial texture feature of the central pixel point and a second initial texture feature of each neighborhood pixel point in the central pixel point area by using the C-T operator matrix, wherein the first initial texture feature comprises:
calculating the number of 1's in the C-T operator matrix, and taking the number as a first identifier for representing the texture feature direction of the central pixel point; performing exclusive-or calculation on the C-T operator matrix, counting the number of 1's in the C-T operator matrix after the exclusive-or calculation, and taking the number as a second identifier for identifying the inclination degree of the texture features;
calculating a symbiotic matrix corresponding to the center pixel point based on the first identifier and the second identifier of the center pixel point;
performing class concentration calculation on the co-occurrence matrix to obtain a calculation result, wherein the calculation result respectively comprises an entropy value corresponding to the first identifier and an entropy value corresponding to the second identifier;
and calculating based on the entropy value corresponding to the first identifier and the entropy value corresponding to the second identifier in the calculation result to obtain a cross-correlation coefficient of the central pixel point and a cross-correlation coefficient of each neighborhood pixel point in the central pixel point area, wherein the cross-correlation coefficient of the central pixel point is determined to be a first initial texture feature, and the cross-correlation coefficient of each neighborhood pixel point in the central pixel point area is determined to be a second initial texture feature.
2. The method of claim 1, wherein the representation of the current texture feature and the representation of the current spatial location feature are both binary strings to facilitate the combination of the current spatial location feature and the current texture feature.
3. The method of claim 1, wherein the central pixel area is a preset area centered on the central pixel;
the central pixel point area in the characteristic detection algorithm BRIEF is the same as the central pixel point area in the local binary algorithm LBP in size.
4. The method of claim 1, wherein the representation of the current texture feature and the representation of the current spatial location feature are both binary strings, and the number of bits of the binary strings is the same.
5. The method of claim 1, wherein combining the current spatial location feature with the current texture feature to obtain the current feature information comprises:
determining a spatial location parameter between the current texture feature and the current spatial location feature;
and placing the spatial position parameter before the combination of the current spatial position feature and the current texture feature, so as to obtain the current feature information.
6. The method of claim 1, wherein an early warning is issued if the first similarity is less than a second preset value.
7. The method of claim 1, wherein after capturing the garment suspended from the hanger using the camera disposed on the hanger and converting the captured image into a gray scale image, comprising:
denoising the gray level image by using a median filtering method;
and performing foreground and background separation on the denoised image by using a threshold segmentation algorithm to extract a clothing region.
CN202310714740.0A 2023-06-16 2023-06-16 Machine vision-based method for detecting abnormality of hanging machine for clothing processing Active CN116486116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310714740.0A CN116486116B (en) 2023-06-16 2023-06-16 Machine vision-based method for detecting abnormality of hanging machine for clothing processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310714740.0A CN116486116B (en) 2023-06-16 2023-06-16 Machine vision-based method for detecting abnormality of hanging machine for clothing processing

Publications (2)

Publication Number Publication Date
CN116486116A (en) 2023-07-25
CN116486116B (en) 2023-08-29

Family

ID=87223472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310714740.0A Active CN116486116B (en) 2023-06-16 2023-06-16 Machine vision-based method for detecting abnormality of hanging machine for clothing processing

Country Status (1)

Country Link
CN (1) CN116486116B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794491A (en) * 2015-04-28 2015-07-22 重庆大学 Fuzzy clustering steel plate surface defect detection method based on pre classification
CN106384126A (en) * 2016-09-07 2017-02-08 东华大学 Clothes pattern identification method based on contour curvature feature points and support vector machine
CN107146246A (en) * 2017-05-08 2017-09-08 湘潭大学 One kind is used for workpiece machining surface background texture suppressing method
CN109960965A (en) * 2017-12-14 2019-07-02 翔升(上海)电子技术有限公司 Methods, devices and systems based on unmanned plane identification animal behavior
CN110232133A (en) * 2019-05-16 2019-09-13 华中科技大学 A kind of image of clothing search method and system classified based on Fusion Features and style
CN112330634A (en) * 2020-11-05 2021-02-05 恒信东方文化股份有限公司 Method and system for fine edge matting of clothing
WO2022011952A1 (en) * 2020-07-11 2022-01-20 贝塔科技(苏州)有限公司 Video recognition positioning system and method applied to garment hot stamping
CN114612469A (en) * 2022-05-09 2022-06-10 武汉中导光电设备有限公司 Product defect detection method, device and equipment and readable storage medium
CN114663803A (en) * 2022-02-28 2022-06-24 宝开(上海)智能物流科技有限公司 Logistics center hanging clothing classification method and device based on video streaming
CN114842469A (en) * 2022-05-16 2022-08-02 内蒙古工业大学 Self-adaptive identification method and system for mature fruits
CN115272316A (en) * 2022-09-27 2022-11-01 山东华太新能源电池有限公司 Intelligent detection method for welding quality of battery cover based on computer vision
CN115294137A (en) * 2022-10-09 2022-11-04 南通市通州区欢伴纺织品有限公司 Cloth surface color bleeding defect detection method
CN115311303A (en) * 2022-10-12 2022-11-08 南通富兰妮纺织品有限公司 Textile warp and weft defect detection method
JP2022170432A (en) * 2021-04-28 2022-11-10 キヤノン株式会社 Image processing apparatus and image processing method
CN115641254A (en) * 2022-10-13 2023-01-24 北京沃东天骏信息技术有限公司 Migration method and device
CN115713694A (en) * 2023-01-06 2023-02-24 东营国图信息科技有限公司 Land surveying and mapping information management method
CN116151931A (en) * 2023-04-04 2023-05-23 济宁大爱服装有限公司 Cross-border e-commerce sales integrated data processing system based on artificial intelligence

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6916765B2 (en) * 2003-03-06 2005-07-12 The C. W. Zumbiel Co. Consumer product package and method of manufacture
US20170119298A1 (en) * 2014-09-02 2017-05-04 Hong Kong Baptist University Method and Apparatus for Eye Gaze Tracking and Detection of Fatigue
US10032279B2 (en) * 2015-02-23 2018-07-24 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
CN112950508B (en) * 2021-03-12 2022-02-11 中国矿业大学(北京) Drainage pipeline video data restoration method based on computer vision

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794491A (en) * 2015-04-28 2015-07-22 重庆大学 Fuzzy clustering steel plate surface defect detection method based on pre classification
CN106384126A (en) * 2016-09-07 2017-02-08 东华大学 Clothes pattern identification method based on contour curvature feature points and support vector machine
CN107146246A (en) * 2017-05-08 2017-09-08 湘潭大学 One kind is used for workpiece machining surface background texture suppressing method
CN109960965A (en) * 2017-12-14 2019-07-02 翔升(上海)电子技术有限公司 Methods, devices and systems based on unmanned plane identification animal behavior
CN110232133A (en) * 2019-05-16 2019-09-13 华中科技大学 A kind of image of clothing search method and system classified based on Fusion Features and style
WO2022011952A1 (en) * 2020-07-11 2022-01-20 贝塔科技(苏州)有限公司 Video recognition positioning system and method applied to garment hot stamping
CN112330634A (en) * 2020-11-05 2021-02-05 恒信东方文化股份有限公司 Method and system for fine edge matting of clothing
JP2022170432A (en) * 2021-04-28 2022-11-10 キヤノン株式会社 Image processing apparatus and image processing method
CN114663803A (en) * 2022-02-28 2022-06-24 宝开(上海)智能物流科技有限公司 Logistics center hanging clothing classification method and device based on video streaming
CN114612469A (en) * 2022-05-09 2022-06-10 武汉中导光电设备有限公司 Product defect detection method, device and equipment and readable storage medium
CN114842469A (en) * 2022-05-16 2022-08-02 内蒙古工业大学 Self-adaptive identification method and system for mature fruits
CN115272316A (en) * 2022-09-27 2022-11-01 山东华太新能源电池有限公司 Intelligent detection method for welding quality of battery cover based on computer vision
CN115294137A (en) * 2022-10-09 2022-11-04 南通市通州区欢伴纺织品有限公司 Cloth surface color bleeding defect detection method
CN115311303A (en) * 2022-10-12 2022-11-08 南通富兰妮纺织品有限公司 Textile warp and weft defect detection method
CN115641254A (en) * 2022-10-13 2023-01-24 北京沃东天骏信息技术有限公司 Migration method and device
CN115713694A (en) * 2023-01-06 2023-02-24 东营国图信息科技有限公司 Land surveying and mapping information management method
CN116151931A (en) * 2023-04-04 2023-05-23 济宁大爱服装有限公司 Cross-border e-commerce sales integrated data processing system based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adaptive grayscale-weighted robust fuzzy C-means image segmentation (自适应灰度加权的鲁棒模糊C均值图像分割); Lu Haiqing; Ge Hongwei; CAAI Transactions on Intelligent Systems (04); full text *

Also Published As

Publication number Publication date
CN116486116A (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN115082419B (en) Blow-molded luggage production defect detection method
US11688057B2 (en) Method and system for quickly matching image features applied to mine machine vision
CN110148130B (en) Method and device for detecting part defects
WO2022099598A1 (en) Video dynamic target detection method based on relative statistical features of image pixels
CN108647706B (en) Article identification classification and flaw detection method based on machine vision
CN110335233B (en) Highway guardrail plate defect detection system and method based on image processing technology
CN110288571B (en) High-speed rail contact net insulator abnormity detection method based on image processing
CN113706490B (en) Wafer defect detection method
CN107016394A (en) A kind of decussating fibers characteristic point matching method
Bullkich et al. Moving shadow detection by nonlinear tone-mapping
CN109781737A (en) A kind of detection method and its detection system of hose surface defect
CN114863464B (en) Second-order identification method for PID drawing picture information
CN111667475A (en) Machine vision-based Chinese date grading detection method
CN111665199A (en) Wire and cable color detection and identification method based on machine vision
CN111028263B (en) Moving object segmentation method and system based on optical flow color clustering
CN114092478B (en) Anomaly detection method
CN111932490A (en) Method for extracting grabbing information of visual system of industrial robot
CN110569716A (en) Goods shelf image copying detection method
CN116486116B (en) Machine vision-based method for detecting abnormality of hanging machine for clothing processing
Kaur et al. 2-D geometric shape recognition using canny edge detection technique
CN106446832B (en) Video-based pedestrian real-time detection method
CN113139946A (en) Shirt stain positioning device based on vision
Abdusalomov et al. Robust shadow removal technique for improving image enhancement based on segmentation method
CN114119658A (en) Following algorithm for multi-feature self-adaptive fusion
CN113723314A (en) Sugarcane stem node identification method based on YOLOv3 algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant