CN110865084A - Self-learning mode-based harness wire separation detection system and method


Info

Publication number
CN110865084A
CN110865084A (application CN201911260890.9A)
Authority
CN
China
Prior art keywords
yarn
image
information
detected
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911260890.9A
Other languages
Chinese (zh)
Inventor
李守斌 (Li Shoubin)
唐冲 (Tang Chong)
刘洋洋 (Liu Yangyang)
Original Assignee
Lightning (Kunshan) Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lightning (Kunshan) Intelligent Technology Co., Ltd.
Priority to CN201911260890.9A
Publication of CN110865084A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8854 Grading and classifying of flaws
    • G01N2021/8874 Taking dimensions of defect into account
    • G01N2021/8883 Scan or image signal processing involving the calculation of gauges, generating models
    • G01N2021/8887 Scan or image signal processing based on image processing techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention relates to the technical field of textiles, in particular to a self-learning-mode-based heddle (harness wire) separation detection system and method. The system comprises: an image acquisition unit for acquiring a sample image of the yarn to be detected; a sample preprocessing module for preprocessing the sample image; a flaw detection module for detecting the flaw area of the preprocessed image; a feature extraction module for extracting feature vector information from the detected flaw area image; a yarn information database for storing the information of each standard yarn; a yarn classification detection module for classifying the yarn to be detected and judging single/multiple yarns according to the feature vector information and the standard yarn information; and a PC control terminal for performing the corresponding heddle separation control according to the classification result and the single/multiple-yarn judgment result. The invention performs intelligent yarn classification and single/multiple-yarn detection on yarns during heddle separation, effectively improving yarn classification and detection efficiency and reducing machine error.

Description

Self-learning mode-based harness wire separation detection system and method
Technical Field
The invention relates to the technical field of textiles, in particular to a self-learning-mode-based heddle separation detection system and method.
Background
Traditional handicraft textile production has gradually been replaced by automated production-line manufacturing. In the yarn industry, a complete industrial production line can achieve output on the order of hundreds of millions of units per day, and at such volume quality becomes all the more important. In the weaving process, heddle elements must be accurately separated from a stacked queue and fed into a threading system at high speed: the heddles are first moved in one pass to a heddle storage position, then conveyed to the corresponding process station on an automatic drawing-in machine, separated into single heddles, and passed on to the subsequent processes. In the prior art, because machine sensitivity fails to reach the required standard or is not ideal, or for other reasons such as machine aging, the machine often hooks the wrong yarn, or fails to hook a yarn, during the hooking process, and the product cannot reach the desired level. Image detection technology has grown increasingly mature and is gradually being applied to quality inspection, but because yarn varieties are complex and diverse, traditional image detection offers no good solution to the many-sample, many-scene conditions involved.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a heddle separation detection system and method based on a self-learning mode. When applied, they perform intelligent yarn classification and single/multiple-yarn detection on yarns during heddle separation, effectively improving yarn classification and detection efficiency, reducing machine error, and improving product quality.
The technical scheme adopted by the invention is as follows:
The self-learning-mode-based harness wire (heddle) separation detection system comprises: an image acquisition unit for acquiring a sample image of the yarn to be detected; a sample preprocessing module for preprocessing the sample image; a flaw detection module for detecting the flaw area of the preprocessed image; a feature extraction module for extracting the feature vector of the detected flaw area image; a yarn information database for storing the information of each standard yarn; a yarn classification detection module for classifying the yarn to be detected and judging single/multiple yarns according to the feature vector and the standard yarn information; and a PC control terminal for performing the corresponding heddle separation control according to the classification result and the single/multiple-yarn judgment result.
Preferably, the yarn classification detection module is provided with a yarn classification training model for classifying the yarn to be detected and a single/multiple-yarn training model for judging whether the yarn to be detected is a single yarn or multiple yarns.
Preferably, the PC control terminal comprises a CPU host, a display and a hard disk memory; it is connected to the image acquisition unit and stores and displays the sample images that the unit acquires.
The heddle separation detection method based on the self-learning mode comprises the following steps:
S1, hooking the yarn to be detected, and collecting a sample image of the yarn to be detected;
S2, preprocessing the collected sample image;
S3, detecting the defect area of the preprocessed image, and extracting a defect area image;
S4, extracting feature vector information of the defective region image, and extracting classification information of the preprocessed image corresponding to the defective region image;
and S5, classifying the yarns to be detected and judging single/multiple yarns according to the characteristic vector information and the classification information, and performing corresponding harness wire separation control according to the judgment result.
Preferably, in step S2, preprocessing the sample image comprises: first performing image data processing on the sample image, then image noise reduction, and finally image enhancement.
Preferably, in step S2, preprocessing the sample image further comprises: performing mean-value downsampling on the enhanced image followed by bilinear interpolation, then performing variance downsampling, again followed by bilinear interpolation.
Preferably, in step S3, the step of detecting the defective area in the image includes:
S31, performing image enhancement processing on the image, then performing gray value detection, and acquiring an image of a gray characteristic value difference area of the target to be detected;
S32, carrying out image segmentation on the image in the gray characteristic value difference area to obtain a segmented image;
S33, carrying out filtering detection on each segmented image to obtain a filtered image;
and S34, carrying out multidirectional fusion on the filtered images to obtain a defect area image.
Preferably, in step S4, the extracted feature vector information includes yarn width information and yarn gap feature information, and the classification information includes yarn color information and yarn transmittance information.
Preferably, in step S5, the yarn width information, the yarn color information, and the yarn transmittance information are introduced into a yarn classification training model to complete classification and determination of the yarn to be tested, and the yarn transmittance information, the yarn width information, and the yarn gap characteristic information are introduced into a single/multiple yarn training model to complete single/multiple yarn determination of the yarn to be tested.
Preferably, in step S5, when the judgment result for the yarn to be detected is multiple yarns, the hooked yarn is placed back, the yarn is hooked again, and steps S1 to S5 are repeated until the judgment result is single yarn; when the number of repeated hookings exceeds a set value and the result is still multiple yarns, manual inspection is performed, and when the manual result is a single yarn, the corresponding feature vector information is extracted to correct the single/multiple-yarn training model.
The invention has the beneficial effects that:
the invention carries out automatic yarn classification and single/multi-yarn detection on the yarns in the harness wire separation process by the image detection technology, effectively improves the classification detection efficiency, reduces the machine error, improves the product quality, and simultaneously can carry out parameter updating training on the deep learning model of the classification detection according to abnormal data in the classification detection process, perfect the function and improve the classification detection precision.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic block diagram of the system architecture of the present invention;
FIG. 2 is a schematic block diagram of the connection of the PC control terminal;
FIG. 3 is a schematic view showing an image digitization process in embodiment 3;
FIG. 4 is a schematic diagram showing a preprocessing process after the image enhancement processing in embodiment 3;
fig. 5 is a schematic diagram of image acquisition of a defective area in example 4.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It should be understood that the terms first, second, etc. are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" herein merely describes an association between objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and" herein describes another association and means that two relationships may exist; for example, A/and B may mean: A exists alone, or A and B exist together. Further, the character "/" herein generally indicates an "or" relationship between the associated objects.
It is to be understood that in the description of the present invention, the terms "upper", "vertical", "inside", "outside", and the like, refer to an orientation or positional relationship that is conventionally used for placing the product of the present invention, or that is conventionally understood by those skilled in the art, and are used merely for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and therefore should not be considered as limiting the present invention.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," and "connected" are to be construed broadly, e.g., as meaning fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or sometimes in the reverse order, depending upon the functionality/acts involved.
In the following description, specific details are provided to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
Example 1:
This embodiment provides a heddle separation detection system based on a self-learning mode, as shown in figs. 1 and 2:
the system comprises an image acquisition unit for acquiring a sample image of the yarn to be detected, a sample preprocessing module for preprocessing the sample image, a flaw detection module for detecting a flaw area of the preprocessed image, a feature extraction module for extracting a feature vector of the detected flaw area image, a yarn information database for storing information of each standard yarn, a yarn classification detection module for classifying and judging the yarn to be detected and single/multiple yarns according to the feature vector and the information of each standard yarn, and a PC control terminal for performing corresponding heddle separation control according to the classification result of the yarn to be detected and the judgment result of the single/multiple yarns. The yarn classification detection module is provided with a yarn classification training model for performing classification judgment on the yarn to be detected and a single/multi-yarn training model for performing single/multi-yarn judgment on the yarn to be detected. The PC control end comprises a CPU host, a display and a hard disk memory, and is in butt joint with the image acquisition unit and used for storing and displaying the sample image acquired by the image acquisition unit. In the image acquisition process, because of great requirements on real-time performance, a high-speed high-precision camera can be used, which has great influence on the detection result. The camera is matched with a maximum 1/27 ten thousand high-number electronic shutter, so that the instantaneous high-number phenomenon can be accurately captured, 3 ten thousand pictures are shot per second at the resolution of 512 x 512 by using a high-speed CMOS image sensor, and the maximum three cameras can be arranged on one machine to shoot simultaneously at different angles by using attached control software.
Example 2:
The heddle separation detection method based on the self-learning mode comprises the following steps:
S1, hooking the yarn to be detected, and collecting a sample image of the yarn to be detected;
S2, preprocessing the collected sample image;
S3, detecting the defect area of the preprocessed image, and extracting a defect area image;
S4, extracting feature vector information of the defective region image, and extracting classification information of the preprocessed image corresponding to the defective region image;
and S5, classifying the yarns to be detected and judging single/multiple yarns according to the characteristic vector information and the classification information, and performing corresponding harness wire separation control according to the judgment result.
Example 3:
As an optimization of the above embodiment, the process of preprocessing the sample image is as follows: first, the sample image undergoes image data processing as shown in fig. 3; then image noise reduction; and finally image enhancement. As shown in fig. 4, the enhanced image is mean-downsampled and restored by bilinear interpolation, then variance-downsampled and again restored by bilinear interpolation.
Once the sample image has been acquired, it must undergo data processing, that is, all the information in the image frame is converted into a form a computer can process. Specifically, the image is divided into small regions called pixels, and the gray value (brightness) of each pixel is expressed as an integer; this yields a digital image. Image data processing comprises two steps: sampling and quantization. Sampling is the process of transforming a spatially continuous image into discrete points; the sampling aperture and the sampling interval are its two most important parameters. The detailed procedure is: first, scan linearly along the horizontal direction, top to bottom, at a fixed vertical interval, obtaining a one-dimensional scan line of gray values for each horizontal line; second, sample each one-dimensional scan-line signal at a fixed interval to obtain a discrete signal. The sampling code is as follows:
void CImageProcessingView::OnCy()
{
    if (numPicture == 0) {
        AfxMessageBox("Load a picture before sampling!", MB_OK, 0);
        return;
    }
    CImageCYDlg dlg;                  // sampling dialog
    if (dlg.DoModal() == IDOK) {      // show dialog box
        // sample counts default to the picture's own pixel dimensions
        if (dlg.m_xPlace == 0 || dlg.m_yPlace == 0) {
            AfxMessageBox("Input picture pixels cannot be 0!", MB_OK, 0);
            return;
        }
        if (dlg.m_xPlace > m_nWidth || dlg.m_yPlace > m_nHeight) {
            AfxMessageBox("Picture pixels cannot exceed the original image length and width!", MB_OK, 0);
            return;
        }
        AfxMessageBox("Picture sampled!", MB_OK, 0);
        // open temporary picture files for reading and writing
        FILE* fpo = fopen(BmpName, "rb");
        FILE* fpw = fopen(BmpNameLin, "wb+");
        fread(&bfh, sizeof(BITMAPFILEHEADER), 1, fpo);
        fread(&bih, sizeof(BITMAPINFOHEADER), 1, fpo);
        fwrite(&bfh, sizeof(BITMAPFILEHEADER), 1, fpw);
        fwrite(&bih, sizeof(BITMAPINFOHEADER), 1, fpw);
        fread(m_pImage, m_nImage, 1, fpo);
        // picture sampling
        int numWidth, numHeight;      // size of each block that takes one pixel value
        int numSYWidth, numSYHeight;  // remainder region (computed, unused below)
        // numWidth x numHeight is one block filled with a single color,
        // e.g. for a 512x512 image sampled at 512x512 each block is 1x1;
        // dlg.m_xPlace, dlg.m_yPlace are the numbers of sample points in x and y;
        // numSYWidth/numSYHeight: leftover space unified into one color
        numWidth = m_nWidth / dlg.m_xPlace;
        numHeight = m_nHeight / dlg.m_yPlace;
        numSYWidth = m_nWidth % dlg.m_xPlace;
        numSYHeight = m_nHeight % dlg.m_yPlace;
        int Y, X;
        int i, j, m, n;
        unsigned char red, green, blue;  // stores the three color channels
        // there are (m_xPlace * m_yPlace) blocks plus the remainder region
        for (i = 0; i < dlg.m_yPlace; i++) {       // height direction
            Y = numHeight * i;                     // Y coordinate of the block
            for (j = 0; j < dlg.m_xPlace; j++) {   // width direction (fixed: original looped over m_yPlace)
                X = numWidth * j;                  // X coordinate of the block
                // take the fill color from the block's top-left pixel
                red   = m_pImage[(X + Y * m_nWidth) * 3];
                green = m_pImage[(X + Y * m_nWidth) * 3 + 1];
                blue  = m_pImage[(X + Y * m_nWidth) * 3 + 2];
                // fill the whole block with that color
                for (n = 0; n < numHeight; n++) {
                    for (m = 0; m < numWidth * 3;) {
                        m_pImage[(X + Y * m_nWidth) * 3 + m + n * m_nWidth * 3] = red;   m++;
                        m_pImage[(X + Y * m_nWidth) * 3 + m + n * m_nWidth * 3] = green; m++;
                        m_pImage[(X + Y * m_nWidth) * 3 + m + n * m_nWidth * 3] = blue;  m++;
                    }
                }
            }
        }
        fwrite(m_pImage, m_nImage, 1, fpw);
        fclose(fpo);
        fclose(fpw);
        numPicture = 2;
        level = 3;
        Invalidate();
    }
}
After sampling, the image is divided into spatially discrete pixels, but their gray values are still continuous and cannot yet be processed by a computer; quantization, which converts each pixel's gray value into a discrete integer value, is also required. There are two basic methods: non-equal-interval quantization and equal-interval quantization. If the image's gray values are unevenly distributed over the black-to-white range, non-equal-interval quantization can be used. Its basic idea is to shrink the quantization interval over gray ranges where pixel values occur frequently and enlarge it where they occur rarely; that is, the quantization intervals are chosen from the probability density of the actual gray distribution so as to minimize the total quantization error. Equal-interval quantization simply divides the gray range of the sample at equal intervals; it yields small error for images whose gray values are evenly distributed over the black-to-white range. Since in practice most of our samples are extreme black-and-white images, the non-equal-interval method is adopted. The quantization code (a two-level example) is as follows:
void CImageProcessingView::OnLh2()
{
    if (numPicture == 0) {
        AfxMessageBox("Load a picture before quantizing!", MB_OK, 0);
        return;
    }
    AfxMessageBox("Quantization level 2!", MB_OK, 0);
    // open temporary picture files
    FILE* fpo = fopen(BmpName, "rb");
    FILE* fpw = fopen(BmpNameLin, "wb+");
    // read and copy the bitmap headers
    fread(&bfh, sizeof(BITMAPFILEHEADER), 1, fpo);
    fread(&bih, sizeof(BITMAPINFOHEADER), 1, fpo);
    fwrite(&bfh, sizeof(BITMAPFILEHEADER), 1, fpw);
    fwrite(&bih, sizeof(BITMAPINFOHEADER), 1, fpw);
    // allocate the image buffer
    m_pImage = (BYTE*)malloc(m_nImage);
    fread(m_pImage, m_nImage, 1, fpo);
    // two-level quantization over 24-bit true color (3 bytes per pixel: R, G, B):
    // gray values below 128 map to 0, the rest map to 128
    for (int i = 0; i < m_nImage; i++) {
        if (m_pImage[i] < 128) {
            m_pImage[i] = 0;
        } else {
            m_pImage[i] = 128;
        }
    }
    fwrite(m_pImage, m_nImage, 1, fpw);
    fclose(fpo);
    fclose(fpw);
    numPicture = 2;
    level = 2;
    Invalidate();
}
Through in-depth research on current image denoising algorithms based on partial differential equations, the representative total-variation denoising algorithm is improved. The improvements lie in two respects: 1. Adaptivity: the original total-variation denoising algorithm must know the image's noise variance to denoise well; the algorithm model is modified so that denoising remains clearly effective even when the noise variance is unknown. 2. The proposed algorithm removes noise more effectively and suppresses the "staircase effect" that frequently appears in the output of the original total-variation algorithm, so the image retains better quality while the noise is removed. The basic code is as follows:
(The improved total-variation denoising listing appears in the original filing only as embedded images, Figures BDA0002311550960000121 through BDA0002311550960000151, and is not reproduced here.)
example 4:
As an optimization of the above embodiment, in step S3, the process of detecting a defective area in an image includes:
S31, performing image enhancement processing on the image, then performing gray value detection, and acquiring an image of a gray characteristic value difference area of the target to be detected;
S32, carrying out image segmentation on the image in the gray characteristic value difference area to obtain a segmented image;
S33, carrying out filtering detection on each segmented image to obtain a filtered image;
and S34, carrying out multidirectional fusion on the filtered images to obtain a defect area image.
The region image enhancement is based on interval features: in the image of a standard single yarn, the gray values of the pixels across the yarn width, and within the yarn region, are very similar, whereas those of a defect region are not, and enhancing the salient feature points in the image further widens the difference between the two. Image segmentation then exploits the difference in gray feature values between the target to be extracted and the background: the image is treated as a combination of two regions with different gray levels, and a suitable threshold is selected to decide whether each pixel belongs to the target or to the background, producing the corresponding binary image. The corresponding code is as follows:
(The enhancement and segmentation listing appears in the original filing only as embedded images, Figures BDA0002311550960000161 through BDA0002311550960000191, and is not reproduced here.)
Then the segmented images are subjected to unsupervised channel filtering detection: as shown in fig. 5, a Gabor filter is applied to all the pictures to be detected, yielding a set of filtered images I(x, y). When detection starts, the reference parameters are computed from the first several groups of data; in each subsequent detection, the sample's parameters are obtained in the same way, and its mean and standard deviation are compared against the stored reference parameters, enabling real-time dynamic detection. A minimal sketch of this step is given below.
Example 5:
As an optimization of the above embodiment, in step S4, the extracted feature vector information includes yarn width information and yarn gap feature information, and the classification information includes yarn color information and yarn transmittance information.
In step S5, the yarn width information, the yarn color information, and the yarn transmittance information are introduced into a yarn classification training model to complete classification and judgment of the yarn to be tested, and the yarn transmittance information, the yarn width information, and the yarn gap characteristic information are introduced into a single/multiple yarn training model to complete single/multiple yarn judgment of the yarn to be tested.
When the judgment result of the yarn to be detected is multiple yarns, the hooked yarn to be detected is placed back, the yarn to be detected is hooked again, and the steps S1 to S5 are repeated until the judgment result of the yarn to be detected is single yarn; and when the repeated hooking times exceed a set value and the judgment result is still multi-yarn, carrying out manual detection, and when the manual detection result is single yarn, extracting corresponding characteristic vector information to correct the single/multi-yarn training model.
For detection using width and gap features, single-yarn and multi-yarn parameters are derived from samples in an initial stage. Both parameter values start out null; when the machine is started, the single-yarn parameter is determined from the first captured data image, and once it is set, samples are checked against a threshold. When a sample exceeding the threshold is detected, the machine pauses so the user can confirm whether it is multiple yarns; if it is, the multi-yarn parameter is updated, and if it is a single yarn, the single-yarn parameter is updated. In subsequent detection, whenever multiple yarns are detected, the multi-yarn flag is set to 1, the machine returns the hooked yarn, and the yarn is hooked again for re-detection. Each time multiple yarns are detected, the system also considers whether the single-yarn parameter has been set too small and adjusts it dynamically if it is unsuitable. Only when three consecutive samples are judged multi-yarn does the machine stop for the user to decide: if they really are multiple yarns, the corresponding action is taken; if they are judged to be single yarns after all, the single-yarn parameter is rolled back to its value before the three detections and the multi-yarn parameter is re-updated from the image features. If the proportion of multi-yarn detections within a given quantity of yarns is smaller than a certain value, the single-yarn parameter is dynamically reduced, preventing an oversized single-yarn parameter from misjudging multiple yarns as single; if that proportion is larger than a certain value, the single-yarn parameter is dynamically increased, preventing an undersized parameter from misjudging too many single yarns as multiple. A minimal sketch of this adjustment logic is given below.
For detection by transmittance: some yarns may overlap completely, in which case yarn width cannot distinguish them, but overlapped yarns differ clearly from a single yarn in transmittance, so the image data are used to compute the total pixel value of the yarn color within the yarn region. When the machine is started, the pixel characteristics of one image are recorded while width-based detection is used; when a multi-yarn image is detected, its pixel value is recorded, a reasonable threshold is set from the difference between single-yarn and multi-yarn transmittance, and subsequent images are then checked against that threshold in real time, as sketched below.
The present invention is not limited to the above alternative embodiments, and anyone may derive products in various other forms in light of it; however, the above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims, and the description is to be interpreted accordingly.

Claims (10)

1. Harness wire separation detection system based on self-learning mode, characterized by including: the device comprises an image acquisition unit for acquiring a sample image of the yarn to be detected, a sample preprocessing module for preprocessing the sample image, a flaw detection module for detecting a flaw area of the preprocessed image, a feature extraction module for extracting feature vector information of the detected flaw area image, a yarn information database for storing information of each standard yarn, a yarn classification detection module for classifying and judging the yarn to be detected and single/multiple yarns according to the feature vector information and the information of each standard yarn, and a PC control terminal for performing corresponding heddle separation control according to the classification result of the yarn to be detected and the judgment result of the single/multiple yarns.
2. The self-learning mode based heddle separation detection system according to claim 1 wherein: the yarn classification detection module is provided with a yarn classification training model for performing classification judgment on the yarn to be detected and a single/multi-yarn training model for performing single/multi-yarn judgment on the yarn to be detected.
3. The self-learning mode based heddle separation detection system according to claim 1 wherein: the PC control end comprises a CPU host, a display and a hard disk memory, and is in butt joint with the image acquisition unit and used for storing and displaying the sample image acquired by the image acquisition unit.
4. A self-learning mode-based heddle separation detection method using the system according to any one of claims 1 to 3, characterized by comprising the steps of:
S1, hooking the yarn to be detected, and collecting a sample image of the yarn to be detected;
S2, preprocessing the collected sample image;
S3, detecting the defect area of the preprocessed image, and extracting a defect area image;
S4, extracting feature vector information of the defective region image, and extracting classification information of the preprocessed image corresponding to the defective region image;
and S5, classifying the yarns to be detected and judging single/multiple yarns according to the characteristic vector information and the classification information, and performing corresponding harness wire separation control according to the judgment result.
5. The self-learning mode-based heddle separation detection method according to claim 4, characterized in that: in step S2, the process of preprocessing the sample image includes: firstly, the sample image is subjected to image data processing, then image noise reduction processing is carried out, and finally image enhancement processing is carried out.
6. The self-learning mode-based heddle separation detection method according to claim 5, characterized in that: in step S2, the process of preprocessing the sample image further includes: and carrying out mean value downsampling on the image after the enhancement processing, carrying out bilinear interpolation after the mean value downsampling, then carrying out variance downsampling, and carrying out bilinear interpolation after the variance downsampling.
7. The self-learning mode-based heddle separation detection method according to claim 4, characterized in that: in step S3, the process of detecting a defective area in an image includes:
S31, performing image enhancement processing on the image, then performing gray value detection, and acquiring an image of a gray characteristic value difference area of the target to be detected;
S32, carrying out image segmentation on the image in the gray characteristic value difference area to obtain a segmented image;
S33, carrying out filtering detection on each segmented image to obtain a filtered image;
and S34, carrying out multidirectional fusion on the filtered images to obtain a defect area image.
8. The self-learning mode-based heddle separation detection method according to claim 4, characterized in that: in step S4, the extracted feature vector information includes yarn width information and yarn gap feature information, and the classification information includes yarn color information and yarn transmittance information.
9. The self-learning mode-based heddle separation detection method according to claim 8, characterized in that: in step S5, the yarn width information, the yarn color information, and the yarn transmittance information are introduced into a yarn classification training model to complete classification and judgment of the yarn to be tested, and the yarn transmittance information, the yarn width information, and the yarn gap characteristic information are introduced into a single/multiple yarn training model to complete single/multiple yarn judgment of the yarn to be tested.
10. The self-learning mode-based heddle separation detection method according to claim 9, characterized in that: in step S5, when the determination result of the yarn to be measured is multiple yarns, the hooked yarn to be measured is placed back, the yarn to be measured is hooked again, and the steps S1 to S5 are repeated until the determination result of the yarn to be measured is single yarn; and when the repeated hooking times exceed a set value and the judgment result is still multi-yarn, carrying out manual detection, and when the manual detection result is single yarn, extracting corresponding characteristic vector information to correct the single/multi-yarn training model.
CN201911260890.9A 2019-12-10 2019-12-10 Self-learning mode-based harness wire separation detection system and method Pending CN110865084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911260890.9A CN110865084A (en) 2019-12-10 2019-12-10 Self-learning mode-based harness wire separation detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911260890.9A CN110865084A (en) 2019-12-10 2019-12-10 Self-learning mode-based harness wire separation detection system and method

Publications (1)

Publication Number Publication Date
CN110865084A true CN110865084A (en) 2020-03-06

Family

ID=69658759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911260890.9A Pending CN110865084A (en) 2019-12-10 2019-12-10 Self-learning mode-based harness wire separation detection system and method

Country Status (1)

Country Link
CN (1) CN110865084A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2093989A (en) * 1981-02-12 1982-09-08 Management Dev Services Ni Ltd A monitor and method of monitoring
JPH08218252A (en) * 1995-02-08 1996-08-27 Kazumitsu Miyagawa Apparatus for detecting and controlling warp-breakage of loom
CN1464922A (en) * 2001-04-25 2003-12-31 普费菲孔施陶卜里股份公司 Device and method for separating threads out of a thread layer
US20050003138A1 (en) * 2003-07-03 2005-01-06 Burlington Industries, Inc. Soiling detector for fabrics
CN101634082A (en) * 2008-07-25 2010-01-27 史托比利法费康股份有限公司 Threading machine and method for threading warp yarns in elements of a weaving machine
CN108596249A (en) * 2018-04-24 2018-09-28 苏州晓创光电科技有限公司 The method and apparatus of image characteristics extraction and classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨磊 (Yang Lei) et al.: "《数字媒体技术概论》" (Introduction to Digital Media Technology) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724214A (en) * 2021-08-23 2021-11-30 唯智医疗科技(佛山)有限公司 Image processing method and device based on neural network
CN113724214B (en) * 2021-08-23 2024-02-23 唯智医疗科技(佛山)有限公司 Image processing method and device based on neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220119

Address after: 102200 room 652, unit 6, floor 5, building 20, Jiayun Park, Dongxiaokou Town, Changping District, Beijing

Applicant after: Li Shoubin

Address before: 215334 2209, floor 22, building 3, No. 1, Hongfeng Road, enterprise science park, Qianjin East Road, Kunshan Development Zone, Suzhou, Jiangsu Province

Applicant before: Lightning (Kunshan) Intelligent Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200306