CN114066848B - FPCA appearance defect visual detection system - Google Patents

FPCA appearance defect visual detection system

Info

Publication number
CN114066848B
CN114066848B (application CN202111352836.4A)
Authority
CN
China
Prior art keywords
sample
value
training
fpca
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111352836.4A
Other languages
Chinese (zh)
Other versions
CN114066848A (en)
Inventor
朱亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Fast Optical Technology Co ltd
Original Assignee
Suzhou Fast Optical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Fast Optical Technology Co ltd filed Critical Suzhou Fast Optical Technology Co ltd
Priority to CN202111352836.4A priority Critical patent/CN114066848B/en
Publication of CN114066848A publication Critical patent/CN114066848A/en
Application granted granted Critical
Publication of CN114066848B publication Critical patent/CN114066848B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956Inspecting patterns on the surface of objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956Inspecting patterns on the surface of objects
    • G01N21/95623Inspecting patterns on the surface of objects using a spatial filtering method
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an FPCA appearance defect visual detection system, relating to the fields of industrial visual inspection and image processing. It addresses the high labor cost, personnel instability and low detection speed of product quality inspection on FPCA production lines, as well as the poor generality of traditional image-processing defect detection methods and their inability to accurately identify defect types and locate defects. The invention offers higher detection efficiency, saves labor and enterprise cost, raises the automation level of the factory, improves overall production efficiency, and makes product quality more stable. Compared with traditional image processing, it adapts to complex and variable FPCA products, generalizes well, and performs well on small defects with complex features. In addition, registered terminals are selected by their sample training values to process the sample images, which reduces the training load on the sample training module and improves model training efficiency.

Description

FPCA appearance defect visual detection system
Technical Field
The invention relates to the field of industrial visual detection and image processing, in particular to a visual detection system for FPCA appearance defects.
Background
FPCA is a flexible circuit board assembled with electronic components by SMT on an automated production line. At present, quality inspection of FPCA products relies mainly on two methods, manual visual inspection and traditional image processing; manual inspection suffers from low efficiency, poor real-time performance and high cost. As industrial manufacturing grows more precise and complex, manual visual inspection and traditional image-processing methods are increasingly unable to meet current production requirements.
Traditional image-processing defect detection generally takes one of two forms: image matching or pattern recognition. Image matching builds a template from defect-free images, matches the template against test images, and judges whether defects exist from the resulting match score. Pattern recognition obtains candidate defect regions through image preprocessing, filtering, enhancement and segmentation, extracts features from those regions, and then analyses and judges them against prior knowledge.
Disclosure of Invention
The invention aims to solve the problems of high labor cost, personnel instability and low detection speed in product quality inspection on FPCA production lines, and the poor generality of traditional image-processing defect detection methods, which cannot accurately identify defect types or locate defects. To this end it provides an FPCA appearance defect visual detection system.
The aim of the invention can be achieved by the following technical scheme:
a visual inspection system for FPCA appearance defects comprises a sample acquisition module, a server, a sample processing module and a sample training module; the sample acquisition module acquires RGB color images of the FPCA to obtain an FPCA whole image and sends the FPCA whole image to a server for storage;
the sample processing module is used for sending the collected FPCA whole image to the processing end for manual marking, receiving the marked FPCA whole image, marking the marked FPCA whole image as a sample image, marking all the sample images as a sample set, dividing the sample set into a training set and a test set according to the proportion, and sending the training set to the sample training module;
the sample training module is used for receiving the training set and training the training set to obtain a neural network model, then obtaining a test set, verifying the neural network model through the test set to obtain a detection result, and adjusting parameters of the neural network model through the detection result to obtain a trained and optimized FPCA image detection model; the FPCA image is detected by an FPCA image detection model, and position information and type information of defects of the FPCA are output.
As a preferred embodiment of the invention, the specific training process of the sample training module is as follows: the sample images in the training set are normalized and then enhanced, generating training samples by rotating the images and adjusting saturation, exposure and hue; prior boxes are generated by clustering the training-set bounding boxes (bboxes) with a k-means clustering algorithm, and training proceeds against a loss function.
As a preferred embodiment of the present invention, a sample statistics unit and a sample analysis unit are further disposed in the sample training module;
the sample statistics unit counts the sample images in the sample set. When the number of sample images exceeds a set number threshold, the threshold is subtracted from the number of sample images to obtain a surplus count, and that many sample images are selected and marked as split images. The surplus count is converted into a selection number at a set proportion; dividing the surplus count by the selection number and rounding gives a classification number. The split images are then grouped into split groups, each consisting of the classification number of split images, and the split groups are sent to the sample analysis unit;
the sample analysis unit sends the split groups to analysis ends for sample training. The specific process is as follows: a split signal is sent to the registered terminals to acquire their terminal data, which comprise the utilization and speed information of each terminal's processor; the terminal data are analysed to obtain each terminal's utilization value and speed stability value. The terminal's model and obtuse-acute slope value are obtained; a corresponding preset value is assigned to every model, and the terminal's model is matched against them to obtain its preset value. The utilization value, speed stability value, obtuse-acute slope value and preset value are normalized, and their normalized values are marked SL1, SL2, SL3 and SL4. The sample training value YF of a registered terminal is obtained with the formula YF = SL1 × ds1 + SL2 × ds2 + SL3 × ds3 + SL4 × ds4, where ds1, ds2, ds3 and ds4 are preset weight factors with values 1.71, 2.45, 3.6 and 1.6 respectively. The registered terminals are ranked by sample training value from large to small; terminals equal in number to the classification number are selected from the front of the ranking and marked as analysis ends. The split groups are sent to the analysis ends; on receipt, each analysis end normalizes the sample images in its split group, enhances them, generates training samples by rotating the images and adjusting saturation, exposure and hue, and feeds them back to the sample training module.
As a preferred embodiment of the present invention, the specific process of analyzing the data at the opposite end of the sample analysis unit is:
the utilization readings are ordered by time; the neighbor difference between each two adjacent readings is calculated, and all neighbor differences are summed and averaged to give the neighbor mean. All readings are summed and averaged to give the utilization mean. The neighbor mean and utilization mean are normalized and their values marked JL1 and JL2 respectively. With preset weight factors fq1 and fq2, the utilization value is obtained as SL1 = JL1 × fq1 + JL2 × fq2;
the speed values in the speed information are ordered by time and marked SD1, SD2, …, SDn in turn. All speeds are summed and averaged to give the speed mean, marked SDP. The speed stability value SL2 is then obtained with a preset formula combining the speed values and SDP, weighted by preset factors fq3 and fq4.
As a preferred implementation mode of the invention, a feedback acquisition unit and a feedback analysis unit are also arranged in the sample training module;
the feedback acquisition unit acquires the first time, at which an analysis end receives its split group, and the second time, at which it feeds back the training samples, and sends both to the feedback analysis unit;
the feedback analysis unit receives and analyses the first and second times of the analysis ends. The specific processing is as follows:
the time difference between the first and second times gives a single-feedback duration of the analysis end, and the current time is marked as that duration's record time. All single-feedback durations of the analysis end are ordered by record time. A rectangular coordinate system is established with the record time as the horizontal coordinate and the single-feedback duration as the vertical coordinate. Adjacent duration points are connected to form duration lines and their slopes calculated. If a duration line makes an acute angle with the abscissa, its slope is marked as a sharp slope; if an obtuse angle, as a blunt slope. All sharp-slope values are summed to a sharp-slope total, and all blunt-slope values to a blunt-slope total; dividing the blunt-slope total by the sharp-slope total gives the obtuse-acute slope value SL3.
As a preferred embodiment of the invention, the registration login module in the server lets a user submit the terminal information of a computer terminal to the server for storage; a manager then classifies users into general users and authorized users. The server communicates with the computer terminals of successfully registered users; a general user's terminal is marked as a registered terminal, and an authorized user's terminal as a processing end. The authorized users are the personnel who perform the manual annotation.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention offers higher detection efficiency, saves labor and enterprise cost, raises the automation level of the factory, improves overall production efficiency and makes product quality more stable; compared with traditional image processing, it adapts to complex and variable FPCA products, generalizes well, and performs well on small defects with complex features.
2. According to the invention, the sample analysis unit analyses the terminal data of the registered terminals, combining each terminal's utilization value, speed stability value, obtuse-acute slope value and preset value into its sample training value; registered terminals are selected by sample training value to process the sample images, which reduces the training load on the sample training module and improves model training efficiency.
Drawings
The present invention is further described below with reference to the accompanying drawings for the convenience of understanding by those skilled in the art.
Fig. 1 is a functional block diagram of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, an FPCA appearance defect visual detection system includes a sample collection module, a server, a sample processing module and a sample training module;
the sample acquisition module acquires RGB color images of the FPCA to obtain an FPCA whole image and sends the FPCA whole image to a server for storage;
the sample processing module sends the collected FPCA whole images to the processing end, where personnel annotate them manually: each defect position is framed with a rectangular region and named with its defect type; unannotated regions are defect-free regions;
the method comprises the steps of receiving an FPCA whole image after labeling, marking the FPCA whole image as a sample image, marking all the sample images as a sample set, dividing the sample set into a training set and a testing set according to a proportion, and sending the training set to a sample training module;
the sample training module receives the training set, normalizes its sample images, enhances them, and generates training samples by rotating the images and adjusting saturation, exposure and hue; prior boxes are generated by clustering the training-set bounding boxes with a k-means clustering algorithm, a neural network model is trained against a loss function, and the model is sent to the server;
then obtaining a test set, verifying the neural network model through the test set to obtain a detection result, and regulating parameters of the neural network model through the detection result to obtain a trained and optimized FPCA image detection model; detecting an FPCA image through an FPCA image detection model, and outputting position information and type information of defects of the FPCA;
the sample training module comprises a sample statistics unit, a sample analysis unit, a feedback acquisition unit and a feedback analysis unit;
the sample statistics unit counts the sample images in the sample set. When the number of sample images exceeds a set number threshold, the threshold is subtracted from the number of sample images to obtain a surplus count, and that many sample images are selected and marked as split images. The surplus count is converted into a selection number at a set proportion; dividing the surplus count by the selection number and rounding gives a classification number. The split images are then grouped into split groups, each consisting of the classification number of split images, and the split groups are sent to the sample analysis unit;
the sample analysis unit sends the split groups to analysis ends for sample training. The specific process is as follows: a split signal is sent to the registered terminals to acquire their terminal data, which comprise the utilization and speed information of each terminal's processor; the terminal data are analysed to obtain each terminal's utilization value and speed stability value, specifically as follows:
the utilization readings are ordered by time; the neighbor difference between each two adjacent readings is calculated, and all neighbor differences are summed and averaged to give the neighbor mean. All readings are summed and averaged to give the utilization mean. The neighbor mean and utilization mean are normalized and their values marked JL1 and JL2 respectively. With preset weight factors fq1 and fq2, the utilization value is obtained as SL1 = JL1 × fq1 + JL2 × fq2; fq1 and fq2 take the values 1.2 and 1.6;
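As an illustrative sketch (not part of the claimed embodiment), the SL1 computation above can be expressed as follows. The function name is hypothetical, the neighbor differences are taken as absolute values, and the normalization step is omitted for brevity:

```python
def utilization_value(utilizations, fq1=1.2, fq2=1.6):
    """Utilization value SL1 = JL1*fq1 + JL2*fq2 from a time-ordered
    list of processor utilization readings (normalization omitted)."""
    # JL1: mean of the differences between adjacent readings
    # (absolute values assumed, so rises and falls both count)
    diffs = [abs(b - a) for a, b in zip(utilizations, utilizations[1:])]
    jl1 = sum(diffs) / len(diffs)
    # JL2: plain mean of all readings
    jl2 = sum(utilizations) / len(utilizations)
    return jl1 * fq1 + jl2 * fq2
```

For a perfectly steady processor, JL1 is zero and SL1 reduces to the weighted utilization mean alone.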
the speed values in the speed information are ordered by time and marked SD1, SD2, …, SDn in turn. All speeds are summed and averaged to give the speed mean, marked SDP. The speed stability value SL2 is then obtained with a preset formula combining the speed values and SDP, weighted by preset factors fq3 and fq4; fq3 and fq4 take the values 0.74 and 0.26;
the terminal's model and obtuse-acute slope value are obtained; a corresponding preset value is assigned to every model, and the terminal's model is matched against them to obtain its preset value. The utilization value, speed stability value, obtuse-acute slope value and preset value of the registered terminal are normalized and their normalized values marked SL1, SL2, SL3 and SL4 in turn;
the sample training value YF of a registered terminal is obtained with the formula YF = SL1 × ds1 + SL2 × ds2 + SL3 × ds3 + SL4 × ds4, where ds1, ds2, ds3 and ds4 are preset weight factors with values 1.71, 2.45, 3.6 and 1.6 respectively. The registered terminals are ranked by sample training value from large to small; terminals equal in number to the classification number are selected from the front of the ranking and marked as analysis ends. The split groups are sent to the analysis ends; on receipt, each analysis end normalizes the sample images in its split group, enhances them, generates training samples by rotating the images and adjusting saturation, exposure and hue, and feeds them back to the sample training module;
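The weighted scoring and ranking described above can be sketched as follows. This is an illustrative reading of the patent text, not the claimed implementation; function names and the data layout are assumptions:

```python
def training_value(sl, ds=(1.71, 2.45, 3.6, 1.6)):
    """Sample training value YF = SL1*ds1 + SL2*ds2 + SL3*ds3 + SL4*ds4
    for one registered terminal, given its normalized (SL1..SL4)."""
    return sum(s * d for s, d in zip(sl, ds))

def pick_analysis_ends(terminals, k):
    """Rank registered terminals by YF, descending, and keep the top k
    as analysis ends.  `terminals` maps a terminal id to its
    (SL1, SL2, SL3, SL4) tuple."""
    ranked = sorted(terminals,
                    key=lambda t: training_value(terminals[t]),
                    reverse=True)
    return ranked[:k]
```

With the stated weights, the obtuse-acute slope value (ds3 = 3.6) dominates the score, i.e. terminals with the most stable feedback history are preferred.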
the sample analysis unit thus analyses the terminal data of the registered terminals, combines each terminal's utilization value, speed stability value, obtuse-acute slope value and preset value into its sample training value, and selects registered terminals by sample training value to process the sample images, reducing the training load on the sample training module and improving model training efficiency;
the feedback acquisition unit acquires the first time, at which an analysis end receives its split group, and the second time, at which it feeds back the training samples, and sends both to the feedback analysis unit;
the feedback analysis unit receives and analyses the first and second times of the analysis ends. The specific processing is as follows: the time difference between the first and second times gives a single-feedback duration of the analysis end, and the current time is marked as that duration's record time. All single-feedback durations of the analysis end are ordered by record time. A rectangular coordinate system is established with the record time as the horizontal coordinate and the single-feedback duration as the vertical coordinate. Adjacent duration points are connected to form duration lines and their slopes calculated. If a duration line makes an acute angle with the abscissa, its slope is marked as a sharp slope; if an obtuse angle, as a blunt slope. All sharp-slope values are summed to a sharp-slope total, and all blunt-slope values to a blunt-slope total; dividing the blunt-slope total by the sharp-slope total gives the obtuse-acute slope value SL3;
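A minimal sketch of the slope analysis above, under two stated assumptions: record times are evenly spaced (unit steps on the x-axis), and slope magnitudes, not signed values, are summed. The function name is hypothetical:

```python
def obtuse_acute_value(durations):
    """SL3 from a time-ordered list of single-feedback durations.
    A rising segment makes an acute angle with the x-axis (sharp
    slope); a falling segment makes an obtuse angle (blunt slope).
    SL3 = blunt-slope total / sharp-slope total."""
    sharp = 0.0
    blunt = 0.0
    for a, b in zip(durations, durations[1:]):
        slope = b - a          # x-spacing assumed to be 1 record step
        if slope > 0:
            sharp += slope
        elif slope < 0:
            blunt += abs(slope)  # assumption: magnitudes are summed
    return blunt / sharp if sharp else float("inf")
```

A small SL3 means feedback durations are mostly falling (the analysis end is speeding up); a large SL3 means they are mostly rising.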
the server comprises a registration login module and a database;
the registration login module lets a user submit the terminal information of a computer terminal to the database for storage; a manager then classifies users into general users and authorized users. The server communicates with the computer terminals of successfully registered users; a general user's terminal is marked as a registered terminal, and an authorized user's terminal as a processing end. The authorized users are the personnel who perform the manual annotation;
when the invention is used, FPCA defect samples are collected; RGB color images of the FPCA are then acquired and stored;
the FPCA whole images are annotated manually: each defect position is framed with a rectangular region and named with its defect type; unannotated regions are defect-free regions.
The sample set is divided into a training set and a test set at a ratio of 9:1. The annotated training set is trained on a GPU with the darknet framework and a YOLO network to obtain a trained neural network model. The model is then tested and verified with the test set; according to the observed detection performance, the network parameters are adjusted, and the model's precision and recall are evaluated comprehensively, so as to train and optimize the model.
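A sketch of the 9:1 split described above (illustrative only; the patent does not specify shuffling or a seed, both of which are assumptions here):

```python
import random

def split_sample_set(samples, train_ratio=0.9, seed=0):
    """Shuffle the labelled sample images and split them into a
    training set and a test set at the given ratio (9:1 by default)."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = samples[:]          # leave the caller's list untouched
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

Shuffling before splitting keeps both sets representative when the samples were collected in production-line order.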
At the start of training the images are normalized: every input image is scaled to 416 × 416, so an image of any size can be input and the whole network is more flexible to use.
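The fixed-size step can be sketched with a plain nearest-neighbour resize (images as lists of pixel rows). This is a stand-in for illustration; a real pipeline would use OpenCV or PIL, and YOLO implementations often letterbox to preserve aspect ratio rather than stretch:

```python
def resize_to_416(image):
    """Nearest-neighbour resize of an image (list of rows of pixels)
    to the fixed 416x416 network input size."""
    target = 416
    h, w = len(image), len(image[0])
    # map each target pixel back to its source pixel
    return [[image[y * h // target][x * w // target]
             for x in range(target)]
            for y in range(target)]
```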
Training also begins with image data enhancement: more training samples are generated by rotating the images and adjusting saturation, exposure and hue, which enriches the training data and improves the training result.
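The colour jitter part of that enhancement can be sketched per pixel with the standard-library `colorsys` module. The jitter ranges and function names are assumptions, and rotation is omitted for brevity:

```python
import colorsys
import random

def jitter_pixel(rgb, sat=1.0, exposure=1.0, hue_shift=0.0):
    """Apply saturation / exposure / hue adjustments to one RGB pixel
    (channels in [0, 1]) via an HSV round trip."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + hue_shift) % 1.0      # hue wraps around the colour wheel
    s = min(s * sat, 1.0)
    v = min(v * exposure, 1.0)     # "exposure" scales brightness
    return colorsys.hsv_to_rgb(h, s, v)

def augment(image, rng=random.Random(0)):
    """Generate one colour-jittered copy of an image (rows of RGB
    pixels); ranges are illustrative, not from the patent."""
    sat = rng.uniform(0.7, 1.3)
    exp = rng.uniform(0.7, 1.3)
    hue = rng.uniform(-0.05, 0.05)
    return [[jitter_pixel(p, sat, exp, hue) for p in row] for row in image]
```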
During training, a k-means clustering algorithm is applied to the training-set bounding boxes to generate suitable prior boxes, so that the training process converges faster.
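The anchor-clustering step can be sketched as k-means over (width, height) pairs. Using 1 − IoU as the distance is the common choice for YOLO-style anchors and is an assumption here, since the patent only names k-means:

```python
import random

def iou_wh(box, anchor):
    """IoU of two (w, h) boxes aligned at a common corner."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=50, seed=0):
    """Cluster training-set bbox (w, h) pairs with 1 - IoU as the
    distance to pick k prior boxes, sorted by area."""
    rng = random.Random(seed)
    anchors = rng.sample(boxes, k)          # initial centres from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:                     # assign each box to its best anchor
            best = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            clusters[best].append(b)
        anchors = [                         # recompute centres as cluster means
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else anchors[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(anchors, key=lambda a: a[0] * a[1])
```

Sorting by area matches the later step of distributing the 9 centres over 3 scales by size.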
The loss function used in the training process is the YOLO loss, where 1_i^obj denotes whether an object appears in grid cell i, and 1_ij^obj denotes that the j-th bounding-box predictor in grid cell i is "responsible" for that prediction;
for the first iteration cycles, the learning rate is raised slowly from 10^-3 to 10^-2; if training starts at a high learning rate, the model tends to diverge because of unstable gradients. Training then continues for 75 iteration cycles at a learning rate of 10^-2, then 30 iteration cycles at 10^-3, and finally 30 iteration cycles at 10^-4;
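The schedule above can be sketched as a piecewise function of the epoch index. The warm-up length is not stated in the text and is an assumed value here:

```python
def learning_rate(epoch, warmup=5):
    """Piecewise learning-rate schedule: linear warm-up from 1e-3 to
    1e-2, then 75 epochs at 1e-2, 30 at 1e-3, 30 at 1e-4.
    `warmup` is an assumption; the patent does not give its length."""
    if epoch < warmup:
        return 1e-3 + (1e-2 - 1e-3) * epoch / warmup  # linear ramp
    if epoch < warmup + 75:
        return 1e-2
    if epoch < warmup + 105:
        return 1e-3
    return 1e-4
```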
Multi-scale output is used, with 3 boxes predicted at each scale; the anchors are designed by clustering, giving 9 cluster centers that are distributed evenly over the 3 scales by size. Scale 1: several convolution layers are added after the base network, and box information is output. Scale 2: the output of the penultimate convolution layer of scale 1 is upsampled (×2) and added to the last 16×16 feature map, and box information is output after several further convolutions; this scale is twice the size of scale 1. Scale 3: similar to scale 2, but using a 32×32 feature map. The method therefore works well on defects of smaller size;
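Distributing the 9 cluster centers evenly over the 3 scales by size can be sketched as below; the dictionary keys are illustrative names, not terms from the patent (the smallest anchors go to the finest feature map, which detects the smallest defects):

```python
def split_anchors(anchors):
    """Sort 9 (w, h) anchors by area and assign three to each detection
    scale: smallest anchors to the finest feature map, largest to the
    coarsest."""
    assert len(anchors) == 9
    ordered = sorted(anchors, key=lambda wh: wh[0] * wh[1])
    return {
        "scale3_fine":   ordered[0:3],  # finest map (e.g. 32x32): small defects
        "scale2_mid":    ordered[3:6],
        "scale1_coarse": ordered[6:9],  # coarsest map: large defects
    }

groups = split_anchors([(w, w) for w in range(9, 0, -1)])
print(groups["scale3_fine"])
```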
The trained model is used to detect FPCA images and to output the position information and type information of FPCA defects;
Compared with manual visual inspection, the invention is more efficient, saves labor and enterprise cost, raises the automation level of the factory, improves overall production efficiency, and yields products of more stable quality. Compared with traditional image processing, it adapts to complex and variable FPCA products, has good universality, and handles defects of smaller size and more complex features well;
the preferred embodiments of the invention disclosed above are intended only to assist in the explanation of the invention. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (3)

1. A visual inspection system for FPCA appearance defects, comprising a sample acquisition module, a server, a sample processing module and a sample training module; characterized in that the sample acquisition module acquires RGB color images of the FPCA to obtain a whole FPCA image and sends it to the server for storage;
the sample processing module is used for sending the collected whole FPCA image to the processing end for manual annotation, receiving the annotated whole FPCA image and marking it as a sample image, marking all sample images as a sample set, dividing the sample set into a training set and a test set in proportion, and sending the training set to the sample training module;
the sample training module is used for receiving the training set and training the training set to obtain a neural network model, then obtaining a test set, verifying the neural network model through the test set to obtain a detection result, and adjusting parameters of the neural network model through the detection result to obtain a trained and optimized FPCA image detection model; detecting an FPCA image through an FPCA image detection model, and outputting position information and type information of defects of the FPCA;
the specific training process of the sample training module is as follows: normalize the sample images in the training set, then perform image data augmentation on the normalized sample images, generating training samples by rotating and by adjusting saturation, exposure and hue; apply a k-means clustering algorithm to the training-set bounding boxes (bbox) to generate prior boxes, and train with a loss function;
the sample training module is also internally provided with a sample statistics unit and a sample analysis unit;
the sample statistics unit is used for counting the sample images in the sample set; when the number of sample images is greater than a set number threshold, the number threshold is subtracted from the number of sample images to obtain an excess number, and sample images corresponding to the excess number are selected and marked as partition images; the excess number is converted into a selection number according to a certain proportion; the excess number is divided by the selection number and rounded to obtain a classification number, and the partition images are grouped into a plurality of partition groups, each consisting of the classification number of partition images; the partition groups are sent to the sample analysis unit;
the sample analysis unit is used for sending the partition groups to analysis ends for sample training, and the specific sending process is as follows: send a partition signal to the registration ends to acquire their terminal data, the terminal data including the utilization rate and speed information of each registration end's processor; analyze the terminal data to obtain a utilization value and a speed stability value for each registration end; obtain the model and the blunt-sharp slope value of each registration end, set a corresponding preset value for every model, and match the model of the registration end against all models to obtain the corresponding preset value; normalize the utilization value, speed stability value, blunt-sharp slope value and preset value of each registration end to obtain its sample training value; sort the registration ends by sample training value from large to small, select from the front as many registration ends as the classification number, and mark them as analysis ends; send the partition groups to the analysis ends; after an analysis end receives a partition group, it normalizes the sample images in the group, performs image data augmentation on the normalized sample images, generates training samples by rotating and by adjusting saturation, exposure and hue, and feeds them back to the sample training module;
the specific process by which the sample analysis unit analyzes the terminal data is as follows:
sort the utilization rates in chronological order, compute the inter-neighbor differences (the difference between each pair of adjacent utilization rates), sum all inter-neighbor differences and take their mean to obtain the inter-neighbor average; sum all utilization rates and take their mean to obtain the utilization-rate average; normalize the inter-neighbor average and the utilization-rate average to obtain the utilization value;
sort the speed values in the speed information in chronological order, sum all speeds in the speed information and take their mean to obtain the speed average; analyze the speed average against the speed values to obtain the speed stability value.
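The utilization-rate analysis in claim 1 can be sketched as follows. The claim only says the two averages are "normalized" into one value, so the equal weighting and the inversion (lower jitter and lower load scoring higher) are assumptions:

```python
def utilization_value(rates, w1=0.5, w2=0.5):
    """Combine the inter-neighbor average (mean absolute difference of
    adjacent utilization samples) with the plain mean utilization into a
    single score; rates are fractions in [0, 1], at least two samples."""
    diffs = [abs(b - a) for a, b in zip(rates, rates[1:])]
    neighbor_avg = sum(diffs) / len(diffs)  # inter-neighbor average
    rate_avg = sum(rates) / len(rates)      # utilization-rate average
    # invert both so a steadier, less-loaded processor scores higher
    return w1 * (1.0 - neighbor_avg) + w2 * (1.0 - rate_avg)

print(utilization_value([0.2, 0.4, 0.3]))
```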
2. The visual inspection system of FPCA appearance defects according to claim 1, wherein a feedback acquisition unit and a feedback analysis unit are further provided in the sample training module;
the feedback acquisition unit is used for acquiring the first moment at which an analysis end receives a partition group and the second moment at which it feeds back the training samples, and sending the first and second moments to the feedback analysis unit;
the feedback analysis unit is used for receiving and analyzing the first and second moments of the analysis end, and the specific processing is as follows:
calculate the time difference between the first and second moments to obtain a single feedback duration for the analysis end, and mark the current time as the recorded feedback time of that single feedback duration; sort all single feedback durations of the analysis end in order of recorded feedback time; establish a rectangular coordinate system with the recorded feedback time as the abscissa and the value of the single feedback duration as the ordinate; connect the points of adjacent single feedback durations to obtain duration lines and calculate their slopes; if the angle between a duration line and the abscissa is acute, mark its slope as a sharp slope; if the angle is obtuse, mark its slope as a blunt slope; sum the values of all sharp slopes to obtain a sharp-slope total, sum the values of all blunt slopes to obtain a blunt-slope total, and divide the blunt-slope total by the sharp-slope total to obtain the blunt-sharp slope value.
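The slope analysis of claim 2 can be sketched as follows, taking a unit spacing between consecutive recorded feedback times (an assumption, since the claim does not fix the abscissa scale) so that a positive slope corresponds to an acute angle with the abscissa. Blunt slopes are summed by magnitude here so the ratio stays positive:

```python
def blunt_sharp_slope(durations):
    """Ratio of summed descending ('blunt', obtuse angle) slope
    magnitudes to summed ascending ('sharp', acute angle) slopes between
    consecutive single-feedback durations; 0.0 if nothing ascends."""
    sharp = blunt = 0.0
    for a, b in zip(durations, durations[1:]):
        slope = b - a  # unit x-spacing between consecutive records
        if slope > 0:
            sharp += slope
        elif slope < 0:
            blunt += abs(slope)
    return blunt / sharp if sharp else 0.0

print(blunt_sharp_slope([3.0, 1.0, 2.0, 4.0]))
```

A larger value means feedback durations mostly fell over time, i.e. the analysis end has been getting faster.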
3. The visual inspection system for FPCA appearance defects according to claim 1, characterized in that a registration and login module is provided in the server; the registration and login module is used for a user to submit terminal information of a computer terminal through that terminal and send it to the server for storage; meanwhile, an administrator classifies users into general users and privileged users; the server is communicatively connected with the computer terminals of successfully registered users, the computer terminal of a general user being marked as a registration end and the computer terminal of a privileged user being marked as a processing end; wherein the privileged user is the person who performs the manual annotation.
CN202111352836.4A 2021-11-16 2021-11-16 FPCA appearance defect visual detection system Active CN114066848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111352836.4A CN114066848B (en) 2021-11-16 2021-11-16 FPCA appearance defect visual detection system


Publications (2)

Publication Number Publication Date
CN114066848A CN114066848A (en) 2022-02-18
CN114066848B true CN114066848B (en) 2024-03-22

Family

ID=80272694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111352836.4A Active CN114066848B (en) 2021-11-16 2021-11-16 FPCA appearance defect visual detection system

Country Status (1)

Country Link
CN (1) CN114066848B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549512A (en) * 2022-03-01 2022-05-27 成都数之联科技股份有限公司 Circuit board defect detection method, device, equipment and medium
CN114550920B (en) * 2022-03-09 2023-02-07 曜立科技(北京)有限公司 Valve state detection diagnosis decision system based on data analysis
CN117782189A (en) * 2022-09-20 2024-03-29 无锡芯享信息科技有限公司 Automatic environment real-time detection system for semiconductor manufacturing factory
CN117030724B (en) * 2023-10-09 2023-12-08 诺比侃人工智能科技(成都)股份有限公司 Multi-mode industrial defect analysis method and system based on deep learning

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105982368A (en) * 2015-02-05 2016-10-05 陈舟顺 Children back clothes
CN109064461A (en) * 2018-08-06 2018-12-21 长沙理工大学 A kind of detection method of surface flaw of steel rail based on deep learning network
CN110826416A (en) * 2019-10-11 2020-02-21 佛山科学技术学院 Bathroom ceramic surface defect detection method and device based on deep learning
CN111862067A (en) * 2020-07-28 2020-10-30 中山佳维电子有限公司 Welding defect detection method and device, electronic equipment and storage medium
CN112580540A (en) * 2020-12-23 2021-03-30 安徽高哲信息技术有限公司 Artificial intelligent crop processing system and method
CN112926685A (en) * 2021-03-30 2021-06-08 济南大学 Industrial steel oxidation zone target detection method, system and equipment
CN113592866A (en) * 2021-09-29 2021-11-02 西安邮电大学 Semiconductor lead frame exposure defect detection method
CN114187505A (en) * 2021-11-15 2022-03-15 南方电网科学研究院有限责任公司 Detection method and device for falling-off of damper of power transmission line, medium and terminal equipment

Also Published As

Publication number Publication date
CN114066848A (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN114066848B (en) FPCA appearance defect visual detection system
CN109239102B (en) CNN-based flexible circuit board appearance defect detection method
CN109064454A (en) Product defects detection method and system
CN109584227A (en) A kind of quality of welding spot detection method and its realization system based on deep learning algorithm of target detection
CN108597053A (en) Shaft tower and channel targets identification based on image data and neural network and defect diagnostic method
CN109767422A (en) Pipe detection recognition methods, storage medium and robot based on deep learning
CN108711148B (en) Tire defect intelligent detection method based on deep learning
CN111680603A (en) Dish detection and identification method
CN112949517B (en) Plant stomata density and opening degree identification method and system based on deep migration learning
CN116188475A (en) Intelligent control method, system and medium for automatic optical detection of appearance defects
CN109472280A (en) A kind of method, storage medium and electronic equipment updating species identification model library
CN111161237A (en) Fruit and vegetable surface quality detection method, storage medium and sorting device thereof
CN110427943A (en) A kind of intelligent electric meter technique for partitioning based on R-CNN
CN114359235A (en) Wood surface defect detection method based on improved YOLOv5l network
CN114359199A (en) Fish counting method, device, equipment and medium based on deep learning
CN113158969A (en) Apple appearance defect identification system and method
CN111507249A (en) Transformer substation nest identification method based on target detection
CN110097603B (en) Fashionable image dominant hue analysis method
CN115019294A (en) Pointer instrument reading identification method and system
CN110555384A (en) Beef marbling automatic grading system and method based on image data
CN111243373A (en) Panoramic simulation teaching system
CN116486177A (en) Underwater target identification and classification method based on deep learning
CN116521917A (en) Picture screening method and device
CN115631488A (en) Jetson Nano-based fruit maturity nondestructive testing method and system
CN114005054A (en) AI intelligence system of grading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220921

Address after: 215000 No.36, Zhiying street, high tech Zone, Suzhou City, Jiangsu Province

Applicant after: Suzhou Shiqing Electronic Technology Co.,Ltd.

Address before: Room 1116-3, Suzhou Taihu science and Technology Industrial Park, No. 18, Longshan South Road, Guangfu Town, Wuzhong District, Suzhou, Jiangsu 215000

Applicant before: Suzhou Lihao Intelligent Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20240223

Address after: No. 36 Zhiying Street, Suzhou High tech Zone, Suzhou City, Jiangsu Province, 215000

Applicant after: Suzhou fast optical technology Co.,Ltd.

Country or region after: China

Address before: 215000 No.36, Zhiying street, high tech Zone, Suzhou City, Jiangsu Province

Applicant before: Suzhou Shiqing Electronic Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant