CN113838034B - Quick detection method for surface defects of candy package based on machine vision

Quick detection method for surface defects of candy package based on machine vision

Info

Publication number
CN113838034B
CN113838034B (application CN202111137495.9A)
Authority
CN
China
Prior art keywords
model
detection
image
defect
candy
Prior art date
Legal status
Active
Application number
CN202111137495.9A
Other languages
Chinese (zh)
Other versions
CN113838034A (en)
Inventor
杜世昌
Current Assignee
Dynamics Industrial Intelligent Technology Suzhou Co ltd
Original Assignee
Dynamics Industrial Intelligent Technology Suzhou Co ltd
Priority date
Filing date
Publication date
Application filed by Dynamics Industrial Intelligent Technology Suzhou Co ltd
Priority to CN202111137495.9A
Publication of CN113838034A
Application granted
Publication of CN113838034B

Classifications

    • G06T 7/0002 (Physics; Computing; Image data processing or generation, in general; Image analysis; Inspection of images, e.g. flaw detection)
    • G06F 18/2414 (Electric digital data processing; Pattern recognition; Classification techniques relating to the classification model based on distances to training or reference patterns; Distances to prototypes; Distances to cluster centroïds; Smoothing the distance, e.g. radial basis function networks [RBFN])
    • G06F 18/2415 (Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate)
    • G06N 3/08 (Computing arrangements based on biological models; Neural networks; Learning methods)
    • G06T 2207/10141 (Indexing scheme for image analysis or image enhancement; Image acquisition modality; Special mode during image acquisition)
    • G06T 2207/20081 (Indexing scheme for image analysis or image enhancement; Special algorithmic details; Training; Learning)
    • Y02P 90/30 (Climate change mitigation technologies in the production or processing of goods; Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation; Computing systems specially adapted for manufacturing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a machine-vision-based method for rapidly detecting candy packaging defects, which comprises the following steps: (1) image acquisition; (2) image processing, in which the image processing system runs a deep-learning-based rapid defect detection algorithm, an improved VGG16 deep learning model obtained by preprocessing a real candy data set from the production line and then training, evaluating and optimizing the model, and emits a defective-product signal when a defect is detected; (3) defective product rejection, in which the rejection system receives the defective-product signal and carries out the rejection operation. The invention realizes on-line detection and removal of candy packaging defects; the image acquisition rate, defect recognition rate and rejection rate all keep up with a production rate of 10 candies per second, so that defect detection efficiency and reliability are greatly improved while the low accuracy and high cost of manual inspection are overcome. The method is also easy to extend to on-line defect detection in other fields, such as on-line detection of packaging defects of products of the same type and on-line detection of mobile phone shell defects.

Description

Quick detection method for surface defects of candy package based on machine vision
Technical Field
The invention belongs to the fields of artificial intelligence and machine-vision-based rapid detection of product packaging defects, and particularly relates to a method for rapidly detecting surface defects of candy packages.
Background
On modern automatic production lines, candy packaging may carry surface defects that make the product unqualified; reducing the number of unqualified products that reach the market gives consumers a better sensory experience and increases the competitiveness of the product. The discharge speed of a candy packaging line is generally fast, reaching 0.1 s per piece, and during packaging, mechanical vibration or other environmental factors can cause quality problems such as package damage, empty bags and wrinkles. The traditional inspection method is to check the package visually by hand, but manual inspection suffers from low efficiency, low accuracy and high cost. These problems can be solved by on-line inspection based on the machine vision principle. Machine vision is an integrated technology covering image processing, mechanical engineering, control, electric light-source illumination, optical imaging, sensors, analog and digital video technology, and computer software and hardware (image enhancement and analysis algorithms, frame grabbers, I/O cards, etc.). A typical machine vision application system includes image capture, a light source system, an image digitizing module, a digital image processing module, an intelligent decision-making module and a machine control execution module.
On-line judgment of candy packaging defects must match the real-time production speed of the line while guaranteeing a certain accuracy, so on-line detection places requirements on the vision hardware, the software algorithm and the rejection mechanism. First, to achieve real-time detection, a trigger condition for photographing each product must be set, and the resolution and image transmission rate of the industrial camera must be considered. Image recognition uses a deep convolutional network. Deep learning learns the internal laws and representation levels of sample data, and the information obtained during learning greatly helps in interpreting data such as text, images and sound; its ultimate goal is to give machines the same analytical learning ability as humans, so that they can recognize text, image and sound data. Deep learning is a complex machine learning algorithm whose results in speech and image recognition far exceed those of earlier techniques.
The invention provides a machine-vision-based method for rapidly detecting candy packaging defects. The method is built around a deep-learning-based rapid defect detection algorithm with a per-candy detection time of less than 0.1 s, and, together with the matching hardware system, realizes on-line detection of candy packaging defects and rejection of defective products.
Disclosure of Invention
1. Object of the invention
The invention aims to provide a method for rapidly detecting candy packaging defects, realizing on-line detection and rejection of candies with defective packaging.
2. Technical solution adopted by the invention
The invention provides a machine-vision-based method for rapidly detecting candy packaging defects, which comprises the following steps:
step 1: image acquisition
A color area-array camera, illuminated by a planar shadowless light source, photographs the candy surface as it moves at high speed, and the acquired images are stored and buffered;
step 2: image processing
A deep-learning-based rapid defect detection algorithm reads the images and sends a defective-product signal to the rejection system when a candy package is found to be defective;
step 2.1: image preprocessing, namely converting data format, adjusting image size, dividing data set and enhancing data;
step 2.2: constructing a defect rapid detection algorithm and training a model, inputting the preprocessed data into a defect recognition network, and training a defect detection model to obtain a candy package defect classification result;
step 2.3: performing performance evaluation on the model obtained after training;
step 2.4: model optimization, and further optimizing the model by combining the result of the evaluation in the step 2.3;
step 3: defective product rejection
Upon receiving the defective-product signal, the rejection system reacts rapidly and flips the defective product out of the line.
Preferably, in step 1, a color area-array industrial camera is used, and different types of product packages can be inspected by adjusting its vertical position; the planar shadowless light source effectively eliminates the reflections produced by the aluminum-plastic packaging and improves the accuracy of defect detection and classification; an infrared through-beam sensor sensitively captures the passing candy and triggers photographing.
Preferably, in step 2.1, the data format conversion and resizing process converts the image data set into the data input format required by the Keras framework: the raw image format from the industrial camera is BYTE, and only after conversion into a Mat matrix can it be fed into the neural network; the directly captured image is 1000×1000×3 pixels, and to speed up image processing and recognition it is resized to 150×150×3 by image interpolation. The data set division splits the data into a training set and a test set at a ratio of 8:2. The data enhancement process rotates, translates, flips, scale-transforms, zooms and randomly crops the training images, enlarging the training set and improving the generalization ability of the model. A minimal sketch of the format conversion and resizing step is given below.
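By way of illustration only, the following sketch shows the conversion and resizing described above, assuming OpenCV and NumPy; the raw buffer layout and the bilinear interpolation mode are assumptions and are not fixed by the invention.

```python
import cv2
import numpy as np

def preprocess_frame(raw_bytes: bytes, width: int = 1000, height: int = 1000, channels: int = 3):
    """Convert a raw BYTE buffer from the camera into a 150x150x3 float array."""
    # BYTE buffer -> Mat-like NumPy array of shape (height, width, 3);
    # the pixel ordering is assumed here, the actual camera SDK may differ.
    img = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(height, width, channels)
    # Resize 1000x1000x3 -> 150x150x3 by image interpolation (bilinear assumed)
    img = cv2.resize(img, (150, 150), interpolation=cv2.INTER_LINEAR)
    # Scale pixel values to [0, 1] for the network input
    return img.astype(np.float32) / 255.0
```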
Preferably, in step 2.2, the input image is classified and detected by a deep learning neural network: the convolutional base of VGG16 is used as a pre-trained feature extraction network and fully connected layers are added as a classifier to form the defect recognition network. During model training, the backbone convolutional network is first frozen, the classifier is trained with the training set and the hyper-parameters are adjusted; the trained model is then loaded, the convolution block closest to the classifier is unfrozen, and the unfrozen convolution layers and the classifier are retrained with the training set to further improve the accuracy of the model; the final model is stored as a .h5 file in the model folder. The hyper-parameters include the learning rate, the batch size and the number of iterations. Training the VGG16 model is a process of weight updating, which involves the following activation function, loss function and optimizer. The ReLU activation function is used in the convolution layers, and its formula is:
f(x)=max(0,x)
where x is the output of the layer. The fully connected output layer uses the Softmax function to present the multi-class result in probability form:
Softmax(y) = e^(W_y·x) / Σ_{c=1}^{C} e^(W_c·x)
where the last layer of the VGG16 network outputs C neurons, W_y is the weight of the y-th neuron in the last layer and W_c is the weight of each neuron. The categorical cross-entropy function (categorical crossentropy) is selected as the loss function:
L = -(1/n)·Σ_{i=1}^{n} Σ_{m=1}^{M} ŷ_im·log(y_im)
where n is the number of samples, M is the number of classes, y_im is the predicted probability that sample i belongs to class m, and ŷ_im indicates whether sample i actually belongs to class m, taking the value 0 or 1. The cross-entropy function is a multi-output loss function, so it yields one loss value per output.
RMSprop is used as the optimizer; it limits oscillation in the vertical direction and lets the model converge rapidly. The algorithm equations are:
v_dW = β·v_dW + (1-β)·dW²
v_db = β·v_db + (1-β)·db²
W = W - α·dW/(√v_dW + ε),  b = b - α·db/(√v_db + ε)
where dW and db are the gradients on a mini-batch, v_dW and v_db are their exponentially weighted averages, β is the momentum value, set to 0.9, the parameter ε prevents the weight explosion and gradient blow-up that would occur when v_dW approaches 0, and α is the learning-rate hyper-parameter. An illustrative Keras sketch of this network and optimizer follows.
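The following is a minimal Keras sketch of such a defect recognition network: the VGG16 convolutional base as a frozen pre-trained feature extractor with a fully connected Softmax classifier, compiled with the RMSprop optimizer and the categorical cross-entropy loss. The five output classes and the learning rate follow the embodiment; the classifier width is an assumption made only for this sketch.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 5  # five candy package classes, as in the embodiment

# VGG16 convolutional base, pre-trained on ImageNet, used for feature extraction
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
conv_base.trainable = False  # backbone frozen for the first training phase

model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),            # fully connected classifier (width assumed)
    layers.Dense(NUM_CLASSES, activation="softmax"),  # Softmax multi-class output
])

model.compile(
    optimizer=optimizers.RMSprop(learning_rate=2e-5),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```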
Preferably, in step 2.3, the defect classification accuracy is evaluated on a test set that did not take part in training, and the final model should perform well on this test set. Specifically, precision and recall are used as evaluation indexes: precision measures the proportion of correct predictions among all results predicted as positive, Precision = TP/(TP+FP); recall measures how many of the actual positive examples are predicted as positive by the model, Recall = TP/(TP+FN); where TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives.
preferably, in step 2.3, the detection speed is evaluated, after the model needs to be stably operated for a period of time, the detection speed of a single picture is used as an evaluation index, and the output of candy per minute of a single machine in a production line is 500, so that the detection duration of the single picture needs to be lower than 0.1s; the model needs to be loaded when the model is operated for the first time, so that the single picture detection time of the model which is operated for the second time and is operated stably is used as an evaluation index for reasonably evaluating the good and bad of the model, and the model is acceptable if the evaluation index is lower than 0.1 s.
Preferably, in step 2.4, the evaluation indexes from step 2.3 are analyzed. Detection accuracy and detection speed are a pair of mutually constrained indexes, so the detection accuracy must be improved on the premise of guaranteeing the detection speed, in order to satisfy both the real-time requirement and a high detection accuracy.
A candy packaging defect rapid detection device based on machine vision comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the method steps when executing the computer program.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the method steps.
3. The invention has the beneficial effects that
(1) The invention designs a sensor-triggered on-line photographing vision hardware system: a color area-array industrial camera clearly captures images of the candy surface moving at high speed, a planar shadowless light source effectively eliminates reflections from the aluminum-plastic packaging, and detection and rejection can match the actual production speed of 2 m/s of the candy production line.
(2) To better extract the defect features of the candy pictures, a classifier composed of two fully connected layers is added on top of the VGG16 convolutional base used as the feature extraction network; the classifier is trained with the candy picture training set, some convolution blocks are then unfrozen, and the model is fine-tuned with the same candy training set, giving a model better suited to candy defect classification.
(3) The data enhancement technique used in the invention enriches the distribution of the training data, improves the generalization ability and robustness of the model, prevents over-fitting, and solves the problem of poor model performance caused by the overall shortage of defective candy samples.
(4) Software and hardware exchange data, detection and rejection are automatic, and manpower is freed. Defective products can be traced through the user interaction interface, their generation is reduced at the source, missed and false detections are avoided, the labor intensity of operators is reduced, and the degree of automation of the production line is improved.
Drawings
Fig. 1 is a schematic diagram of a method for detecting defects in a candy package.
Fig. 2 is a schematic diagram of a structure of a candy package defect detecting apparatus.
FIG. 3 is a flow chart of the defect detection algorithm construction.
FIG. 4 is a diagram of a defect detection network.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of the present invention.
Examples of the present invention will be described in further detail below with reference to the accompanying drawings.
Example 1
Referring to fig. 1 and 2, a method for quickly detecting defects of candy packages based on machine vision is implemented as follows:
step 1: and (5) image acquisition.
Specifically, after being packaged on the production line, the candy is carried by the conveyor belt into the image acquisition unit; when it reaches the position directly below the planar shadowless light source it triggers the infrared sensor, which sends a signal to the industrial camera to trigger photographing, and the image is stored in the camera memory.
Step 2: and (5) image processing.
After receiving the picture from the processor memory, the image processing software automatically makes a judgment; if the picture is judged unqualified, the software rapidly transmits the unqualified signal to the rejection mechanism.
Step 3: and (5) removing defective products.
After receiving the defective-product signal, the PLC transmits a signal to the flipping device and controls it to remove the defective products, and the product flow then enters the next working procedure.
Referring to fig. 3, the core of the present invention is a fast detection algorithm for package defects based on deep learning, comprising the following steps:
step 2.1: and (5) preprocessing an image.
Specifically, the file path of the acquired images is taken as a parameter, and after data augmentation and normalization a batch data input format for the Keras framework is obtained; the input pictures are resized to 150×150×3 and the batch size is 10. The data set was divided into a training set of 400 images (80 for each of the 5 classes) and a test set of 100 images (20 for each of the 5 classes). Data enhancement operations such as rotation, translation, flipping, scale transformation, zooming and cropping were applied randomly to the training images, giving a total of 1800 training images after enhancement. A minimal sketch of this input pipeline is given below.
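The sketch below assumes the images are arranged in one sub-folder per class under hypothetical data/train and data/test directories and uses the Keras ImageDataGenerator; the individual augmentation ranges are assumptions, not values given by the invention.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training generator: normalization plus the random augmentation operations listed above
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    zoom_range=0.2,
)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)   # no augmentation on the test set

train_generator = train_datagen.flow_from_directory(
    "data/train", target_size=(150, 150), batch_size=10, class_mode="categorical")
test_generator = test_datagen.flow_from_directory(
    "data/test", target_size=(150, 150), batch_size=10, class_mode="categorical")
```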
Step 2.2: and (5) training a defect identification model.
The defect recognition network of the invention runs on a Windows operating system, is programmed in Python, and uses the Keras deep learning framework. First, fully connected layers (the classifier) are added on top of the VGG16 convolutional layers, used as a pre-trained model, to form the defect detection network shown in fig. 4, and the deep learning model is trained. Specifically, features are extracted with the VGG16 backbone convolutional network, and the classifier is trained for the first time on the candy defect training set while the hyper-parameters are adjusted: the learning rate is set to 0.00002, the batch size to 10 and the number of iterations to 20. Then convolution block 5 of the backbone network, the block closest to the classifier, is unfrozen, and the classifier together with the unfrozen convolution layers is fine-tuned with the same training set. The trained defect classification model is stored as a .h5 file in the model folder. The two training phases can be sketched as follows.
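A minimal sketch of the two training phases, continuing the hypothetical model, conv_base and data generator objects from the earlier sketches (the fine-tuning learning rate is an assumption), is:

```python
from tensorflow.keras import optimizers

# Phase 1: backbone frozen, train only the classifier (lr 0.00002, batch size 10, 20 iterations)
conv_base.trainable = False
model.compile(optimizer=optimizers.RMSprop(learning_rate=2e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_generator, epochs=20, validation_data=test_generator)

# Phase 2: unfreeze convolution block 5 (closest to the classifier) and fine-tune it with the classifier
conv_base.trainable = True
for layer in conv_base.layers:
    layer.trainable = layer.name.startswith("block5")
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),   # smaller rate assumed for fine-tuning
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_generator, epochs=20, validation_data=test_generator)

model.save("defect_model.h5")   # stored as a .h5 file in the model folder
```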
Step 2.3: the trained models were evaluated on a candy defect test set. For the detection precision, precision and recall are calculated as evaluation indexes, and TP (real example), FP (false positive example), FN (false negative example) and TN (true negative example) values are calculated first, and the precision and recall are calculated by the following formulas:
and after the model stably operates for a period of time, outputting the detection speed of the single picture as a performance evaluation index.
In this process, a total of 100 pictures were used as test sets, which included 40 normal products (front, back) and 60 defective products (continuous, bare, broken seal). The average accuracy is calculated as: 84.4% of the average recall ratio: 84.4%, wherein the recall rate of the dummy seal is lower, which is 63.5%. And the detection speed of the single picture output after the model stably operates is 0.25s.
Step 2.4: and (3) further optimizing the model by combining the evaluation result of the step 2.3. Firstly, the primary condition is that the real-time requirement of the production line is met, the dropout layer with the parameter of 0.5 is added in the full-connection layer, the model parameter is reduced, the model scale is reduced, the single prediction time is shortened from 0.25s to 0.07s, and the real-time requirement of the production line is met. Secondly, 50 virtual seal defect picture data sets are shot again under the condition that the average value of detection accuracy is lower due to low virtual seal recall rate, the data sets are enhanced to 250, a training set retraining model is added, the virtual seal recall rate is finally increased to 83.2%, the average recall rate is increased to 88.3%, and the detection accuracy requirement is met.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (8)

1. A quick detection method for a candy packaging defect based on machine vision is characterized by comprising the following steps:
step 1: image acquisition
A color area-array camera, illuminated by a planar shadowless light source, photographs the candy surface as it moves at high speed, and the acquired images are stored and buffered;
step 2: image processing
A deep-learning-based rapid defect detection algorithm reads the images and sends a defective-product signal to the rejection system when a candy package is found to be defective;
step 2.1: image preprocessing, namely converting data format, adjusting image size, dividing data set and enhancing data;
step 2.2: constructing a defect rapid detection algorithm and training a model, inputting the preprocessed data into a defect recognition network, and training a defect detection model to obtain a candy package defect classification result;
in step 2.2, the input image is classified and detected by a deep learning neural network: the convolutional base of VGG16 is used as a pre-trained feature extraction network and fully connected layers are added as a classifier to form the defect recognition network; during model training, the backbone convolutional network is first frozen, the classifier is trained with the training set and the hyper-parameters are adjusted; the trained model is then loaded, the convolution block closest to the classifier is unfrozen, and the unfrozen convolution layers and the classifier are retrained with the training set to further improve the accuracy of the model; the final model is stored as a .h5 file in the model folder; the hyper-parameters include the learning rate, the batch size and the number of iterations; training the VGG16 model is a process of weight updating, which involves the following activation function, loss function and optimizer; the ReLU activation function is used in the convolution layers, and its formula is:
f(x)=max(0,x)
where x is the output of the layer; the fully connected output layer uses the Softmax function to present the multi-class result in probability form:
Softmax(y) = e^(W_y·x) / Σ_{c=1}^{C} e^(W_c·x)
where the last layer of the VGG16 network outputs C neurons, W_y is the weight of the y-th neuron in the last layer and W_c is the weight of each neuron; the categorical cross-entropy function (categorical crossentropy) is selected as the loss function:
L = -(1/n)·Σ_{i=1}^{n} Σ_{m=1}^{M} ŷ_im·log(y_im)
where n is the number of samples, M is the number of classes, y_im is the predicted probability that sample i belongs to class m, and ŷ_im indicates whether sample i actually belongs to class m, taking the value 0 or 1; the cross-entropy function is a multi-output loss function, so it yields one loss value per output;
RMSprop is used as the optimizer; it limits oscillation in the vertical direction and lets the model converge rapidly; the algorithm equations are:
v_dW = β·v_dW + (1-β)·dW²
v_db = β·v_db + (1-β)·db²
W = W - α·dW/(√v_dW + ε),  b = b - α·db/(√v_db + ε)
where dW and db are the gradients on a mini-batch, v_dW and v_db are their exponentially weighted averages, β is the momentum value, set to 0.9, the parameter ε prevents the weight explosion and gradient blow-up that would occur when v_dW approaches 0, and α is the learning-rate hyper-parameter;
step 2.3: performing performance evaluation on the model obtained after training;
step 2.4: model optimization, and further optimizing the model by combining the result of the evaluation in the step 2.3;
step 3: defective product rejection
Upon receiving the defective-product signal, the rejection system reacts rapidly and flips the defective product out of the line.
2. The machine vision-based rapid inspection method for packaging defects of confectioneries of claim 1, wherein: in the step 1, a color area array industrial camera is adopted, and different types of product packages are detected by adjusting the vertical direction position; the infrared correlation sensor sensitively captures the passing of the candy and triggers photographing.
3. The machine vision-based rapid inspection method for packaging defects of confectioneries of claim 1, wherein: in step 2.1, the data format conversion and resizing process converts the image data set into the data input format required by the Keras framework: the raw image format from the industrial camera is BYTE, and only after conversion into a Mat matrix can it be fed into the neural network; the directly captured image is 1000×1000×3 pixels, and to speed up image processing and recognition it is resized to 150×150×3 by image interpolation; the data set division splits the data into a training set and a test set at a ratio of 8:2; the data enhancement process rotates, translates, flips, scale-transforms, zooms and randomly crops the training images, enlarging the training set and improving the generalization ability of the model.
4. The machine vision-based rapid inspection method for packaging defects of confectioneries of claim 1, wherein: in step 2.3, the defect classification accuracy is evaluated on a test set that did not take part in training, and the final model performs well on this test set; specifically, precision and recall are used as evaluation indexes: precision measures the proportion of correct predictions among all results predicted as positive, Precision = TP/(TP+FP); recall measures how many of the actual positive examples are predicted as positive by the model, Recall = TP/(TP+FN); where TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives;
5. The machine vision-based rapid inspection method for packaging defects of confectioneries of claim 4, wherein: in step 2.3, the detection speed is evaluated after the model has run stably for a period of time, using the detection time of a single picture as the evaluation index; a single machine on the production line outputs 500 candies per minute, so the detection time for a single picture must be below 0.1 s; because the model has to be loaded the first time it runs, the single-picture detection time measured from the second, steady-state run onwards is used as the index for fairly judging the model, and the model is acceptable if this index is below 0.1 s.
6. The machine vision based rapid inspection method for packaging defects of confectioneries of claim 5, wherein: in step 2.4, the evaluation index in step 2.3 is analyzed, and the detection precision and the detection speed are a pair of mutually restricted indexes, so that the detection precision needs to be improved on the premise of ensuring the detection speed, and the real-time performance and higher precision of the detection are met.
7. A machine-vision-based rapid detection device for candy packaging defects, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the method steps of any one of claims 1-6.
8. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program when executed by a processor performs the method steps of any of claims 1-6.
CN202111137495.9A 2021-09-27 2021-09-27 Quick detection method for surface defects of candy package based on machine vision Active CN113838034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137495.9A CN113838034B (en) 2021-09-27 2021-09-27 Quick detection method for surface defects of candy package based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137495.9A CN113838034B (en) 2021-09-27 2021-09-27 Quick detection method for surface defects of candy package based on machine vision

Publications (2)

Publication Number Publication Date
CN113838034A (en) 2021-12-24
CN113838034B (en) 2023-11-21

Family

ID=78971026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137495.9A Active CN113838034B (en) 2021-09-27 2021-09-27 Quick detection method for surface defects of candy package based on machine vision

Country Status (1)

Country Link
CN (1) CN113838034B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114354628B (en) * 2022-01-05 2023-10-27 威海若维信息科技有限公司 Rhizome agricultural product defect detection method based on machine vision
CN114577816A (en) * 2022-01-18 2022-06-03 广州超音速自动化科技股份有限公司 Hydrogen fuel bipolar plate detection method
CN114088730B (en) * 2022-01-24 2022-04-12 心鉴智控(深圳)科技有限公司 Method and system for detecting aluminum-plastic bubble cap defects by using image processing
CN114463327A (en) * 2022-04-08 2022-05-10 深圳市睿阳精视科技有限公司 Multi-shooting imaging detection equipment and method for watermark defect of electronic product lining package
CN115452842A (en) * 2022-10-20 2022-12-09 颖态智能技术(上海)有限公司 Fold detection method for valve bag packaging machine
CN116309597B (en) * 2023-05-23 2023-08-01 成都工业学院 Visual on-line detection method and device for medicine box mixed-loading defects

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190088089A (en) * 2017-12-26 2019-07-26 세종대학교산학협력단 Apparatus and method for detecting defects on welding surface
WO2020048119A1 (en) * 2018-09-04 2020-03-12 Boe Technology Group Co., Ltd. Method and apparatus for training a convolutional neural network to detect defects
CN110610475A (en) * 2019-07-07 2019-12-24 河北工业大学 Visual defect detection method of deep convolutional neural network
CN110363253A (en) * 2019-07-25 2019-10-22 安徽工业大学 A kind of Surfaces of Hot Rolled Strip defect classification method based on convolutional neural networks
CN111862025A (en) * 2020-07-14 2020-10-30 中国船舶重工集团公司第七一六研究所 PCB defect automatic detection method based on deep learning
CN213933621U (en) * 2020-10-26 2021-08-10 力度工业智能科技(苏州)有限公司 Candy packaging defect detection platform based on machine vision
CN112381787A (en) * 2020-11-12 2021-02-19 福州大学 Steel plate surface defect classification method based on transfer learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design of a wood surface defect detection system based on a deep convolutional neural network; 项宇杰 et al.; 系统仿真技术 (System Simulation Technology); Vol. 15, No. 4; pp. 253-257 *
Defect detection technology for pop-top cans based on deep learning; 张志晟 et al.; 包装工程 (Packaging Engineering); Vol. 41, No. 19; pp. 259-266 *

Also Published As

Publication number Publication date
CN113838034A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN113838034B (en) Quick detection method for surface defects of candy package based on machine vision
CN106875373B (en) Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm
Wan et al. Ceramic tile surface defect detection based on deep learning
Wei et al. Real-time implementation of fabric defect detection based on variational automatic encoder with structure similarity
CN101236608B (en) Human face detection method based on picture geometry
US20220254005A1 (en) Yarn quality control
CN109871780B (en) Face quality judgment method and system and face identification method and system
KR20200087297A (en) Defect inspection method and apparatus using image segmentation based on artificial neural network
CN115880298B (en) Glass surface defect detection system based on unsupervised pre-training
CN113077450B (en) Cherry grading detection method and system based on deep convolutional neural network
CN111611889B (en) Miniature insect pest recognition device in farmland based on improved convolutional neural network
CN112580458A (en) Facial expression recognition method, device, equipment and storage medium
CN113627504A (en) Multi-mode multi-scale feature fusion target detection method based on generation of countermeasure network
CN109871821A (en) The pedestrian of adaptive network recognition methods, device, equipment and storage medium again
CN108428324A (en) The detection device of smog in a kind of fire scenario based on convolutional network
CN111210412A (en) Package detection method and device, electronic equipment and storage medium
CN111950357A (en) Marine water surface garbage rapid identification method based on multi-feature YOLOV3
CN116912674A (en) Target detection method and system based on improved YOLOv5s network model under complex water environment
CN117893732A (en) Gangue detection and sorting method based on PLC and deep learning
CN117636045A (en) Wood defect detection system based on image processing
CN112270404A (en) Detection structure and method for bulge defect of fastener product based on ResNet64 network
CN116994049A (en) Full-automatic flat knitting machine and method thereof
CN116740808A (en) Animal behavior recognition method based on deep learning target detection and image classification
CN116030033A (en) Tobacco package appearance detection method and equipment, model training method and device
CN116523853A (en) Chip detection system and method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant