CN108229561B - Particle product defect detection method based on deep learning

Info

Publication number
CN108229561B
Authority
CN
China
Prior art keywords
image
neural network
layer
target
constructing
Prior art date
Legal status
Active
Application number
CN201810004284.XA
Other languages
Chinese (zh)
Other versions
CN108229561A (en)
Inventor
刘雄飞
田立勋
肖腾
李翠君
肖男
马腾
丛琳
Current Assignee
Beijing Xianjian Technology Co ltd
Original Assignee
Beijing Xianjian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xianjian Technology Co ltd
Priority to CN201810004284.XA
Publication of CN108229561A
Application granted
Publication of CN108229561B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a particle product defect detection method based on deep learning. The method has important practical significance. First, because the training set is comprehensive and balanced, the algorithm can accurately identify targets that traditional algorithms cannot. Second, the target region is separated from the background in the LAB color space, which better eliminates the influence of illumination changes and improves accuracy. Third, particle product targets are detected with C-HOG features and an SVM classifier, which is faster and more precise than the traditional NCC template-matching algorithm. Finally, an improved deep learning algorithm tailored to the characteristics of particle products classifies the targets, extracting tablet texture information more accurately and classifying targets more precisely. Compared with traditional algorithms, the method reduces both the false detection rate and the missed detection rate, greatly lowers enterprise production costs, and improves production efficiency.

Description

Particle product defect detection method based on deep learning
Technical Field
The invention relates to the field of machine vision, in particular to the application of machine vision and deep learning to product inspection, especially the inspection of medical products, and more particularly to a particle product defect detection method based on deep learning.
Background
Because the quality of medical products is closely tied to the health of the people who take them, drug production is subject to very stringent monitoring requirements. During manufacturing, defects such as breakage, missing tablets, abnormal color, powder leakage, dirt, and blister damage inevitably occur, and the defective tablets must be accurately sorted out. In current production environments, defective tablets are mostly sorted by manual inspection. Visual inspection by eye not only suffers from high missed-detection and false-detection rates but is also slow, severely constraining both the quality and the speed of drug production.
To address this problem, organizations at home and abroad have carried out research. For example, Chinese patent CN101354360B discloses a "method for detecting capsules or tablets packed in aluminum/aluminum blisters." Its principle is as follows: during drug production, a laser projects a beam parallel to the line connecting the center points of each row of blisters in the aluminum-based drug plate, striking the plate, which is packed with capsules or tablets, from obliquely above. As the conveyor belt moves and the laser sweeps across a blister, a curve that varies with the blister height appears on the plane of the drug plate; a camera mounted directly above the plate photographs it, yielding an image of this concave-convex curve. The image is then processed by computer, and image-processing software analyzes and outputs the detection result. However, this method relies on traditional image-processing algorithms, so the system's range of application is narrow, its false-detection and missed-detection rates are high, its shortcomings are evident in use, and it is difficult to popularize.
In addition, the method requires laser illumination, which is costly, demands high illumination precision, and has poor robustness.
Disclosure of Invention
Aiming at the shortcomings of the prior art, the invention provides a particle product defect detection method based on deep learning that is fast, robust, widely applicable, and has low false-detection and missed-detection rates, thereby reducing product production costs and improving production efficiency. With the method of the invention, tablet products can be inspected without using a laser light source.
Specifically, the invention provides a particle product defect detection method based on deep learning, characterized by acquiring an image of the target product with an industrial camera and performing defect identification on the acquired image with a neural network.
In another preferred implementation, the method for detecting the defect of the granular product comprises the following steps:
step S1, acquiring an image of a target product;
step S2, removing background information in the image;
step S3, classifying and labeling all the images;
s4, constructing a neural network structure, and performing neural network weight training by using the labeled image data;
and step S5, detecting the detected product by using the image of the detected product based on the trained model.
In another preferred implementation, the images of the target product used in the training phase include defect targets under different illumination, from different viewing angles, and in different forms.
In another preferred implementation, the granular product is a tablet encapsulated in an aluminum-based drug plate.
In another preferred implementation, the step S2 includes performing a color-space transformation on the target image, performing image segmentation based on the transformed image, and removing the image background to obtain the target region.
In another preferred implementation, the step S2 further includes performing HOG feature extraction on the target image.
In another preferred implementation, the step S3 further includes randomly selecting a first predetermined percentage from all the pictures as a training set, a second predetermined percentage as a cross-validation set, and a third predetermined percentage as a test set.
In another preferred implementation manner, in the step S4, the constructed neural network model is a convolutional neural network model.
In another preferred implementation, the convolutional neural network model is constructed as follows:
A. constructing an input layer of the convolutional neural network, into which images of normal and defective tablets of a preset size are fed;
B. constructing a convolutional layer according to the size of the input layer;
C. constructing a pooling layer, which applies a max or average operation to the feature map output by the convolutional layer;
D. repeating steps B and C to construct M convolutional layers and N pooling layers;
E. constructing a fully connected layer, which processes the obtained features; a feature similarity measure is obtained through training and fitting;
F. constructing second and third fully connected layers to classify the targets;
G. constructing an output layer, which processes the output of the fully connected layers with a Softmax function and outputs the final result.
Technical effects
Compared with the traditional method, the invention achieves a large improvement in technical effect and has important practical significance. First, the data used in the training stage include defect targets under different illumination, from different viewing angles, and in different forms, and the training set is comprehensive and balanced, so the algorithm is more robust, accurately identifies targets that traditional algorithms cannot, and broadens the range of scenarios in which tablets can be inspected. Second, converting the RGB image to the LAB color space separates the target region from the background more accurately and reduces detection error. Third, detecting particle product targets with C-HOG features and an SVM classifier is faster and more precise than the traditional NCC template-matching algorithm. Finally, because an independently designed deep neural network extracts and classifies the features of the particle products, tablet texture information is extracted more accurately and targets are classified more precisely; compared with traditional algorithms, the method reduces both false-detection and missed-detection rates, greatly lowers enterprise production costs, and improves production efficiency.
The disclosed detection method is fast, robust, widely applicable, and has low false-detection and missed-detection rates; it can inspect particle products, especially medicines, fully automatically, thereby reducing production costs of the products (especially medicines) and improving enterprise production efficiency.
Drawings
FIG. 1 is a schematic flow chart of the method described in an embodiment of the invention;
FIG. 2 is a diagram of the HOG feature extraction regions;
FIG. 3 is the labeling interface used to annotate the training set;
FIG. 4 is a diagram of the convolutional neural network architecture;
FIG. 5 is a flow chart of the detection algorithm employed by the method of the invention.
Detailed Description
The invention is described in detail below with reference to the drawings and embodiments, but the scope of the invention is not limited thereto. Those skilled in the art will appreciate that although the following description takes tablet inspection as an example, the method is not limited to pharmaceutical products and may be applied to similarly packaged and configured products, particularly granular products such as candy and chocolate.
Step 1, capturing the target image:
Because tablet defects are small, a high-resolution industrial camera is used for shooting in this embodiment. The camera is mounted directly above the conveyor belt. During inspection the tablets are arranged in drug plates with the transparent blister side facing upward; as the loaded plates move along the belt, images are captured by photoelectric triggering to obtain the target image. The resulting images are then processed with software and the results analyzed.
Step 2, target area segmentation:
the background of the image obtained by the industrial camera is mainly the color of the conveyor belt, namely black, and the aluminum-based drug plate is usually white or other light color system, so the applicant chooses to perform image segmentation in the LAB color space, remove the background of the image and obtain the target region. Since the captured image is an RGB image, which cannot be directly converted into an LAB image, it is necessary to convert the image into an XYZ image and then into an LAB image. In the process of converting an RGB image into an XYZ image, the channel value conversion process is as shown in equation 1:
R = γ(r/255), G = γ(g/255), B = γ(b/255)    (1)
where r, g, and b are the three channel values of the original RGB image, R, G, and B are intermediate results, and the gamma function γ is given in equation 2:
γ(x) = ((x + 0.055)/1.055)^2.4 if x > 0.04045; γ(x) = x/12.92 otherwise    (2)
the applicant edits the nonlinear tone through the function to improve the image contrast. Then, XYZ image channel values are obtained using equation 3:
[X, Y, Z]^T = M · [R, G, B]^T, with M = [0.4124 0.3576 0.1805; 0.2126 0.7152 0.0722; 0.0193 0.1192 0.9505]    (3)
finally, an LAB image is obtained by equation (4):
L = 116·f(Y/Yn) - 16, A = 500·[f(X/Xn) - f(Y/Yn)], B = 200·[f(Y/Yn) - f(Z/Zn)],
where f(t) = t^(1/3) if t > (6/29)^3, and f(t) = (1/3)·(29/6)^2·t + 4/29 otherwise    (4)
where L, A, and B are the values of the three channels in the LAB color space, X, Y, and Z are computed from the RGB values, and Xn, Yn, and Zn are constants with values 95.047, 100.0, and 108.883.
Through the above operations, the LAB map of the input image is obtained. The aluminum-based drug plate and the background differ mainly in lightness; that is, the L value of the drug plate differs greatly from that of the background while the A and B values differ little, so classifying on the L value conveniently separates the background from the aluminum-based drug plate. In other words, the method exploits a characteristic of drug inspection, and of products arranged in aluminum-based packaging plates generally, to better separate the background color from the inspected region and eliminate the background.
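This conversion-and-threshold pipeline can be sketched in Python with OpenCV; cv2.cvtColor implements the same RGB-to-XYZ-to-LAB chain as equations 1 to 4, so only the L-value classification needs writing out. The threshold below is an assumed placeholder rather than a value taken from the patent:

```python
import cv2
import numpy as np

def segment_plate(bgr_image: np.ndarray, l_threshold: int = 128) -> np.ndarray:
    """Separate the light aluminum-based drug plate from the dark belt background.

    l_threshold is an assumed placeholder; in practice it would be tuned to the
    actual lighting, or replaced by an automatic method such as Otsu's.
    """
    # OpenCV performs the RGB -> XYZ -> LAB conversion internally (eqs. 1-4).
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_channel = lab[:, :, 0]  # lightness: the plate is bright, the belt dark

    # Classify pixels on the L value alone, as described above.
    mask = (l_channel > l_threshold).astype(np.uint8) * 255

    # Zero out background pixels, keeping only the plate region.
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
```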
After the conveyor-belt background is removed, the tablet image must be segmented for defect detection. Because the aluminum-based drug plate has a uniform background texture and the gradient information of the target differs greatly from that of the background, HOG features are extracted from the target image and classified with an SVM to distinguish the drug plate from the target tablets. First, the input image is color-normalized to partially eliminate the influence of illumination changes. Then a circular window with a radius of 32 pixels is slid across the whole picture, and the gradient inside the window is computed as in equation (5):
m(x, y) = sqrt[ (I(x+1, y) - I(x-1, y))^2 + (I(x, y+1) - I(x, y-1))^2 ],
θ(x, y) = arctan[ (I(x, y+1) - I(x, y-1)) / (I(x+1, y) - I(x-1, y)) ]    (5)
where m is the gradient magnitude, θ is the gradient direction, and I(x, y) denotes the pixel value; a gradient histogram built from m and θ forms the feature. For an ordinary round tablet, a circular region is selected for the gradient computation in order to better eliminate the influence of the circular arc, i.e., a C-HOG feature extraction algorithm; the gradient computation window is shown in FIG. 2. For non-circular tablets, regions of other shapes may of course be selected.
After the features are obtained, region-level feature normalization is applied to further eliminate the influence of illumination; the normalization expression is shown in equation 6:
v' = v / sqrt( ||v||_k^2 + ε^2 )    (6)
where v is the unnormalized feature descriptor, ||v||_k is the k-norm of v (k may be 1 or 2), and ε is a small constant.
Finally, the obtained features are combined and fed into an SVM classifier, which separates the tablets from the aluminum-based drug-plate background and yields the exact tablet positions. This completes the target region segmentation.
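The feature-plus-classifier stage can be sketched as follows; scikit-image's rectangular-cell HOG and a linear SVM stand in for the patent's C-HOG and SVM, since the circular-cell C-HOG described above would require custom bin geometry, and the window size and SVM hyperparameters here are assumptions:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def window_features(window: np.ndarray) -> np.ndarray:
    """HOG descriptor for one 64x64 grayscale sliding window (values in [0, 1])."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2")

def train_tablet_detector(windows: list, labels: list) -> LinearSVC:
    """Train a linear SVM to separate tablet windows from plate background."""
    X = np.stack([window_features(w) for w in windows])
    clf = LinearSVC(C=1.0)  # C is an assumed hyperparameter
    clf.fit(X, np.asarray(labels))
    return clf
```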
Step 3, constructing a training set:
after a sufficient amount of picture data is obtained, the applicant randomly selects 60% of all pictures by software as a training set, 30% of all pictures as a cross-validation set and 10% of all pictures as a test set, and then performs annotation work on all data. As the method is used for detecting the defect of the tablet based on deep learning, the quality of data calibration is very important. In order to conveniently label the tablet data, the applicant makes a program for marking the data and sets shortcut keys to conveniently and quickly carry out operations such as labeling, canceling, switching and the like.
FIG. 3 shows the software's operating interface. Clicking the import button selects the folder containing the pictures, whose file names are listed in the lower-left corner of the interface. Because a detection problem is being solved, a rectangular labeling region was chosen: defective tablets are boxed on the image, and the software stores the four corner coordinates of each labeling box in an XML file for use in the subsequent training stage.
To enhance the robustness of the algorithm, data enhancement operations are also performed on the training set. The brightness value is varied in the LAB color space to simulate the variety of illumination conditions found in practice; different brightness values correspond to different light intensities, and adding images of different brightness to the training set strengthens robustness to changes in light intensity. The original images are also rotated in 5-degree increments to simulate viewing-angle changes, and both the originals and the rotated images are added to the training set. Finally, multi-scale scaling with factors of 0.5, 0.75, 1.25, and 1.5 expands the training set and strengthens robustness to scale changes. These data enhancement methods enlarge the training set, keep the training data categories balanced, let the neural network model fit targets in many different scenes, and improve the recognition ability of the algorithm. This completes the construction of the training set; once the training images and annotation files are placed in the corresponding folders, training can begin.
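The three augmentations can be sketched as follows; the 5-degree rotation step and the scale factors 0.5, 0.75, 1.25, and 1.5 come from the description above, while the brightness offsets and the full-circle rotation range are assumed values:

```python
import cv2
import numpy as np

def augment(bgr_image: np.ndarray) -> list:
    """Brightness shifts in LAB space, 5-degree rotations, and multi-scale
    resizing, mirroring the data enhancement described above."""
    out = [bgr_image]

    # Brightness changes in the LAB color space (offsets are assumptions).
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.int16)
    for delta in (-40, -20, 20, 40):
        shifted = lab.copy()
        shifted[:, :, 0] = np.clip(shifted[:, :, 0] + delta, 0, 255)
        out.append(cv2.cvtColor(shifted.astype(np.uint8), cv2.COLOR_LAB2BGR))

    # Rotations in 5-degree steps to simulate viewing-angle changes.
    h, w = bgr_image.shape[:2]
    for angle in range(5, 360, 5):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out.append(cv2.warpAffine(bgr_image, M, (w, h)))

    # Multi-scale scaling with the factors stated above.
    for s in (0.5, 0.75, 1.25, 1.5):
        out.append(cv2.resize(bgr_image, None, fx=s, fy=s))

    return out
```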
Step 4, constructing and training the convolutional neural network model:
due to the small target size of the tablet, the accurate detection of the tablet is difficult to realize by the common neural network structure. Therefore, the applicant redesigns a convolutional neural network based on an AlexNet network structure, the sensing field of the convolutional neural network is smaller, the convolutional neural network is relatively more accurate, the tablet detection effect is better, and the graph shown in fig. 4 is the structure diagram of the convolutional neural network.
The receptive field of the original AlexNet network structure reaches 195, and if the AlexNet network structure is directly used for detecting particle products, the problems that the receptive field is too large, the detection effect is poor and small defect targets cannot be detected exist. In order to solve the difficulty, the applicant adjusts the receptive field by redesigning the convolution kernel and the step length, and finally obtains the structure diagram of the convolution neural network shown in fig. 4, wherein the receptive field of the network is 53, so that the problems of the existing algorithm are solved, the particle product target can be detected, and the detection precision is greatly improved.
The convolutional neural network is constructed using the following steps (a sketch follows the list):
A. constructing the input layer: images of normal and defective tablets of size 64×64 are fed into the network as its input layer;
B. constructing the convolutional layers: based on the input size, the first convolutional layer uses a 7×7 kernel with stride 1, padding 3, and 24 output channels, so each tablet image yields a 64×64×24 feature map after passing through this layer;
C. constructing a pooling layer: a 3×3 kernel applies a max or average operation to the feature map output by the convolutional layer, extracting more global features and preventing overfitting;
D. repeating steps B and C to build four convolutional layers and three pooling layers in total, with a final output feature map of 8×8×64;
E. constructing a fully connected layer, whose training and fitting yield the feature similarity measure best suited to the problem;
F. constructing the second and third fully connected layers, whose final two-dimensional output classifies the targets;
G. constructing the output layer, which processes the output of the fully connected layers with a Softmax function and produces the final result.
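Under the stated dimensions, a minimal PyTorch sketch of such a network might look like this; the channel widths of the middle convolutions, the pooling strides, and the sizes of the hidden fully connected layers are assumptions, since the description fixes only the first convolution, the 3×3 pooling kernel, and the 8×8×64 and two-dimensional outputs:

```python
import torch
import torch.nn as nn

class TabletNet(nn.Module):
    """Sketch of the small-receptive-field CNN described above (FIG. 4).

    Stated in the patent: 64x64 input, first conv 7x7 / stride 1 / padding 3 /
    24 channels, four convs plus three 3x3 poolings ending at 8x8x64, three
    fully connected layers, two-way Softmax. Widths of convs 2-3 and the
    hidden fully connected sizes are assumptions.
    """
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=7, stride=1, padding=3),  # 64x64x24
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),   # 32x32 (stride/pad assumed)
            nn.Conv2d(24, 32, 3, padding=1),        # assumed width
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),   # 16x16
            nn.Conv2d(32, 48, 3, padding=1),        # assumed width
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),   # 8x8
            nn.Conv2d(48, 64, 3, padding=1),        # final 8x8x64 feature map
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * 8 * 64, 256),  # FC 1: similarity features (size assumed)
            nn.ReLU(inplace=True),
            nn.Linear(256, 64),          # FC 2 (size assumed)
            nn.ReLU(inplace=True),
            nn.Linear(64, 2),            # FC 3: two-dimensional output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax is applied at inference; training uses CrossEntropyLoss,
        # which folds in the log-softmax.
        return self.classifier(self.features(x))
```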
After the convolutional neural network is constructed, it is trained with the data obtained in step 3: the weight of each network node is computed with stochastic gradient descent and back-propagation, and the deep neural network model is trained offline. Through repeated experiments, the training learning rate was set to 0.01, the momentum to 0.9, and the weight decay to 0.0005; in an environment with an Intel i7 CPU and an Nvidia 1060 GPU, training converges in about 12 hours.
During training, to improve the generalization ability of the network and keep the model from overfitting to either the positive or the negative samples, balanced sampling of positive and negative samples is also performed: a sampler draws randomly from the training set so that the positive and negative samples in each mini-batch meet a fixed ratio. Repeated experiments showed the best test results at a positive-to-negative ratio of 1:2, which causes neither overfitting nor an excessive memory load. Tests also showed that with 6 GB of GPU memory a batch size of 32 is optimal, guaranteeing convergence speed without overflowing the GPU memory.
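A training-loop sketch under these settings follows; the SGD hyperparameters, batch size, and 1:2 sampling ratio are taken from the description, while the epoch count and the per-class weighting used to reach that ratio are assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_loader(dataset, labels: list) -> DataLoader:
    """Mini-batches with the stated 1:2 positive:negative ratio and batch 32.
    `dataset` is any map-style Dataset aligned with `labels` (1 = defective)."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Weight classes so expected draws satisfy pos:neg = 1:2.
    weights = [1.0 / n_pos if y == 1 else 2.0 / n_neg for y in labels]
    sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=32, sampler=sampler)

def train(model: nn.Module, loader: DataLoader, epochs: int = 10) -> None:
    """SGD with the hyperparameters stated above (epoch count assumed)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01,
                          momentum=0.9, weight_decay=0.0005)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()  # back-propagation computes the gradients
            opt.step()       # stochastic gradient descent updates the weights
```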
Once these settings are complete, model training can begin; the objective function decreases roughly logarithmically as the number of iterations grows, until convergence. When training finishes, the model is stored in the model folder in the model file format.
Step 5, testing the model and evaluating performance:
after the model training is completed, the applicant starts the test work. The whole algorithm testing process is shown in fig. 5, the result is obtained by means of image shooting, background removal, HOG feature extraction, SVM classification, convolutional neural network detection and the like, and the whole detection process is shown in fig. 4. The applicant measures the detection result by using Precision (Precision) and Recall (Recall), and the expression of the Recall and the Precision is shown in formula 7. The recall rate reflects the proportion of the number of the positive samples in all the positive samples which can be detected by the algorithm, and the accuracy rate reflects the probability of detecting the true positive samples in the obtained positive samples.
Recall = TP / (TP + FN), Precision = TP / (TP + FP)    (7)
where TP, FP, and FN are the numbers of true positives, false positives, and false negatives, respectively.
Only by analyzing precision and recall together can the performance of the algorithm be accurately evaluated.
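As a worked form of equation 7, a small helper; the zero-division guards are an implementation convenience, not part of the patent:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision and recall as defined in equation 7."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```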
If the traditional method and the original AlexNet network structure are used directly for detection, the particular size of the tablets limits the recall and precision to 98.3% and 91.2%, respectively.
By contrast, separating the target region from the background in the LAB color space, detecting the particle products with C-HOG features and an SVM classifier, and classifying the particle product targets with the independently designed neural network structure raise the detection recall to 99.98% and the precision to 99.96%, meeting the requirements of industrial production.
The foregoing is considered as illustrative and not restrictive, and all changes that come within the spirit and scope of the invention are intended to be embraced therein.
While the principles of the invention have been described in detail in connection with its preferred embodiments, those skilled in the art will understand that the foregoing embodiments are merely illustrative of exemplary implementations and do not limit the scope of the invention. The details of the embodiments are not to be interpreted as limiting that scope, and any obvious changes based on the technical solution of the invention, such as equivalent alterations and simple substitutions, fall within its spirit and scope.

Claims (4)

1. A particle product defect detection method based on deep learning, characterized by acquiring an image of a target product with an industrial camera and performing defect identification on the acquired image with a neural network, wherein the particle product is a tablet packaged in an aluminum-based drug plate,
the particle product defect detection method comprises the following steps:
step S1, acquiring an image of a target product;
step S2, removing background information in the image;
step S3, classifying and labeling all the images;
s4, constructing a neural network structure, and performing neural network weight training by using the labeled image data;
step S5, based on the trained model, using the image of the detected product to detect the detected product,
the convolutional neural network model is constructed in the following way:
A. constructing an input layer of the convolutional neural network, into which images of normal and defective tablets of size 64×64 are fed;
B. constructing the convolutional layers, wherein the first convolutional layer uses a 7×7 kernel with stride 1, padding 3, and 24 output channels, so that each tablet image yields a 64×64×24 feature after passing through the first convolutional layer;
C. constructing a pooling layer, which applies a max or average operation with a 3×3 kernel to the feature map output by the convolutional layer, extracting more global features and preventing overfitting;
D. repeating steps B and C to construct four convolutional layers and three pooling layers in total, with a final output feature dimension of 8×8×64;
E. constructing a fully connected layer, obtaining a feature similarity measure through training and fitting;
F. constructing second and third fully connected layers, whose final two-dimensional output classifies the targets;
G. constructing an output layer, which processes the output of the fully connected layers with a Softmax function and outputs the final result;
the step S2 includes: performing space conversion on a target image, converting RGB into an XYZ image, converting the XYZ image into an LAB image, performing image segmentation based on the converted image, and removing an image background to obtain a target area; HOG characteristic extraction is carried out on the target area, and SVM classification is carried out on the HOG characteristic to distinguish an aluminum-based medicine board from a target tablet, and the method comprises the following steps: carrying out color normalization on an input image; constructing a circular window with a preset radius, traversing the whole picture in a sliding window mode, and calculating gradient values in the window in a mode of an equation (5):
m(x, y) = sqrt[ (I(x+1, y) - I(x-1, y))^2 + (I(x, y+1) - I(x, y-1))^2 ],
θ(x, y) = arctan[ (I(x, y+1) - I(x, y-1)) / (I(x+1, y) - I(x-1, y)) ]    (5)
wherein m is the gradient magnitude and θ is the gradient direction,
the method further comprising: randomly sampling the training set with a sampler, so that the positive and negative samples in each mini-batch satisfy a ratio of 1:2.
2. The method of claim 1, wherein the images of the target product used in the training phase include defect targets under different illumination, from different viewing angles, and with different shapes.
3. The method for detecting defects of particle products based on deep learning of claim 1, wherein the step S3 further comprises randomly selecting a first predetermined percentage from all pictures as a training set, a second predetermined percentage as a cross validation set, and a third predetermined percentage as a test set.
4. The method for detecting particle product defects based on deep learning of claim 1, wherein in the step S4, the constructed neural network model is a convolutional neural network model.
CN201810004284.XA 2018-01-03 2018-01-03 Particle product defect detection method based on deep learning Active CN108229561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810004284.XA CN108229561B (en) 2018-01-03 2018-01-03 Particle product defect detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810004284.XA CN108229561B (en) 2018-01-03 2018-01-03 Particle product defect detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN108229561A CN108229561A (en) 2018-06-29
CN108229561B 2022-05-13

Family

ID=62645141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810004284.XA Active CN108229561B (en) 2018-01-03 2018-01-03 Particle product defect detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN108229561B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215015A (en) * 2018-07-24 2019-01-15 北京工业大学 A kind of online visible detection method of silk cocoon based on convolutional neural networks
EP3867919A4 (en) * 2018-10-19 2022-08-31 F. Hoffmann-La Roche AG Defect detection in lyophilized drug products with convolutional neural networks
CN111199175A (en) * 2018-11-20 2020-05-26 株式会社日立制作所 Training method and device for target detection network model
CN109886325B (en) * 2019-02-01 2022-11-29 辽宁工程技术大学 Template selection and accelerated matching method for nonlinear color space classification
CN110142785A (en) * 2019-06-25 2019-08-20 山东沐点智能科技有限公司 A kind of crusing robot visual servo method based on target detection
CN111179241A (en) * 2019-12-25 2020-05-19 成都数之联科技有限公司 Panel defect detection and classification method and system
CN111209950B (en) * 2020-01-02 2023-10-10 格朗思(天津)视觉科技有限公司 Capsule identification and detection method and system based on X-ray imaging and deep learning
CN111242177B (en) * 2020-01-02 2023-07-21 天津瑟威兰斯科技有限公司 Method, system and equipment for detecting medicine package based on convolutional neural network
CN111695504A (en) * 2020-06-11 2020-09-22 重庆大学 Fusion type automatic driving target detection method
CN111768402A (en) * 2020-07-08 2020-10-13 中国农业大学 MU-SVM-based method for evaluating freshness of iced pomfret
CN112345534B (en) * 2020-10-30 2023-08-04 上海电机学院 Defect detection method and system for particles in bubble plate based on vision
CN112834518A (en) * 2021-01-06 2021-05-25 优刻得科技股份有限公司 Particle defect detection method, system, device and medium
CN112750109B (en) * 2021-01-14 2023-06-30 金陵科技学院 Pharmaceutical equipment safety monitoring method based on morphology and deep learning
EP4123506A1 (en) * 2021-07-20 2023-01-25 Fujitsu Technology Solutions GmbH Method and device for analyzing a product, training method, system, computer program, and computer readable storage medium
CN113899747A (en) * 2021-10-29 2022-01-07 成都天运和科技有限公司 Intelligent detection system for medicine defects
CN116151691A (en) * 2023-04-13 2023-05-23 山东一方制药有限公司 Traditional Chinese medicine formula granule preparation quality supervision system based on artificial intelligence

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105572137A (en) * 2015-12-15 2016-05-11 重庆瑞阳科技股份有限公司 Appearance defect test method
US20170212829A1 (en) * 2016-01-21 2017-07-27 American Software Safety Reliability Company Deep Learning Source Code Analyzer and Repairer
CN106875381B (en) * 2017-01-17 2020-04-28 同济大学 Mobile phone shell defect detection method based on deep learning
CN106952257B (en) * 2017-03-21 2019-12-03 南京大学 A kind of curved surface label open defect detection method based on template matching and similarity calculation
CN107123111B (en) * 2017-04-14 2020-01-24 惠州旭鑫智能技术有限公司 Deep residual error network construction method for mobile phone screen defect detection

Also Published As

Publication number Publication date
CN108229561A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108229561B (en) Particle product defect detection method based on deep learning
KR101932009B1 (en) Image processing apparatus and method for multiple object detection
US9639748B2 (en) Method for detecting persons using 1D depths and 2D texture
CN104077577A (en) Trademark detection method based on convolutional neural network
TWI254891B (en) Face image detection method, face image detection system, and face image detection program
CN102341810B (en) The method and apparatus of the presence of cap body on bottle and container and the automation detection of type
JP7450848B2 (en) Transparency detection method based on machine vision
CN110032946B (en) Aluminum/aluminum blister packaging tablet identification and positioning method based on machine vision
CN110136130A (en) A kind of method and device of testing product defect
JP7412556B2 (en) Method and apparatus for identifying effect pigments in target coatings
CN112345534B (en) Defect detection method and system for particles in bubble plate based on vision
CN109977834B (en) Method and device for segmenting human hand and interactive object from depth image
Pieringer et al. Flaw detection in aluminium die castings using simultaneous combination of multiple views
Waranusast et al. Egg size classification on Android mobile devices using image processing and machine learning
CN107576600B (en) Quick detection method for matcha granularity grade
EP3973447B1 (en) Surface recognition
US11966842B2 (en) Systems and methods to train a cell object detector
Suksawatchon et al. Shape Recognition Using Unconstrained Pill Images Based on Deep Convolution Network
Zhang et al. IDDM: An incremental dual-network detection model for in-situ inspection of large-scale complex product
CN104036258A (en) Pedestrian detection method under low resolution and based on sparse representation processing
Vieira et al. Human epithelial type 2 (HEp-2) cell classification by using a multiresolution texture descriptor
Bai et al. Rapid and non-destructive quality grade assessment of Hanyuan Zanthoxylum bungeanum fruit using a smartphone application integrating computer vision systems and convolutional neural networks
Onatayo et al. Ultraviolet Radiation Transmission in Building’s Fenestration: Part II, Exploring Digital Imaging, UV Photography, Image Processing, and Computer Vision Techniques
Xu et al. Find the centroid: A vision‐based approach for optimal object grasping
JPWO2022247162A5 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant