CN114445365A - Banknote printing quality inspection method based on deep learning algorithm - Google Patents

Banknote printing quality inspection method based on deep learning algorithm Download PDF

Info

Publication number
CN114445365A
CN114445365A
Authority
CN
China
Prior art keywords
defect
image
deep learning
defects
waste
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210089577.9A
Other languages
Chinese (zh)
Inventor
张绍兵
付茂栗
吴俊�
张洋
王斌
夏小东
赵伟君
李腾蛟
祝文培
魏麟
王觅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhongchaokexin Co ltd
Original Assignee
Shenzhen Zhongchaokexin Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhongchaokexin Co ltd filed Critical Shenzhen Zhongchaokexin Co ltd
Priority to CN202210089577.9A priority Critical patent/CN114445365A/en
Publication of CN114445365A publication Critical patent/CN114445365A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 - Distances to prototypes
    • G06F18/24137 - Distances to cluster centroïds
    • G06F18/2414 - Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30144 - Printing quality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A banknote printing quality inspection method based on a deep learning algorithm comprises the following steps: segmenting the defects in an image with a defect segmentation module, so that defects are detected against the background image and a defect binary map b is obtained; inputting the defect binary map b into a blob-analysis-based defect adjudication and rejection module for connected-domain analysis, extracting the features of each connected domain, and judging from set conditions whether the defect is false waste or real waste; and passing each real-waste image to an identification and classification module that identifies and classifies its defect type. According to the invention, printing defects are detected by the segmentation network model, and machine-rejected sheets are separated into good products and waste products with blob analysis, effectively reducing the large number of false alarms caused by factors such as unstable imaging; the classification network model then tallies the real-waste images, automatically assigning each to a scrap-producing process and one of multiple defect types, which can effectively reduce staffing, improve production quality, and lower production cost.

Description

Banknote printing quality inspection method based on deep learning algorithm
Technical Field
The invention relates to the field of artificial intelligence and defect detection and identification, in particular to a banknote printing quality inspection method based on a deep learning algorithm.
Background
During banknote printing, the printing process, mechanical precision and various random factors give rise to defects such as distorted pattern color depth, ink stains, missing print, scratches and overprint deviation. Current banknote printing quality detection is affected by imaging conditions, overprint deviation and similar factors and therefore produces a large number of false alarms; in addition, the processes are numerous and the waste types varied, and printing defects are complex in shape and variable in position, so traditional algorithms classify and grade defects poorly, forcing banknote printing enterprises to rely on large numbers of experienced inspectors for manual visual review.
Disclosure of Invention
The invention provides a banknote printing quality inspection method based on a deep learning algorithm, which aims to solve at least one technical problem.
To solve the above problems, as one aspect of the present invention, there is provided a banknote printing quality inspection method based on a deep learning algorithm, including:
step 1, segmenting defects in an image through a defect segmentation module, so that the defects are detected from a background image to obtain a defect binary image b;
step 2, inputting the defect binary image b into a defect judging and discarding module based on blob analysis for connected domain analysis, extracting the characteristics of each connected domain, and judging whether the defect is mistakenly discarded or actually discarded according to set conditions;
and 3, transmitting the real waste image into a real waste image identification and classification module to identify and classify the defect types.
Preferably, step 1 comprises: if the background image has no defect, the finally output defect binary image b is an all-0 image, and the process is ended; if the background image has defects, the module divides the defective area, locates the information of the defects, and outputs a binary image b with pixels of 1 at the corresponding position of the defects and pixels of 0 in other non-defective areas.
Preferably, the defect segmentation module segments the defect in the image by:
step a1, data collection: collecting defect sample data sets of different machine tables and different product lines through a traditional detection system;
step a2, data cleaning and labeling: for collected data, firstly, noise data needs to be removed, then, pixel-level labeling is carried out on a defect position to generate a label image, and enhancement transformation processing is carried out on each sample containing the defect to achieve the purpose of expanding a sample data set;
step a3, constructing a network model and training: training an established deep learning model by using a training sample data set after the enhancement transformation processing, wherein the model comprises a convolution layer and a full connection layer and can be described by the following formula:
y=f(x;θ)
y(u,v)=p(x(u,v))
wherein f represents the established network model, x represents the input image with the defect, theta represents the model parameter, y represents the output probability map, (u, v) represents the pixel position index in the image, p represents the probability that a certain pixel is the defect, and then p (x (u, v)) represents the probability that the pixel (u, v) is the defect;
the training process can be regarded as an iterative process for solving the optimization problem, and can be represented by the following formula:
θ* = argmin_θ L(f(x; θ), gt)
wherein gt is the label image generated in step a2 and L is a pixel-wise loss between the predicted probability map and the label image;
learning an optimal parameter theta through training to establish a relation between defective pixels and defect probabilities in a training sample data set, calculating the probability that each pixel in a current image is defective through the optimal parameter theta in the process of forward reasoning, and then outputting a probability graph with the same resolution as that of an input image;
step a4, model deployment and post-processing: deploying the optimal model parameters trained in the step a3 on a server for field detection, inputting an image with defects, performing forward reasoning on the image with defects to obtain a corresponding defect probability map, and performing thresholding on the probability map by using a set threshold t to obtain a final defect binary map b, wherein the process can be described by the following formula:
b(u, v) = 1 if y(u, v) ≥ t; b(u, v) = 0 otherwise
preferably, the model comprises 36 convolution units, 6 deconvolution units, and a plurality of basic connection units; each unit is formed by combining a convolution layer, an anti-convolution layer, a batch normalization layer, a nonlinear activation layer, a full connection layer, a characteristic aggregation layer and a pooling layer in different forms according to different functions.
Preferably, the defect rejection module based on blob analysis performs connected domain analysis by:
step b1, analyzing the related information of the defect blobs: first perform connected-domain analysis on map b and, for each connected domain, extract its related information (area, perimeter, centroid) together with the related information of the corresponding defect position (energy, color distribution, defect position information and the like); the features from these two sets of information are combined into an n-dimensional feature vector, wherein a blob is a connected domain of pixels equal to 1 in the defect binary map b;
step b2, setting conditions on the data in the feature vector according to the field situation and customer requirements: this is done mainly by setting thresholds; the process can be applied to each feature in the feature vector, and finally the conditions over all features are combined for judgment: if the conditions are met, the defect is judged real waste, otherwise false waste.
Preferably, in step b2, the area feature in the feature vector is thresholded: a value greater than t1 indicates that condition 1 is satisfied, a value between t2 and t1 indicates that condition 2 is satisfied, and a value less than t2 indicates that condition 3 is satisfied, where t2 < t1.
Preferably, the real-waste image identification and classification module identifies and classifies the defect types according to the following steps:
step c1, data collection: the same method as that of step a1 is adopted;
step c2, data cleaning and labeling: the data cleaning process adopts the same method as step a2, but the labeling method differs: each defect in the image is identified by a position frame and a corresponding category is set; after each image is labeled, a corresponding text file is generated that records the defect positions and category information of that image;
step c3, constructing a classification network model and training: training an established deep learning model by using the enhanced training sample data set, wherein the model comprises a convolutional layer and a full connection layer and can be described by the following formula:
c=g(x;ω)
wherein g represents the established classification network model, x represents the input image with defects, ω represents the model parameters, and c represents an n-dimensional vector, wherein:
n=c1+c2
c1 and c2 respectively represent the total number of process-scrap categories and the total number of defect-type categories;
the training process can be regarded as an iterative process for solving the optimization problem, and can be represented by the following formula:
ω* = argmin_ω Σ_{i∈pos} Σ_{j∈neg} max(0, c[j] − c[i])
wherein c[i] and c[j] are elements of the vector c, pos is the set of true labels of the current image, and neg is the set of remaining non-true labels;
training and learning an optimal parameter omega to establish a relation between a defect image in a training sample data set and a process category and a defect type;
step c4, model deployment and post-processing: deploy the optimal model parameters trained in step c3 on a server for field detection; an image with defects is input and an n-dimensional vector is output, the index of the maximum element in the first c1 dimensions being taken as the process-scrap label and the index of the maximum element in the remaining c2 dimensions as the defect-type label.
Preferably, the categories in step c2 comprise a process-scrap label and a defect-category label; the process-scrap label includes: gravure, offset, silk screen, white paper, and the like; the defect-category label includes: dirty, dotted, absent, light, cross-color, and the like.
Preferably, the model in step c3 comprises 24 convolution units; each unit is formed, according to its function, from different combinations of convolution layers, batch normalization layers, nonlinear activation layers, full connection layers, feature aggregation layers and pooling layers.
Owing to this technical scheme, banknote images can be detected and analyzed with a deep learning algorithm: on one hand, printing defects are detected by the segmentation network model and machine-rejected sheets are separated into good products and waste products with blob analysis, effectively reducing the large number of false alarms caused by factors such as unstable imaging; on the other hand, the classification network model tallies the real-waste images and automatically assigns each to a scrap-producing process and one of multiple defect types, which can effectively reduce staffing, improve production quality and lower production cost.
Drawings
FIG. 1 schematically shows a workflow block diagram of the method of the present invention;
fig. 2 schematically shows a flow chart of the method of the invention.
Detailed Description
The following is a detailed description of embodiments of the invention; the invention can, however, be practiced in many different ways, as defined and covered by the claims.
With the rapid development of deep learning algorithms in recent years, recognition technology has been developed dramatically. The invention introduces a deep learning method into the detection and identification of banknote printing defects, can greatly improve the identification accuracy, reduce the missing rate and improve the robustness.
The invention provides a method for realizing defect detection and classification by using a deep learning algorithm, aiming at the problems of more false alarms, poor classification effect and dependence on a large amount of manual examination work of online detection equipment in the existing banknote production process.
The invention mainly comprises the following three modules: (1) a defect segmentation module based on deep learning, (2) a defect adjudication and rejection module based on blob analysis, and (3) an identification and classification module for real-waste images. First, defects in the image are segmented by the defect segmentation module, whose function is to detect defects against the background image: if there is no defect, the finally output defect binary map b is an all-0 map and the process ends; if there are defects, the module segments the defective areas, locates the defect information, and outputs a binary map b whose pixels are 1 at the defect positions and 0 in all other non-defective areas. Second, the binary map b is input into the blob-analysis-based defect adjudication and rejection module for connected-domain analysis; the features of each connected domain are extracted and the defect is judged against the set conditions: if it is judged false waste, the process ends; if it is judged real waste, the real-waste image is passed to the third module for defect-type identification and classification.
Wherein each module comprises a number of steps, which are described separately below.
A first module: and a defect segmentation module based on deep learning.
The method comprises the following steps:
1) and (6) collecting data.
Through the traditional detection system, defect sample data sets from different machines and different product lines can be collected.
2) And (4) cleaning and labeling data.
For collected data, noise data first needs to be removed, for example: abnormally imaged images, machine background images, printing paper images and the like; then the defect positions are labeled at pixel level to generate a label image. For a defective image, the generated label image is a binary map in which the defective pixel positions are 1 and the other non-defective positions are 0.
For banknote printing defects, the defect types are many while samples of each type are few, so every sample containing a defect is subjected to enhancement transformations, including: affine transformation, color transformation, distortion deformation, brightness adjustment and the like, so as to expand the sample data set.
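The enhancement transformations above can be sketched in a few lines. The following is a minimal, hypothetical illustration of two of them (brightness adjustment and a geometric flip) on a grayscale image stored as nested lists; the real pipeline would also apply affine, color and distortion transforms.

```python
def adjust_brightness(img, delta):
    """Shift every pixel intensity by delta, clamped to [0, 255]."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

def hflip(img):
    """Horizontal mirror, one simple geometric transform."""
    return [row[::-1] for row in img]

# A toy 2x2 grayscale patch standing in for a defect sample.
sample = [[10, 250], [100, 0]]
augmented = [sample, adjust_brightness(sample, 20), hflip(sample)]
# One defect sample expands into three training samples.
```

In practice each transform is applied with randomized parameters so a single defect sample yields many distinct training samples.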
3) And constructing and training a network model.
The model is a deep learning model comprising convolution layers and full connection layers; it contains 36 convolution units, 6 deconvolution units and some basic connection units, such as feature aggregation units and skip-connection units. Each unit is formed, according to its function, from different combinations of convolution layers, deconvolution layers, batch normalization layers, nonlinear activation layers, full connection layers, feature aggregation layers and pooling layers.
For example, a convolution unit consists of a convolution layer, a batch normalization layer and a nonlinear activation layer; a deconvolution unit consists of a deconvolution layer, a batch normalization layer and a nonlinear activation layer; a feature aggregation unit consists of a batch normalization layer and a feature aggregation layer.
Training the built deep learning model by using the enhanced training sample data set, wherein the model can be described by the following formula:
y=f(x;θ)
y(u,v)=p(x(u,v))
where f represents the established network model, x represents the input image with defects, θ represents the model parameters, y represents the output probability map, (u, v) represents the pixel location index in the image, and p represents the probability that a pixel is defective, then p (x (u, v)) represents the probability that the pixel (u, v) is defective.
The training process can be regarded as an iterative process for solving the optimization problem, and can be represented by the following formula:
θ* = argmin_θ L(f(x; θ), gt)
wherein gt is the label image from step 2) and L is a pixel-wise loss between the predicted probability map and the label image.
Through training, the optimal parameter theta can be learned to establish the relation between the defective pixel and the defect probability in the training sample data set: therefore, in the process of forward reasoning, the probability that each pixel in the current image is defective can be calculated through the optimal parameter theta, and then a probability map with the same resolution as that of the input image is output.
4) Model deployment and post-processing.
Deploying the optimal model parameters trained in the step 3) on a server for field detection, inputting images with defects, and obtaining a corresponding defect probability map after forward reasoning of the model. Subsequently, the probability map is thresholded by a set threshold t to obtain a final defect binary map b, and the process can be described by the following formula:
b(u, v) = 1 if y(u, v) ≥ t; b(u, v) = 0 otherwise
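The thresholding step that turns the probability map into the binary map b can be sketched as follows; a minimal pure-Python illustration in which the probability map is held as nested lists (function and variable names are illustrative, not from the patent).

```python
def threshold_probability_map(prob_map, t):
    """Binarize a defect probability map: pixel -> 1 if p >= t, else 0."""
    return [[1 if p >= t else 0 for p in row] for row in prob_map]

# Toy 3x3 probability map as the segmentation model might output it.
prob = [[0.05, 0.10, 0.02],
        [0.08, 0.92, 0.85],
        [0.03, 0.88, 0.07]]
b = threshold_probability_map(prob, t=0.5)
# Pixels with probability >= 0.5 become 1; all others become 0.
```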
a second module: and a defect judging and rejecting module based on blob analysis.
The method mainly comprises the following steps:
1) Analyzing the related information of the defect blobs (connected domains). A blob is a connected domain of pixels equal to 1 in the defect binary map b, so connected-domain analysis is first performed on map b and the related information of each connected domain is extracted, including: area, perimeter, centroid; the related information of the defect position corresponding to the blob includes: energy, color distribution, defect position information and the like. The above features are combined to form an n-dimensional feature vector.
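The connected-domain analysis described above can be sketched as follows: a minimal 4-connectivity implementation extracting two of the listed features (area and centroid). Perimeter, energy and color distribution would be computed analogously; this is an illustrative sketch, not the patent's implementation.

```python
from collections import deque

def blob_features(b):
    """4-connected component analysis of a binary map; returns per-blob
    area and centroid, in row-major discovery order."""
    h, w = len(b), len(b[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for i in range(h):
        for j in range(w):
            if b[i][j] == 1 and not seen[i][j]:
                q, pixels = deque([(i, j)]), []
                seen[i][j] = True
                while q:  # breadth-first flood fill of one blob
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and b[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                blobs.append({"area": area, "centroid": (cy, cx)})
    return blobs

b = [[0, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]
feats = blob_features(b)
# Two blobs: an L-shaped one of area 3 and an isolated pixel of area 1.
```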
2) According to the field situation and customer requirements, conditions can be set on the data in the feature vector.
This is done mainly by setting thresholds. For example, the area feature in the feature vector is thresholded: a value greater than t1 indicates that condition 1 is satisfied, a value between t2 and t1 indicates that condition 2 is satisfied, and a value less than t2 indicates that condition 3 is satisfied, where t2 < t1. The above process can be applied to each feature in the feature vector; finally, the conditions over all features are combined for judgment: if the conditions are met, the result is real waste, otherwise false waste.
A third module: the real-waste defect identification and classification module.
The method comprises the following steps:
1) and (6) collecting data.
The data collected in step 1) of the first module may be carried over.
2) And (4) cleaning and labeling data.
The data cleaning process is the same as in step 2) of the first module, but the labeling method differs slightly: each defect in the image is identified by a position frame.
Then a corresponding category is set. The category comprises two tags: one is the process-scrap tag, mainly comprising gravure, offset, silk screen, white paper and the like; the other is the defect-category tag, mainly comprising dirty, dotted, absent, light, cross-color and the like. After each image is labeled, a corresponding text file is generated that records the defect positions and category information of that image.
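The per-image text file can be sketched as a simple line-oriented format. The layout below (process tag, defect tag, box coordinates) is purely an assumption for illustration; the patent does not specify the file format.

```python
# One line per defect: process_tag defect_tag x y w h (hypothetical format).
annotation = [
    ("gravure", "dirty", 120, 40, 16, 9),
    ("offset", "absent", 300, 210, 25, 12),
]

def to_text(entries):
    """Serialize annotation entries to the line-oriented text file body."""
    return "\n".join(" ".join(str(v) for v in e) for e in entries)

def from_text(text):
    """Parse the text file body back into (proc, defect, x, y, w, h) tuples."""
    out = []
    for line in text.splitlines():
        proc, defect, *nums = line.split()
        out.append((proc, defect, *map(int, nums)))
    return out

round_trip = from_text(to_text(annotation))  # lossless round trip
```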
3) And constructing a classification network model and training.
The model is a deep learning model comprising convolution layers and full connection layers; it contains 24 convolution units, each formed, according to its function, from different combinations of convolution layers, batch normalization layers, nonlinear activation layers, full connection layers, feature aggregation layers and pooling layers.
Training the built deep learning model by using the enhanced training sample data set, wherein the model can be described by the following formula:
c=g(x;ω)
wherein g represents the established classification network model, x represents the input image with defects, ω represents the model parameters, and c represents an n-dimensional vector, wherein:
n=c1+c2
c1 and c2 respectively indicate the total number of process-scrap categories and the total number of defect-type categories.
The training process can be regarded as an iterative process for solving the optimization problem, and can be represented by the following formula:
ω* = argmin_ω Σ_{i∈pos} Σ_{j∈neg} max(0, c[j] − c[i])
where c [ i ] and c [ j ] represent elements of the c-vector, pos represents the set of true tags for the current image, and neg represents the set of other non-true tags.
Through training, the optimal parameter omega can be learned to establish the relation between the defect images in the training sample data set and the process categories and the defect types.
4) Model deployment and post-processing.
Deploy the optimal model parameters trained in step 3) on a server for field detection. An image with a defect is input and an n-dimensional vector is output; the index of the largest element in the first c1 dimensions is taken as the process-scrap label, and the index of the largest element in the remaining c2 dimensions as the defect-type label.
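The post-processing step above amounts to two argmax operations over disjoint slices of the output vector. A minimal sketch follows, with illustrative category lists matching the examples given earlier in the text; the score values are made up.

```python
def decode_output(c_vec, c1):
    """Split the n-dim output into process-scrap scores (first c1 dims)
    and defect-type scores (remaining c2 dims); return the argmax of each."""
    proc_scores, defect_scores = c_vec[:c1], c_vec[c1:]
    proc_idx = max(range(len(proc_scores)), key=proc_scores.__getitem__)
    defect_idx = max(range(len(defect_scores)), key=defect_scores.__getitem__)
    return proc_idx, defect_idx

PROCESS = ["gravure", "offset", "silk screen", "white paper"]    # c1 = 4
DEFECT = ["dirty", "dotted", "absent", "light", "cross-color"]   # c2 = 5
c = [0.1, 0.7, 0.1, 0.1, 0.05, 0.1, 0.6, 0.15, 0.1]             # n = c1 + c2 = 9
p, d = decode_output(c, c1=len(PROCESS))
# PROCESS[p] is the process-scrap label, DEFECT[d] the defect-type label.
```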
Owing to this technical scheme, banknote images can be detected and analyzed with a deep learning algorithm: on one hand, printing defects are detected by the segmentation network model and machine-rejected sheets are separated into good products and waste products with blob analysis, effectively reducing the large number of false alarms caused by factors such as unstable imaging; on the other hand, the classification network model tallies the real-waste images and automatically assigns each to a scrap-producing process and one of multiple defect types, which can effectively reduce staffing, improve production quality and lower production cost.
The present invention will be described in further detail below with reference to the accompanying drawings and preferred embodiments. Fig. 1 is a schematic workflow diagram of a banknote printing quality inspection method based on a deep learning algorithm in an embodiment of the present invention.
As shown in fig. 1, the method includes:
s1: collecting a defect picture collected and stored by an online detection system of the banknote printing equipment according to the banknote printing defects to be identified and classified and the production line process type;
s2: and performing position positioning, rotation correction and size normalization preprocessing on the image collected in the S1.
S3: labeling and classifying the images in the S2 to respectively generate a training sample set, a test sample set and a training sample set and a test sample set of a real-waste defect recognition classification model based on the deep learning defect segmentation model;
s4: designing a defect segmentation network model and a real-waste defect identification classification network model based on deep learning according to the types and characteristics of banknote printing defects;
s5: training the deep learning network model according to the training sample set and the training process of the deep learning network model;
s6: according to the calibration sample set and the evaluation method of the deep learning network model, evaluating each index of the deep learning network model and optimizing the deep learning network model;
s7: and according to different equipment conditions, deploying and applying the deep learning network model.
Further, in the step S3, labeling and classifying the images specifically include:
Firstly, the corresponding defect positions are marked on the collected defect images with labeling software to obtain mask images of the same size as the corresponding defect images, which are used by the deep-learning-based defect segmentation module. In addition, the defect images are classified and labeled according to production process and defect type: the labeled process types are white paper, offset printing, gravure printing, code printing, coating and seal checking, and the labeled defect types are light flower, missing print, ink dots, ink dirt, wiping dirt, dirt reflection, color mixing, paper disease, oil dirt, folds, plate moving and watermark smearing; these are used by the real-waste defect identification and classification module.
To ensure the recognition effect and provide a sufficient and comprehensive data source for deep learning training and verification, each picture contains one defect and at least 1000 pictures are collected for each defect type. A portion of the defect pictures and their corresponding labeling information is then randomly taken, in a certain proportion, as the training sample set, and the rest as the test sample set.
Specifically, 70% -80% of the banknote defect pictures and the corresponding labeling information can be used as the training sample set, and the remaining 20% -30% can be used as the testing sample set.
Further, in S4, two network models are designed: a deep-learning-based defect segmentation network model and a real-waste defect identification and classification network model. The defect segmentation model uses a deep learning model based on a semantic segmentation method, which operates at the pixel level when processing an image so as to mark the defect regions in the image.
Then, a blob-analysis-based defect waste-judging module classifies images into good products and waste products according to blob (connected domain) information and the set conditions. The real-waste defect identification and classification model is designed, according to the number of defect types and the defect characteristics, as a deep learning model comprising convolution layers and fully connected layers; this model contains 24 convolution units.
Further, in the model training process of S5, the training process is the same for the two network models, namely the deep-learning-based defect segmentation network model and the real-waste defect identification and classification network model:
a. combining the defective images in the training set with the standard images without defects, inputting the combined images into a built deep learning network, and carrying out forward propagation to obtain a predicted value;
b. calculating an error value loss of the predicted value and the expected value through an error function;
c. determining a gradient vector in a backward propagation mode;
d. adjusting parameters and weights of the network according to the gradient vector to gradually reduce the loss;
e. repeating the steps b to d until the set number of iterations is reached or the loss converges.
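The lettered steps a to e above can be sketched as a generic gradient-descent loop. The one-parameter model y = w * x and the analytic gradient below are illustrative stand-ins for the real network and for backpropagation:

```python
# Toy illustration of the training loop: forward pass, loss, gradient,
# parameter update, repeat. Not the patent's actual network.

def train(xs, targets, w=0.0, lr=0.01, epochs=200):
    for _ in range(epochs):                       # step e: repeat
        loss, grad = 0.0, 0.0
        for x, t in zip(xs, targets):
            pred = w * x                          # step a: forward propagation
            loss += (pred - t) ** 2               # step b: error value (loss)
            grad += 2 * (pred - t) * x            # step c: gradient
        w -= lr * grad / len(xs)                  # step d: adjust parameters
    return w, loss / len(xs)

# Fitting y = 2x; the loop converges toward w = 2 with loss near 0.
w, final_loss = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```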
Further, in S6, evaluating each index of the deep learning network model and optimizing the model specifically includes: taking the image data of the test sample set as input and passing it through the deep learning network model to obtain the accuracy and recall rate of the recognition results; then optimizing the learning rate in the network model, retraining the network model and re-evaluating each index until the accuracy and recall rate are optimal and above a set threshold, thereby ensuring that the trained network model is usable.
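A minimal sketch of computing the accuracy (precision) and recall mentioned above; the 0/1 waste flags in `predictions` and `labels` are illustrative:

```python
# Precision = TP / (TP + FP), recall = TP / (TP + FN),
# computed over binary waste/good decisions.

def precision_recall(predictions, labels):
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```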
Further, for the deployment and application of the network model in S7, in the embodiment of the present invention, a host computer with matching performance needs to be provided and connected over the network to the online system images of the banknote printing inspection equipment (sorter or post-code large-sheet inspection machine). The specific application flow is shown in fig. 2:
a. acquiring an image acquired by the banknote quality detection equipment in a network mode;
b. firstly, the deep-learning-based defect segmentation module produces a defect binary mask image; the mask image is the same size as the banknote image, with a gray level of 1 in defect regions and 0 in non-defect regions;
c. the blob-analysis-based defect waste-judging module grades the severity of each defect according to the blob (connected domain) information in the defect binary mask image, combined with rule-based conditions such as defect size, degree of color difference and whether the underlying pattern is a key region, and classifies the images into two categories: good products and waste products;
d. images judged as waste are production defectives whose defects exceed the banknote quality waste standard and cannot leave the factory; these images then pass through the real-waste defect identification and classification module to obtain the production process and defect type that caused the defect;
e. the corresponding recognition result information is stored; statistics and analysis of these data can guide production and thereby improve production quality.
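Steps b and c of the flow above can be sketched as: label the connected domains (blobs) in the binary defect mask, then judge waste against a blob condition. The 4-connectivity traversal and the area threshold below are illustrative assumptions, not the patent's exact rules:

```python
# Connected-domain (blob) analysis on a 0/1 defect mask, followed by a
# simple area-based waste decision. 4-connectivity BFS, pure Python.
from collections import deque

def blob_areas(mask):
    h, w = len(mask), len(mask[0])
    seen, areas = set(), []
    for sr in range(h):
        for sc in range(w):
            if mask[sr][sc] == 1 and (sr, sc) not in seen:
                queue, area = deque([(sr, sc)]), 0
                seen.add((sr, sc))
                while queue:                      # flood-fill one blob
                    r, c = queue.popleft()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and \
                           mask[nr][nc] == 1 and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                areas.append(area)
    return areas

def is_waste(mask, area_threshold=3):
    # Illustrative rule: any blob at least `area_threshold` pixels => waste.
    return any(a >= area_threshold for a in blob_areas(mask))

mask = [[0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]
```

A production system would combine many blob features (area, perimeter, centroid, color difference, key-region membership), not area alone.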
The innovations of the invention are: (1) combining deep learning technology with banknote printing image quality inspection; (2) the whole system comprises AI-related technology and a waste-judging expert system, combining deep learning with traditional algorithms, realizing manual regulation of the waste-judging process and improving the interception rate of waste products.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A banknote printing quality inspection method based on a deep learning algorithm is characterized by comprising the following steps:
step 1, segmenting the defects in an image by a defect segmentation module, so that the defects are separated from the background image to obtain a defect binary image b;
step 2, inputting the defect binary image b into the blob-analysis-based defect waste-judging module for connected domain analysis, extracting the features of each connected domain, and judging whether the defect is false waste or real waste according to the set conditions;
step 3, transmitting the real-waste image to a real-waste image identification and classification module to identify and classify the defect type.
2. The banknote printing quality inspection method based on the deep learning algorithm according to claim 1, wherein the step 1 comprises: if the background image has no defect, the finally output defect binary image b is an all-0 image, and the process ends; if the background image has defects, the module segments the defective areas, locates the defect information, and outputs a binary image b whose pixels are 1 at the defect positions and 0 in the other, non-defective areas.
3. The banknote printing quality inspection method based on the deep learning algorithm of claim 1, wherein the defect segmentation module segments the defects in the image by:
step a1, data collection: collecting defect sample data sets of different machine tables and different product lines through a traditional detection system;
step a2, data cleaning and labeling: for the collected data, noise data is first removed; then the defect positions are labeled at the pixel level to generate a label image, and enhancement transformation is applied to each sample containing a defect so as to expand the sample data set;
step a3, constructing a network model and training: training an established deep learning model by using a training sample data set after the enhancement transformation processing, wherein the model comprises a convolution layer and a full connection layer and can be described by the following formula:
y=f(x;θ)
y(u,v)=p(x(u,v))
wherein f represents the established network model, x the input defect image, θ the model parameters, y the output probability map, and (u, v) the pixel position index in the image; p denotes the probability that a pixel is defective, so p(x(u, v)) represents the probability that pixel (u, v) is defective;
the training process can be regarded as an iterative process for solving the optimization problem, and can be represented by the following formula:
θ* = argmin_θ L(f(x; θ), gt)
wherein gt is the label image from step a2;
learning an optimal parameter theta through training to establish a relation between defective pixels and defect probabilities in a training sample data set, calculating the probability that each pixel in a current image is defective through the optimal parameter theta in the process of forward reasoning, and then outputting a probability graph with the same resolution as that of an input image;
step a4, model deployment and post-processing: deploying the optimal model parameters trained in the step a3 on a server for field detection, inputting an image with defects, performing forward reasoning on the image with the defects to obtain a corresponding defect probability map, and then thresholding the probability map through a set threshold t to obtain a final defect binary map b, wherein the process can be described by the following formula:
b(u, v) = 1, if y(u, v) ≥ t; b(u, v) = 0, otherwise
4. The banknote printing quality inspection method based on the deep learning algorithm of claim 3, wherein the model comprises 36 convolution units, 6 deconvolution units and a plurality of basic connection units; each unit is formed by combining a convolution layer, a deconvolution layer, a batch normalization layer, a nonlinear activation layer, a fully connected layer, a feature aggregation layer and a pooling layer in different forms according to different functions.
5. The banknote printing quality inspection method based on the deep learning algorithm according to claim 3 or 4, wherein the defect rejection module based on the blob analysis performs the connected domain analysis by the following steps:
step b1, analyzing the related information of the defect blob blocks: firstly, connected domain analysis is performed on image b, and for each connected domain the related information of the blob block and of the corresponding defect position is extracted; the blob-block information comprises: area, perimeter and centroid; the defect-position information comprises: energy, color distribution, defect position information and the like; the features from the two kinds of information are combined into an n-dimensional feature vector, where a blob block refers to a connected domain whose pixels are 1 in the defect binary image b;
step b2, setting conditions on the data in the feature vector according to the field situation and customer requirements: this is mainly realized by setting thresholds; the operation can be applied to each feature in the feature vector, and finally the conditions on all the features are combined for judgment: if the conditions are met, the defect is judged as real waste, otherwise as false waste.
6. The banknote printing quality inspection method based on the deep learning algorithm of claim 5, wherein in the step b2, the area feature in the feature vector is thresholded: a value greater than t1 satisfies condition 1, a value between t1 and t2 satisfies condition 2, and a value less than t2 satisfies condition 3, wherein t2 < t1.
7. The banknote printing quality inspection method based on the deep learning algorithm according to claim 5 or 6, wherein the real waste image recognition and classification module performs the recognition and classification of the defect types according to the following steps:
step c1, data collection: the same method as that of step a1 is adopted;
step c2, data cleaning and labeling: the data cleaning process adopts the same method as step a2, but the labeling method marks each defect in the image with a position box and assigns the corresponding category; after the labeling of each image is completed, a corresponding text file is generated that records the defect position and category information of that image;
step c3, constructing a classification network model and training: training an established deep learning model by using the enhanced training sample data set, wherein the model comprises a convolutional layer and a full connection layer and can be described by the following formula:
c=g(x;ω)
wherein g represents the established classification network model, x represents the input image with defects, ω represents the model parameters, and c represents an n-dimensional vector, wherein:
n=c1+c2
c1 and c2 respectively represent the total number of process categories and the total number of defect type categories;
the training process can be regarded as an iterative process for solving the optimization problem, and can be represented by the following formula:
ω* = argmin_ω ( Σ_{i∈neg} c[i] − Σ_{j∈pos} c[j] )
wherein c [ i ] and c [ j ] represent elements of a c vector, pos represents a real label set of a current image, and neg represents other non-real label sets;
learning the optimal parameter omega through training to establish the relationship between the defect images in the training sample data set and the process categories and the defect types;
step c4, model deployment and post-processing: and c3, deploying the trained optimal model parameters on a server for field detection, inputting an image with defects, outputting an n-dimensional vector, taking the index of the maximum element in the former c1 dimension as a process scrap label, and taking the index of the maximum element in the remaining c2 dimension as a defect type label.
8. The banknote print quality inspection method based on the deep learning algorithm of claim 7, wherein the categories in the c vector include process scrap labels and defect category labels, and the process scrap labels include: gravure, offset, silk screen, white paper, etc.; the defect category labels include: dirty, dotted, absent, light, cross-color, etc.
9. The banknote printing quality inspection method based on the deep learning algorithm of claim 8, wherein the model in c3 comprises 24 convolution units, and each unit is formed by combining a convolution layer, a batch normalization layer, a nonlinear activation layer, a full connection layer, a characteristic aggregation layer and a pooling layer in different forms according to different functions.
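The post-processing in step c4 of claim 7 can be sketched as: split the n = c1 + c2 output vector, take the argmax of the first c1 elements as the process label and the argmax of the remaining c2 elements as the defect type label. The label names below are illustrative stand-ins for the categories listed in claim 8:

```python
# Decode an n-dimensional classifier output into (process, defect type)
# by taking argmax over the two partitions of the vector.

PROCESS_LABELS = ["gravure", "offset", "silk screen", "white paper"]   # c1 = 4
DEFECT_LABELS = ["dirty", "dotted", "absent", "light", "cross-color"]  # c2 = 5

def decode(c_vector):
    c1 = len(PROCESS_LABELS)
    process_scores = c_vector[:c1]    # first c1 elements: process categories
    defect_scores = c_vector[c1:]     # remaining c2 elements: defect types
    process = PROCESS_LABELS[process_scores.index(max(process_scores))]
    defect = DEFECT_LABELS[defect_scores.index(max(defect_scores))]
    return process, defect

process, defect = decode([0.1, 0.7, 0.1, 0.1, 0.05, 0.1, 0.6, 0.2, 0.05])
```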
CN202210089577.9A 2022-01-25 2022-01-25 Banknote printing quality inspection method based on deep learning algorithm Pending CN114445365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210089577.9A CN114445365A (en) 2022-01-25 2022-01-25 Banknote printing quality inspection method based on deep learning algorithm

Publications (1)

Publication Number Publication Date
CN114445365A true CN114445365A (en) 2022-05-06


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114580579A (en) * 2022-05-07 2022-06-03 北京中科慧眼科技有限公司 Image data rechecking method and system based on neural network classifier
CN114782431A (en) * 2022-06-20 2022-07-22 苏州康代智能科技股份有限公司 Printed circuit board defect detection model training method and defect detection method
CN116482104A (en) * 2023-02-10 2023-07-25 中恒永创(北京)科技有限公司 Thermal transfer film detection method
CN116482104B (en) * 2023-02-10 2023-12-05 中恒永创(北京)科技有限公司 Thermal transfer film detection method
CN117252851A (en) * 2023-10-16 2023-12-19 北京石栎科技有限公司 Standard quality detection management platform based on image detection and identification
CN117252851B (en) * 2023-10-16 2024-06-07 北京石栎科技有限公司 Standard quality detection management platform based on image detection and identification


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination