CN111444866A - Paper making cause inspection and identification method based on deep learning - Google Patents

Paper making cause inspection and identification method based on deep learning

Info

Publication number
CN111444866A
Authority
CN
China
Prior art keywords
image
texture
neural network
images
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010245087.4A
Other languages
Chinese (zh)
Other versions
CN111444866B (en)
Inventor
朱子奇 (Zhu Ziqi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN202010245087.4A priority Critical patent/CN111444866B/en
Publication of CN111444866A publication Critical patent/CN111444866A/en
Application granted granted Critical
Publication of CN111444866B publication Critical patent/CN111444866B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/80 Recognising image objects characterised by unique random patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a paper making cause inspection and identification method based on deep learning, characterized by comprising the following steps. Step 1: predefine paper category labels. Step 2: collect texture images. Step 3: enhance the image data. Step 4: extract image texture features. Step 5: collect texture images of the paper of each of the S classes of books under X different specific acquisition conditions, and complete data enhancement and texture feature extraction. Step 6: collect texture images of the paper to be identified under the X different specific acquisition conditions, complete data enhancement and texture feature extraction, classify the image texture features to obtain predicted labels and label probability values for the paper under the X conditions, and recalculate the label probability values according to weights to obtain the final label of the paper to be identified. The method identifies paper quickly and accurately, with little time consumption and at low cost.

Description

Paper making cause inspection and identification method based on deep learning
Technical Field
The invention relates to the technical field of computer vision, in particular to a paper making cause inspection and identification method based on deep learning.
Background
Paper is a special material: a thin sheet with a porous network structure, formed by interweaving and bonding fibers (mainly plant fibers) with other solid particulate substances (such as sizing agents, fillers and auxiliary agents).
Since paper is an important article in politics, economy, culture, art and daily life and is widely used in education, commerce, industry and other sectors, cases involving paper are common, concerning items such as contracts, wills, stocks, checks, calligraphy and paintings, leaflets, anonymous letters, and even wrappings of remains in homicide cases. Examination of the paper type and brand can therefore play a key role in solving cases and is of great significance in forensic science. Because the analysis and inspection of paper evidence plays an important role in case detection, paper identification has long been an important research direction in the field of forensic science.
The preparation of paper mainly comprises the basic processes of stock preparation, pulping, filling, sheet forming, dewatering, drying, calendering and reeling. Existing paper inspection techniques therefore mainly start from the raw materials of the paper, applying different methods to the components of different raw materials to obtain accurate identification results. However, these prior identification methods have drawbacks: they have low robustness to external contamination, analyze slowly at high cost, cannot be applied broadly to the needs of various cases, are affected by human factors in analyzing the paper making cause, are not absolutely stable and reliable, and require long inspection and analysis times.
Disclosure of Invention
In view of the above problems, the present invention provides a paper making cause inspection and identification method based on deep learning, so as to help public security departments examine paper evidence during case investigation.
The invention provides a paper making cause inspection and identification method based on deep learning, characterized by comprising the following steps. Step 1: different labels are assigned in advance to the paper of books produced with different printing modes; the paper of the same book belongs to the same class and shares one label, and the number of book classes is denoted S. Step 2: each class of book paper is collected under a specific acquisition condition C_x to obtain k texture images, yielding S × k texture images for the S classes of books. Step 3: image data enhancement in multiple modes is performed on each texture image collected in step 2; each texture image yields Sum enhanced images, so the S × k texture images yield S × k × Sum enhanced images. Step 4: image texture features are extracted from the S × k × Sum enhanced images obtained in step 3 by convolutional neural network model training, giving the trained convolutional neural network Model_Nx under the specific acquisition condition C_x. Step 5: all S classes of books are collected, in the manner of step 2, under X different specific acquisition conditions C = {C_x | x = 1, 2, 3, ..., X}, and the texture images are processed in the manner of steps 3 to 4 to obtain the convolutional neural network models Model_N = {Model_Nx | x = 1, 2, 3, ..., X} corresponding to the X different specific acquisition conditions. Step 6: for the paper to be identified, the collection of texture images under all X specific acquisition conditions C = {C_x | x = 1, 2, 3, ..., X} is completed in the manner of step 5, with A texture images collected under each specific acquisition condition C_x; image texture features are extracted as in step 4 and classified with the corresponding trained convolutional neural network models Model_N = {Model_Nx | x = 1, 2, 3, ..., X}, obtaining predicted labels and label probability values for the A images of the paper to be identified under all X specific acquisition conditions; finally, the label probability values are recalculated according to weights to obtain the final label of the paper to be identified, which serves as the final identification result.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that: a specific acquisition condition C_x refers to an acquisition condition that simultaneously satisfies a specific optical magnification and a specific light source environment.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that: the number of book classes S satisfies S ≥ 78, and in step 2, k texture images are collected at k positions in the blank areas of each book class, with k ≥ 30.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that: the data enhancement processing performed on each texture image in step 3 comprises image scaling, image rotation, image flipping, image contrast enhancement, homomorphic filtering and image denoising.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that, for each texture image, the image is first read and represented in vector form, and data enhancement is then performed as follows. Image scaling: the vector is scaled by factors of 0.75, 0.5, 1 and 2 in a cropping manner, outputting 4 vectors; the scaled vectors are restored to texture images and specifically cropped to obtain 4 enhanced images. Image rotation: the vector is rotated in equal steps of 15 degrees over the range 0-360 degrees, giving 24 vectors with different rotation angles; the rotated vectors are restored to texture images and specifically cropped to obtain 24 enhanced images. Image flipping: the vector is flipped up-down, left-right, about the main diagonal and about the anti-diagonal, giving 4 vectors; the flipped vectors are restored to texture images and specifically cropped to obtain 4 enhanced images. Image contrast enhancement: contrast is enhanced by histogram equalization, processing the vector so that its mapping range is [0, 255]; the contrast-enhanced vector is restored to a texture image and specifically cropped to obtain 1 enhanced image. Homomorphic filtering: the vector is processed by a homomorphic filtering method, with a filter function H(u, v) specified such that rH = 5 and rL = 0.5; the filtered vector is restored to a texture image and specifically cropped to obtain 1 enhanced image. Image denoising: the vector is processed by any one of mean filtering, Gaussian filtering, bilateral filtering, guided filtering, the NLM operator, the BM3D operator, frequency-domain filtering, wavelet-domain filtering, P-M equation denoising and TV denoising; the denoised vector is restored to a texture image and specifically cropped to obtain 1 enhanced image.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that the data enhancement processing of each texture image in step 3 also comprises performing the following steps (a) to (e) in sequence under each of the cropping conditions m × n = 300 × 300, m × n = 600 × 600, m × n = 300 × N and m × n = M × 600. (a) The texture image is read and its length × width is expressed as M × N; taking the upper-left corner of the image as the starting point s_p with p = 1, the starting point s_1 is moved transversely with a step of 100, sequentially cropping images of length m and width n. (b) The position 100 pixels below s_1 is taken as the new starting point s_p with p = 2, and s_2 is moved transversely with a step of 100, sequentially cropping images of length m and width n. (c) The position 100 pixels below s_2 is taken as the new starting point s_p with p = 3, and s_3 is moved transversely with a step of 100, sequentially cropping images of length m and width n. (d) The position 100 pixels below s_3 is taken as the new starting point s_p with p = 4, and s_4 is moved transversely with a step of 100, sequentially cropping images of length m and width n. (e) Steps (a) to (d) are repeated, each next starting point s_p lying 100 pixels below the previous starting point s_(p-1), until the distance from the starting point to the image edge no longer satisfies the m × n cropping condition. Cropping under the conditions m × n = 300 × 300, 600 × 600, 300 × N and M × 600 through steps (a) to (e) yields S_n1, S_n2, S_n3 and S_n4 new texture images respectively, and specifically cropping these new texture images gives S_n1 + S_n2 + S_n3 + S_n4 enhanced images. The number Sum of images obtained by the image data enhancement processing of each texture image in step 3 therefore satisfies: Sum = 4 + 24 + 4 + 1 + 1 + 1 + S_n1 + S_n2 + S_n3 + S_n4.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that the specific crop takes the image center as the center of the cropped image, with a crop size of 300 × 300.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that step 4 specifically comprises the following. Step 4-1: corresponding labels are added to the S × k × Sum enhanced images obtained in step 3 to form a data set D, which is divided into a training set T = {T_i | i = 1, 2, 3, ..., S} and a validation set V = {V_i | i = 1, 2, 3, ..., S}; the training set T is used to train texture features through the convolutional neural network model to extract image texture features and obtain the weights of the model, while the validation set V is used to test the model; when the accuracy reaches the threshold Qacc or above, the trained convolutional neural network Model_local_x under the specific acquisition condition C_x is obtained. Step 4-2: the validation set V is used to test Model_local_x, and the S_L book classes with the best test results are selected, S_L < S; the S_L × k × Sum texture images collected for these S_L book classes are given their corresponding labels to form a data set D', which is divided into a training set T' = {T_i' | i = 1, 2, 3, ..., S_L} and a validation set V' = {V_i' | i = 1, 2, 3, ..., S_L}; Model_local_x is trained and tested again, and when the accuracy reaches the threshold Qacc or above, the trained convolutional neural network Model_Nx' under the specific acquisition condition C_x is obtained. Step 4-3: the convolutional neural network model is modified by adding a fully connected layer after the last layer, changing the activation function to sigmoid for multi-label classification, and fixing all preceding layers; the modified model is retrained and tested on the data set D, with the weights of Model_Nx' as its initial weights; the training set T is used to train texture features through the modified model to extract image texture features and obtain its weights, while the validation set V is used to test it; when the accuracy reaches the threshold Qacc or above, the trained convolutional neural network Model_Nx under the specific acquisition condition C_x is obtained.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that the threshold Qacc is 98%.
The paper making cause inspection and identification method based on deep learning provided by the invention is further characterized in that: in step 6, the trained convolutional neural network Model_Nx is used to classify the image texture features of the paper to be identified, obtaining A predicted labels and label probability values, and each predicted label and its probability value carries a weight of 1/A.
The invention has the following functions and effects:
compared with the prior art, the paper making cause inspection and identification method based on deep learning of the invention offers high identification speed, low cost and high robustness to external contamination, reduces the influence of human factors on physical evidence analysis, and is stable and reliable. Because the method identifies paper quickly and accurately, with little time consumption and at low cost, classifying paper in this way can help public security departments in case investigation; approaching case investigation through inspection and identification of the paper making cause thus has wide popularization value.
Drawings
FIG. 1 is a flow chart of the paper making cause inspection and identification method based on deep learning according to the present invention;
FIG. 2 is a schematic diagram of the collection of texture images under several optical magnifications and several light source environments in the paper making cause inspection and identification method based on deep learning according to the present invention;
FIG. 3 is a schematic diagram of several modes of image data enhancement in the paper making cause inspection and identification method based on deep learning according to the present invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the present invention easy to understand, the following embodiment specifically describes the paper making cause inspection and identification method based on deep learning of the present invention with reference to the accompanying drawings.
As shown in fig. 1, the paper making cause inspection and identification method based on deep learning in this embodiment includes the following steps:
step 1, predefining paper category labels
Different labels are assigned in advance to book paper produced with different printing modes. Because the paper of one book has the same making cause (book covers are excluded by default, since the cover and the internal paper of a book usually differ in making cause), the paper of the same book is assigned to the same class, one label is used per book, and the number of book classes is denoted S. In order to have enough samples in the database, the number of book classes S preferably satisfies S ≥ 78.
Step 2, collecting texture images
Each class of book paper is collected under a specific acquisition condition C_x to obtain k texture images; under that specific acquisition condition C_x, the S classes of books together yield S × k texture images.
The k texture images of each book class are obtained as follows: blank areas of each book class are selected, and images are collected at k positions in different blank areas, with k ≥ 30. The k texture images collected for each book class are divided 1:1 into a training subset T_li and a validation subset V_li, so that the training set of texture images collected for the S classes of books is denoted T_l = {T_li | i = 1, 2, 3, ..., S} and the validation set is denoted V_l = {V_li | i = 1, 2, 3, ..., S}.
A specific acquisition condition C_x refers to an acquisition condition that simultaneously satisfies a specific optical magnification and a specific light source environment. The optical magnification and the light source environment can be customized; for example, three specific optical magnifications (16, 30 and 60) and three specific light source environments (backlight, side light and ring light) are illustrated in fig. 2, so that 9 specific acquisition conditions can be formed in this embodiment.
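For illustration only (not part of the patent text), the 9 specific acquisition conditions of this embodiment can be enumerated as magnification/light-source pairs; the following minimal Python sketch, with hypothetical names, builds such a table:

    # Minimal sketch of the 9 specific acquisition conditions C_1..C_9 of this
    # embodiment: every combination of 3 optical magnifications and 3 light
    # source environments. All names here are illustrative assumptions.
    from itertools import product

    MAGNIFICATIONS = [16, 30, 60]
    LIGHT_SOURCES = ["backlight", "side light", "ring light"]

    # Indexed 1..9 in the order used in step 5 of the embodiment
    # (C_1 = 16x/backlight, C_2 = 16x/side light, ..., C_9 = 60x/ring light).
    CONDITIONS = {
        x + 1: {"magnification": m, "light_source": ls}
        for x, (m, ls) in enumerate(product(MAGNIFICATIONS, LIGHT_SOURCES))
    }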
The texture images obtained for each book class under a specific acquisition condition C_x are expressed as an array I_x = {I_xt | t = 1, 2, 3, ..., k}, where x corresponds to the specific acquisition condition C_x and t corresponds to the number of the blank position.
The specific collection method comprises the following steps:
(1) texture images of a given class of book are acquired under a given light source environment at a given optical magnification, using an industrial camera fitted with appropriate lens equipment;
(2) blurred images caused by focusing errors are removed from the acquired texture images;
(3) negative-sample images whose texture patterns are damaged by physical damage or ink marks are removed from the acquired texture images;
(4) additional texture images of the paper are acquired under the same optical magnification and light source environment, so that the number k of texture images collected under that acquisition condition satisfies k ≥ 30.
Step 3, image data enhancement
Image data enhancement in multiple modes is performed on each texture image collected in step 2; each texture image yields Sum enhanced images, where Sum = 4 + 24 + 4 + 1 + 1 + 1 + S_n1 + S_n2 + S_n3 + S_n4, and enhancing the S × k texture images yields S × k × Sum enhanced images.
Specifically, the image data enhancement processing is performed on each texture image in the following ways:
step 3-1, zooming the image:
the texture image is read and represented in a vector form, the vectors are respectively scaled by 0.75 times, 0.5 times, 1 time and 2 times in a cutting mode, 4 vectors are output, the scaled vectors are restored into 4 texture images, and specific cutting (the center of the image is taken as the center of the cut image, the cutting size is 300 × 300) is carried out to obtain 4 enhanced images.
Step 3-2, image rotation:
The texture image is read and represented in vector form; the vector is rotated in equal steps of 15 degrees over the range 0-360 degrees, giving 24 vectors with different rotation angles; the rotated vectors are restored to 24 texture images and specifically cropped (with the image center as the center of the cropped image and a crop size of 300 × 300) to obtain 24 enhanced images.
Step 3-3, image turning:
The texture image is read and represented in vector form; the vector is flipped up-down, left-right, about the main diagonal and about the anti-diagonal, giving 4 vectors; the flipped vectors are restored to 4 texture images and specifically cropped (with the image center as the center of the cropped image and a crop size of 300 × 300) to obtain 4 enhanced images.
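As a concrete illustration of steps 3-1 to 3-3, the following minimal Python sketch (assuming Pillow and NumPy; the function names are hypothetical, and every intermediate image is assumed to be at least 300 pixels on each side) produces the 4 scaled, 24 rotated and 4 flipped enhanced images, each finished with the specific 300 × 300 center crop:

    # Sketch of the geometric enhancements of steps 3-1 to 3-3.
    import numpy as np
    from PIL import Image

    CROP = 300  # the "specific crop" size used throughout the method

    def center_crop(a, size=CROP):
        h, w = a.shape[:2]
        top, left = (h - size) // 2, (w - size) // 2
        return a[top:top + size, left:left + size]

    def enhance_geometric(img: Image.Image):
        out = []
        # Step 3-1: scale by 0.75x, 0.5x, 1x and 2x -> 4 images.
        for f in (0.75, 0.5, 1.0, 2.0):
            scaled = img.resize((int(img.width * f), int(img.height * f)))
            out.append(center_crop(np.asarray(scaled)))
        # Step 3-2: rotate in 15-degree steps over 0-360 -> 24 images.
        for angle in range(0, 360, 15):
            out.append(center_crop(np.asarray(img.rotate(angle, expand=True))))
        # Step 3-3: up-down, left-right, main- and anti-diagonal flips -> 4 images.
        a = np.asarray(img)
        diag = a.swapaxes(0, 1)  # flip about the main diagonal
        for flipped in (a[::-1], a[:, ::-1], diag, diag[::-1, ::-1]):
            out.append(center_crop(flipped))
        return out  # 32 enhanced images per input texture image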
Step 3-4, enhancing the image contrast:
The texture image is read and represented in vector form; contrast enhancement is realized by histogram equalization, processing the vector so that its mapping range is [0, 255]; the contrast-enhanced vector is restored to 1 texture image and specifically cropped (with the image center as the center of the cropped image and a crop size of 300 × 300) to obtain 1 enhanced image.
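A minimal NumPy sketch of this histogram-equalization step (step 3-4), assuming an 8-bit grayscale texture image, is:

    # Map the cumulative histogram of an 8-bit grayscale image onto [0, 255].
    import numpy as np

    def equalize(gray):
        hist = np.bincount(gray.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
        lut = np.round(cdf * 255).astype(np.uint8)         # map to [0, 255]
        return lut[gray]                                   # apply lookup table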
Step 3-5, homomorphic filtering treatment:
The texture image is read and represented in vector form; the vector is processed by a homomorphic filtering method, with a filter function H(u, v) specified such that rH = 5 and rL = 0.5; the homomorphically filtered vector is restored to 1 texture image and specifically cropped (with the image center as the center of the cropped image and a crop size of 300 × 300) to obtain 1 enhanced image.
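The patent fixes rH = 5 and rL = 0.5 but does not give the exact form of H(u, v); the sketch below assumes a common Gaussian high-frequency-emphasis filter (the cutoff D0 and constant c are hypothetical choices, not the patent's):

    # Homomorphic filtering sketch for step 3-5 (NumPy only).
    import numpy as np

    def homomorphic_filter(gray, r_h=5.0, r_l=0.5, d0=30.0, c=1.0):
        gray = gray.astype(np.float64) + 1.0           # avoid log(0)
        F = np.fft.fftshift(np.fft.fft2(np.log(gray)))
        rows, cols = gray.shape
        u = np.arange(rows) - rows // 2
        v = np.arange(cols) - cols // 2
        D2 = u[:, None] ** 2 + v[None, :] ** 2         # squared distance from center
        H = (r_h - r_l) * (1.0 - np.exp(-c * D2 / d0 ** 2)) + r_l
        filtered = np.fft.ifft2(np.fft.ifftshift(H * F)).real
        out = np.exp(filtered) - 1.0
        # rescale to [0, 255] for storage as an 8-bit texture image
        out = (out - out.min()) / (out.max() - out.min() + 1e-12) * 255.0
        return out.astype(np.uint8)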
Step 3-6, denoising the image:
The texture image is read and represented in vector form, and is denoised by any one of mean filtering, Gaussian filtering, bilateral filtering, guided filtering, the NLM operator, the BM3D operator, frequency-domain filtering, wavelet-domain filtering, P-M equation denoising and TV denoising; the resulting vector is restored to 1 texture image. In this embodiment, Gaussian filtering is adopted for image denoising, as shown in FIG. 3. The image is then specifically cropped (with the image center as the center of the cropped image and a crop size of 300 × 300) to obtain 1 enhanced image.
Step 3-7: the texture image is read and its length × width is expressed as M × N. Taking the upper-left corner of the image as the starting point s_p with p = 1, the starting point s_1 is moved transversely with a step of 100, sequentially cropping images of length × width m × n = 300 × 300; the position 100 pixels below s_1 is then taken as the new starting point s_2 and moved transversely with a step of 100, sequentially cropping m × n images; the same is done from s_3 and s_4, and the procedure is repeated with each next starting point s_p lying 100 pixels below the previous starting point s_(p-1), until the distance from the starting point to the image edge no longer satisfies the m × n cropping condition. Specific cropping (with the image center as the center of the cropped image and a crop size of 300 × 300) then yields S_n1 new texture images, achieving data enhancement.
Step 3-8: the same procedure is applied with m × n = 600 × 600, yielding S_n2 new texture images after specific cropping.
Step 3-9: the same procedure is applied with m × n = 300 × N, yielding S_n3 new texture images after specific cropping.
Step 3-10: the same procedure is applied with m × n = M × 600, yielding S_n4 new texture images after specific cropping.
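Steps 3-7 to 3-10 differ only in the window size m × n; a minimal NumPy sketch of the shared sliding-window procedure (hypothetical function names, step of 100 pixels both horizontally and vertically as described above) is:

    # Sliding-window cropping for steps 3-7 to 3-10.
    import numpy as np

    def sliding_crops(image, m, n, step=100):
        M, N = image.shape[:2]          # image is M x N (length x width)
        crops = []
        for top in range(0, M - m + 1, step):
            for left in range(0, N - n + 1, step):
                crops.append(image[top:top + m, left:left + n])
        return crops

    def steps_3_7_to_3_10(image):
        M, N = image.shape[:2]
        windows = [(300, 300), (600, 600), (300, N), (M, 600)]
        new_textures = []               # S_n1 + S_n2 + S_n3 + S_n4 images
        for m, n in windows:
            new_textures.extend(sliding_crops(image, m, n))
        # each new texture image then receives the specific 300 x 300
        # center crop (see center_crop in the earlier sketch)
        return new_textures

Note that for m × n = 300 × N the window spans the full image width, so it slides vertically only, and symmetrically the M × 600 window slides horizontally only.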
Step 4, extracting image texture features
Image texture features are extracted by training image texture features through a convolutional neural network model. The convolutional neural network model used is any one of the Inception series, Inception_ResNet, Xception, ResNet, ResNet_attention, NASNet, EfficientNet and the like.
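As an illustrative sketch of this step (assuming TensorFlow/Keras; the choice of Xception, the hyperparameters and the dataset objects are assumptions, not the patent's prescription), a backbone from one of the listed families can be trained with an S-way softmax head:

    # Training sketch for step 4: Xception backbone + S-way classifier head.
    import tensorflow as tf

    S = 78                        # number of book classes (S >= 78 here)
    INPUT_SHAPE = (300, 300, 3)   # the specific-crop size of the enhanced images

    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=INPUT_SHAPE, pooling="avg")
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(S, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Train on the enhanced-image training set T and test on the validation
    # set V until the validation accuracy reaches the threshold Qacc = 98%:
    # model.fit(train_ds, validation_data=val_ds, epochs=...)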
Step 4-1: corresponding labels are added to the S × k × Sum enhanced images obtained in step 3 to form a data set D, which is divided into a training set T = {T_i | i = 1, 2, 3, ..., S} and a validation set V = {V_i | i = 1, 2, 3, ..., S}. The training set T in the data set D is the set consisting of the original training-set source T_l and the elements derived from T_l by data enhancement, and the validation set V is the set consisting of the original validation-set source V_l and the elements derived from V_l by data enhancement. First, the training set T is used to train texture features through the convolutional neural network model to extract image texture features and obtain the weights of the model, while the validation set V is used to test the model. When the accuracy reaches the threshold Qacc, taken as Qacc = 98% for the experimental data set D, the trained convolutional neural network Model_local_x under the specific acquisition condition C_x is obtained.
Step 4-2: the validation set V is used to test Model_local_x, and the S_L book classes with the best test results are selected, S_L < S. The S_L × k × Sum texture images collected for these S_L book classes are given their corresponding labels to form a data set D', which is divided into a training set T' = {T_i' | i = 1, 2, 3, ..., S_L} and a validation set V' = {V_i' | i = 1, 2, 3, ..., S_L}. Model_local_x is trained and tested again, and when the accuracy reaches the threshold Qacc of 98%, the trained convolutional neural network Model_Nx' under the specific acquisition condition C_x is obtained.
Step 4-3: the convolutional neural network model is modified by adding a fully connected layer after the last layer, changing the activation function to sigmoid for multi-label classification, and fixing all preceding layers; the modified model is retrained and tested on the data set D, with the weights of Model_Nx' as its initial weights. First, the training set T is used to train texture features through the modified model to extract image texture features and obtain its weights, while the validation set V is used to test it. When the accuracy reaches the threshold Qacc, taken as Qacc = 98% for the experimental data set D, the trained convolutional neural network Model_Nx under the specific acquisition condition C_x is obtained.
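Continuing the Keras sketch above, the step 4-3 modification can be illustrated as follows (the checkpoint path is hypothetical; following the patent's literal description, a new fully connected layer with sigmoid activation is appended and all preceding layers are frozen):

    # Sketch of the step 4-3 modification of the trained network.
    model.load_weights("model_nx_prime.h5")   # hypothetical Model_Nx' checkpoint
    for layer in model.layers:
        layer.trainable = False               # fix all preceding layers
    modified = tf.keras.Sequential([
        model,
        tf.keras.layers.Dense(S, activation="sigmoid"),  # new final FC layer
    ])
    modified.compile(optimizer="adam",
                     loss="binary_crossentropy",  # usual multi-label loss
                     metrics=["accuracy"])
    # retrain and test on the data set D until the accuracy reaches Qacc,
    # giving the final Model_Nx for this acquisition condition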
Step 5: texture images of the paper of each of the S classes of books are acquired under X different specific acquisition conditions, and data enhancement and texture feature extraction are completed.
All S classes of books are collected, in the manner of step 2, under X different specific acquisition conditions C = {C_x | x = 1, 2, 3, ..., X}; the texture images are processed in the manner of steps 3 to 4 to obtain the convolutional neural network models Model_N = {Model_Nx | x = 1, 2, 3, ..., X} corresponding to the X different specific acquisition conditions.
In this embodiment, the 9 specific acquisition conditions shown in fig. 2 are adopted.
The first specific acquisition condition C_1 is an optical magnification of 16 with a backlight light source environment; the texture images collected under it are expressed as the array I_1 = {I_1t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N1.

The second specific acquisition condition C_2 is an optical magnification of 16 with a side light source environment; the texture images collected under it are expressed as the array I_2 = {I_2t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N2.

The third specific acquisition condition C_3 is an optical magnification of 16 with a ring light source environment; the texture images collected under it are expressed as the array I_3 = {I_3t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N3.

The fourth specific acquisition condition C_4 is an optical magnification of 30 with a backlight light source environment; the texture images collected under it are expressed as the array I_4 = {I_4t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N4.

The fifth specific acquisition condition C_5 is an optical magnification of 30 with a side light source environment; the texture images collected under it are expressed as the array I_5 = {I_5t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N5.

The sixth specific acquisition condition C_6 is an optical magnification of 30 with a ring light source environment; the texture images collected under it are expressed as the array I_6 = {I_6t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N6.

The seventh specific acquisition condition C_7 is an optical magnification of 60 with a backlight light source environment; the texture images collected under it are expressed as the array I_7 = {I_7t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N7.

The eighth specific acquisition condition C_8 is an optical magnification of 60 with a side light source environment; the texture images collected under it are expressed as the array I_8 = {I_8t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N8.

The ninth specific acquisition condition C_9 is an optical magnification of 60 with a ring light source environment; the texture images collected under it are expressed as the array I_9 = {I_9t | t = 1, 2, 3, ..., k}, with k = 30. The corresponding trained convolutional neural network model is denoted Model_N9.
Step 6, determining the class label of the paper to be identified
For the paper to be identified, the collection of texture images under all X specific acquisition conditions C = {C_x | x = 1, 2, 3, ..., X} is completed in the manner of step 5, with A texture images (A > 1) collected under each specific acquisition condition C_x. Image texture features are extracted as in step 4 and then classified with the corresponding trained convolutional neural network models Model_N = {Model_Nx | x = 1, 2, 3, ..., X}, obtaining predicted labels and label probability values for the paper to be identified under all X specific acquisition conditions C = {C_x | x = 1, 2, 3, ..., X}.
Classifying the image texture features of the paper to be identified with the trained convolutional neural network Model_Nx yields A predicted labels and label probability values, each predicted label and its probability value carrying a weight of 1/A. Finally, the label probability values are recalculated according to the weights to obtain the final class label of the paper to be identified, which is the final identification result.
Recalculating the label probability values according to the weights specifically means the following. 1/A is the classification weight of each image under a given specific acquisition condition: for a sheet of paper to be identified, A images are collected under each specific acquisition condition, and the predicted labels and probability values of those images are combined with a weight of 1/A each, giving the predicted label and probability value under that condition. In addition, since this embodiment uses 9 specific acquisition conditions, each condition is given its own weight: the first specific acquisition condition C_1 is given a weight of 0.2, and the remaining eight specific acquisition conditions C_2 to C_9 are each given a weight of 0.1. The labels and probability values under the 9 specific acquisition conditions are combined with these weights to calculate the final label and probability value of the paper.
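For illustration, the weighted recalculation of this embodiment can be sketched in NumPy as follows (probs is assumed to hold, for each condition C_x, the A × S matrix of label probability values produced by Model_Nx; function and variable names are hypothetical):

    # Weighted fusion of per-image predictions into the final paper label.
    import numpy as np

    def fuse_predictions(probs, condition_weights=(0.2,) + (0.1,) * 8):
        # probs: list of 9 arrays, each of shape (A, S)
        per_condition = [p.mean(axis=0) for p in probs]   # the 1/A weighting
        final = sum(w * p for w, p in zip(condition_weights, per_condition))
        label = int(np.argmax(final))      # final class label of the paper
        return label, final[label]         # label and its probability value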
The above embodiments are preferred examples of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A paper making cause inspection and identification method based on deep learning is characterized by comprising the following steps:
step 1: different labels are assigned in advance to the paper of books produced with different printing modes; the paper of the same book belongs to the same class, one label is used per book, and the number of book classes is denoted S;
step 2: each class of book paper is collected under a specific acquisition condition C_x to obtain k texture images, yielding S × k texture images for the S classes of books;
step 3: image data enhancement in multiple modes is performed on each texture image collected in step 2; each texture image yields Sum enhanced images, so that the S × k texture images yield S × k × Sum enhanced images;
step 4: image texture features are extracted from the S × k × Sum enhanced images obtained in step 3 by convolutional neural network model training, obtaining the trained convolutional neural network Model_Nx under the specific acquisition condition C_x;
step 5: all S classes of books are collected, in the manner of step 2, under X different specific acquisition conditions C = {C_x | x = 1, 2, 3, ..., X}; the texture images are processed in the manner of steps 3 to 4 to obtain the convolutional neural network models Model_N = {Model_Nx | x = 1, 2, 3, ..., X} corresponding to the X different specific acquisition conditions;
step 6: for the paper to be identified, the collection of texture images under all X specific acquisition conditions C = {C_x | x = 1, 2, 3, ..., X} is completed in the manner of step 5, with A texture images collected under each specific acquisition condition C_x; image texture features are extracted as in step 4 and classified with the corresponding trained convolutional neural network models Model_N = {Model_Nx | x = 1, 2, 3, ..., X}, obtaining predicted labels and label probability values for the paper to be identified under all X specific acquisition conditions; finally, the label probability values are recalculated according to weights to obtain the final label of the paper to be identified, which serves as the final identification result.
2. The paper making cause inspection and identification method based on deep learning of claim 1, wherein:
a specific acquisition condition C_x refers to an acquisition condition that simultaneously satisfies a specific optical magnification and a specific light source environment.
3. The paper making cause inspection and identification method based on deep learning of claim 1, wherein:
the number of book classes S satisfies: S ≥ 78;
in step 2, k texture images are collected at k positions in the blank areas of each book class, with k ≥ 30.
4. The paper making cause inspection and identification method based on deep learning of claim 1, wherein:
the data enhancement processing performed on each texture image in step 3 comprises image scaling, image rotation, image flipping, image contrast enhancement, homomorphic filtering and image denoising.
5. The paper making cause inspection and identification method based on deep learning of claim 4, wherein:
for each texture image, the texture image is first read and represented in vector form, and data enhancement is then performed according to the following steps:
image scaling: the vector is scaled by factors of 0.75, 0.5, 1 and 2 in a cropping manner, outputting 4 vectors; the scaled vectors are restored to texture images and specifically cropped to obtain 4 enhanced images;
image rotation: the vector is rotated in equal steps of 15 degrees over the range 0-360 degrees, giving 24 vectors with different rotation angles; the rotated vectors are restored to texture images and specifically cropped to obtain 24 enhanced images;
image flipping: the vector is flipped up-down, left-right, about the main diagonal and about the anti-diagonal, giving 4 vectors; the flipped vectors are restored to texture images and specifically cropped to obtain 4 enhanced images;
image contrast enhancement: contrast enhancement is realized by histogram equalization, processing the vector so that its mapping range is [0, 255]; the contrast-enhanced vector is restored to a texture image and specifically cropped to obtain 1 enhanced image;
homomorphic filtering: the vector is processed by a homomorphic filtering method, with a filter function H(u, v) specified such that rH = 5 and rL = 0.5; the homomorphically filtered vector is restored to a texture image and specifically cropped to obtain 1 enhanced image;
image denoising: denoising is performed by any one of mean filtering, Gaussian filtering, bilateral filtering, guided filtering, the NLM operator, the BM3D operator, frequency-domain filtering, wavelet-domain filtering, P-M equation denoising and TV denoising; the resulting vector is restored to a texture image and specifically cropped to obtain 1 enhanced image.
6. The paper making cause inspection and identification method based on deep learning of claim 5, wherein:
the data enhancement processing of each texture image in step 3 further comprises performing the following steps (a) to (e) in sequence under each of the cropping conditions m × n = 300 × 300, m × n = 600 × 600, m × n = 300 × N and m × n = M × 600:
(a) the texture image is read and its length × width is expressed as M × N; taking the upper-left corner of the image as the starting point s_p with p = 1, the starting point s_1 is moved transversely with a step of 100, sequentially cropping images of length m and width n;
(b) the position 100 pixels below s_1 is taken as the new starting point s_p with p = 2, and s_2 is moved transversely with a step of 100, sequentially cropping images of length m and width n;
(c) the position 100 pixels below s_2 is taken as the new starting point s_p with p = 3, and s_3 is moved transversely with a step of 100, sequentially cropping images of length m and width n;
(d) the position 100 pixels below s_3 is taken as the new starting point s_p with p = 4, and s_4 is moved transversely with a step of 100, sequentially cropping images of length m and width n;
(e) steps (a) to (d) are repeated, each next starting point s_p lying 100 pixels below the previous starting point s_(p-1), until the distance from the starting point to the image edge no longer satisfies the m × n cropping condition;
cropping under the conditions m × n = 300 × 300, 600 × 600, 300 × N and M × 600 through steps (a) to (e) yields S_n1, S_n2, S_n3 and S_n4 new texture images respectively, and specifically cropping these new texture images gives S_n1 + S_n2 + S_n3 + S_n4 enhanced images;
the number Sum of images obtained by the image data enhancement processing of each texture image in step 3 satisfies: Sum = 4 + 24 + 4 + 1 + 1 + 1 + S_n1 + S_n2 + S_n3 + S_n4.
7. The paper making cause inspection and identification method based on deep learning of claim 5 or 6, wherein:
the specific crop takes the image center as the center of the cropped image, with a crop size of 300 × 300.
8. The paper making cause inspection and identification method based on deep learning of claim 1, wherein:
wherein, step 4 specifically includes:
step 4-1: corresponding labels are added to the S × k × Sum enhanced images obtained in step 3 to form a data set D, which is divided into a training set T = {T_i | i = 1, 2, 3, ..., S} and a validation set V = {V_i | i = 1, 2, 3, ..., S}; the training set T is used to train texture features through the convolutional neural network model to extract image texture features and obtain the weights of the model, while the validation set V is used to test the model; when the accuracy reaches the threshold Qacc or above, the trained convolutional neural network Model_local_x under the specific acquisition condition C_x is obtained;
step 4-2: the validation set V is used to test Model_local_x, and the S_L book classes with the best test results are selected, S_L < S; the S_L × k × Sum texture images collected for these S_L book classes are given their corresponding labels to form a data set D', which is divided into a training set T' = {T_i' | i = 1, 2, 3, ..., S_L} and a validation set V' = {V_i' | i = 1, 2, 3, ..., S_L}; Model_local_x is trained and tested again, and when the accuracy reaches the threshold Qacc or above, the trained convolutional neural network Model_Nx' under the specific acquisition condition C_x is obtained;
step 4-3: the convolutional neural network model is modified by adding a fully connected layer after the last layer, changing the activation function to sigmoid for multi-label classification, and fixing all preceding layers; the modified model is retrained and tested on the data set D, with the weights of Model_Nx' as its initial weights; the training set T is used to train texture features through the modified model to extract image texture features and obtain its weights, while the validation set V is used to test it; when the accuracy reaches the threshold Qacc or above, the trained convolutional neural network Model_Nx under the specific acquisition condition C_x is obtained.
9. The paper making cause inspection and identification method based on deep learning of claim 8, wherein:
wherein the threshold Qacc is 98%.
10. The paper making cause inspection and identification method based on deep learning of claim 1, wherein:
in step 6, the trained convolutional neural network Model_Nx is used to classify the image texture features of the paper to be identified, obtaining A predicted labels and label probability values, and each predicted label and its probability value carries a weight of 1/A.
CN202010245087.4A 2020-03-31 2020-03-31 Paper making cause inspection and identification method based on deep learning Active CN111444866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010245087.4A CN111444866B (en) 2020-03-31 2020-03-31 Paper making cause inspection and identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010245087.4A CN111444866B (en) 2020-03-31 2020-03-31 Paper making cause inspection and identification method based on deep learning

Publications (2)

Publication Number Publication Date
CN111444866A (en) 2020-07-24
CN111444866B CN111444866B (en) 2023-05-30

Family

ID=71650878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010245087.4A Active CN111444866B (en) 2020-03-31 2020-03-31 Paper making cause inspection and identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN111444866B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102214305A (en) * 2011-04-08 2011-10-12 大连理工大学 Method for taking evidence for source of printing paper sheet by using grain characteristic
CN103310526A (en) * 2012-03-12 2013-09-18 王洪群 Device for identification and anti-counterfeiting distinguishment of common paper and application method of device
US20160335526A1 (en) * 2014-01-06 2016-11-17 Hewlett-Packard Development Company, L.P. Paper Classification Based on Three-Dimensional Characteristics
CN105336038A (en) * 2015-08-31 2016-02-17 上海古鳌电子科技股份有限公司 Paper processing device, paper classifying device, and paper classifying system
CN108416774A (en) * 2018-03-08 2018-08-17 中山大学 A kind of fabric types recognition methods based on fine granularity neural network
CN108427969A (en) * 2018-03-27 2018-08-21 陕西科技大学 A kind of paper sheet defect sorting technique of Multiscale Morphological combination convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘桂雄 (Liu Guixiong) et al.: "Machine vision target detection algorithm based on deep learning and its application in bill detection", China Measurement & Test *
杨明月 (Yang Mingyue): "Research on texture image classification methods based on deep learning" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200966A (en) * 2020-09-28 2021-01-08 武汉科技大学 Identification method for RMB paper money forming mode

Also Published As

Publication number Publication date
CN111444866B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN109239102B (en) CNN-based flexible circuit board appearance defect detection method
CN108074231B (en) Magnetic sheet surface defect detection method based on convolutional neural network
CN110032938B (en) Tibetan recognition method and device and electronic equipment
CN111257341B (en) Underwater building crack detection method based on multi-scale features and stacked full convolution network
CN103034838B (en) A kind of special vehicle instrument type identification based on characteristics of image and scaling method
CN111310628B (en) Banknote forming mode checking and identifying method based on banknote printing pattern characteristics
CN108256493A (en) A kind of traffic scene character identification system and recognition methods based on Vehicular video
CN105809205B (en) A kind of classification method and its system of high spectrum image
CN112037219A (en) Metal surface defect detection method based on two-stage convolution neural network
CN110991439A (en) Method for extracting handwritten characters based on pixel-level multi-feature joint classification
CN113688821B (en) OCR text recognition method based on deep learning
CN113393438B (en) Resin lens defect detection method based on convolutional neural network
CN111179263A (en) Industrial image surface defect detection model, method, system and device
CN110751644A (en) Road surface crack detection method
CN111369526A (en) Multi-type old bridge crack identification method based on semi-supervised deep learning
Yingthawornsuk et al. Automatic Thai Coin Calculation System by Using SIFT
CN116245882A (en) Circuit board electronic element detection method and device and computer equipment
CN115100656A (en) Blank answer sheet identification method, system, storage medium and computer equipment
CN111444866A (en) Paper making cause inspection and identification method based on deep learning
CN112614113A (en) Strip steel defect detection method based on deep learning
CN112950566B (en) Windshield damage fault detection method
CN115512230A (en) Multi-scale fusion asphalt pavement crack identification method based on multi-head attention
CN112699898B (en) Image direction identification method based on multi-layer feature fusion
Premk et al. Automatic latent fingerprint segmentation using convolutional neural networks
CN113592850A (en) Defect detection method and device based on meta-learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Zhu Ziqi

Inventor after: Lu Qi

Inventor before: Zhu Ziqi

GR01 Patent grant
GR01 Patent grant