CN109636792A - Deep-learning-based lens defect detection method - Google Patents
Deep-learning-based lens defect detection method
- Publication number
- CN109636792A (application number CN201811533354.7A)
- Authority
- CN
- China
- Prior art keywords
- layer
- network structure
- file
- image
- layers
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Abstract
A deep-learning-based lens defect detection method is disclosed. An offline image gallery of training samples is first collected; on the host computer, a Caffe model and the ARM-NN SDK toolkit are used to generate network structures corresponding to the image features in the gallery, and different protobuf dynamic link libraries are generated from those network structures. During actual detection, the slave computer directly calls the different protobuf dynamic link libraries, analyzes the acquired images in real time, and judges whether the lens exhibits defect features, thereby realizing automatic detection of the lens image under test. Classifying image features into different categories reduces the false detection rate and allows the entire lens image to be inspected; moreover, because the analysis is handled on the slave computer, the cost of equipment and wiring is lower than with methods that transmit images to a host computer for analysis.
Description
Technical field
The present invention relates to the field of visual inspection, and in particular to a deep-learning-based lens defect detection method.
Background technique
Lenses are widely used in daily life and in industry, so intelligent detection of lens defects has very broad application prospects.
Take the inspection of laser-welding protective lenses in industrial production as an example. Compared with conventional welding techniques, laser welding offers unrivaled advantages in precision, efficiency, reliability, and automation. As the demands of modern automobile manufacturing grow, dependence on laser welding also keeps increasing. After a certain period of welding, the transparent protective lens of the welder is prone to cracking from the splash of hot welding slag, which degrades weld quality and may even damage the welder or the robot. Effective detection of the laser-welding protective lens is therefore essential.
At present there are two inspection schemes for the lens surface. The first is manual inspection by workers downstream on the line: the protective lens of the laser welding head is checked by hand to prevent welding accidents caused by lens defects. But workers discover welding problems with a certain lag; on a production line they typically perform a full check of the body-in-white only after laser welding, so if the lens is damaged mid-run, a large number of badly welded bodies may need repair welding, reducing automobile production efficiency and raising manufacturing cost. The second existing scheme uses a conventional lens-defect detection sensor: a camera acquires images of the real-time state of the laser-welding protective lens, and the images are used to judge whether the lens has defects. This method is highly automated, has strong real-time performance, and is widely welcomed. Its image-processing procedure is as follows: first extract the circular field of view of the protective lens from the whole image, then judge whether the dead-pixel area is greater than or equal to a user-defined dead-pixel count. If the dead-pixel area is below the user-defined threshold, the protective lens is reported as normal; if it is greater than or equal to the threshold, the dead-pixel contour is marked and the protective lens is reported as abnormal.
The above method only recognizes pixel gray level and count and cannot determine whether a dead pixel is real. At an actual welding site, the condition of the protective lens is more complicated: besides dead pixels, shadows, blur, and offset may also appear.
If the lighting of the welding line varies greatly, reflections and vibration easily aggravate the edge shadows of the protective lens, and a detection sensor using a conventional image-processing method will produce false detections: when shadows intensify, dark pixels in the field of view aggregate to a certain degree and are mistaken by the system for dead pixels, triggering false alarms. Meanwhile, because the existing method crops a circular field of view, the region beyond the circle is not inspected, which reduces the measurement range and wastes valid data.
Summary of the invention
To solve the above technical problems, the present invention provides a deep-learning-based lens defect detection method. By training network models through deep learning, the method inspects the entire laser-welding protective lens image, effectively recognizes shadow conditions, judges image quality, and reduces the false detection rate.
The technical solution is as follows:
A deep-learning-based lens defect detection method, comprising the following steps:
Step 1) Store no fewer than 1000 lens images under test as an offline gallery, and import the offline gallery into an lmdb database. Following the Caffe workflow, classify each lens image by image feature and attach the corresponding feature label, establishing the correspondence between lens images and feature labels; the number of feature-label types is denoted n.
Copy all lens images with the same feature label into the same sub-classification gallery.
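The grouping into sub-classification galleries described in step 1) can be sketched as follows. This is an illustrative stdlib-only Python snippet, not part of the patented Caffe/lmdb pipeline; the function and label names are hypothetical.

```python
from collections import defaultdict

def group_by_label(labeled_images):
    """Build one sub-classification gallery per feature label.

    labeled_images: iterable of (image_id, feature_label) pairs, e.g. the
    label assignments made when classifying the offline gallery.
    Returns a dict mapping each label to the list of image ids carrying it.
    """
    galleries = defaultdict(list)
    for image_id, label in labeled_images:
        galleries[label].append(image_id)
    return dict(galleries)
```

With the four labels of the preferred embodiment (dead pixels, shadows, blur, offset), this yields n = 4 sub-galleries.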
Step 2) Perform the mean operation in turn over all lens images in each sub-classification gallery; each sub-classification gallery generates one corresponding image-mean binary file.
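The per-pixel mean of step 2) might be realized as in the following toy Python sketch, which assumes small equally sized grayscale images stored as nested lists. In practice a tool such as Caffe's compute_image_mean would perform the equivalent over an lmdb gallery; this version is for illustration only.

```python
def mean_image(images):
    """Per-pixel mean over equally sized grayscale images.

    images: list of images, each a list of rows of pixel intensities.
    Returns one image of the same shape whose every pixel is the mean
    of that pixel across the gallery.
    """
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n
             for c in range(cols)]
            for r in range(rows)]
```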
Step 3) According to a predefined Makefile compile rule, input the image-mean binary files in sequence into the ARM-NN SDK toolkit; the toolkit executes make according to the Makefile rule, and each image-mean binary file generates one corresponding group of intermediate files containing compilable files.
Step 4) According to the configuration information of the prototxt file preset for the ARM-NN SDK toolkit, generate a network structure from each group of intermediate files obtained in step 3); each network structure comprises convolutional layers, pooling layers, activation layers, fully connected layers, a softmax layer, drop layers, and an output layer connected according to certain rules.
There are n groups of intermediate files, hence n corresponding network structures, denoted DLi, i = 1, 2, 3, ..., n.
Step 5) Into each network structure DLi, input all the lens images in the sub-classification gallery corresponding to that network structure.
According to the configuration information of the preset prototxt file, iterate the network structure; the total iteration count S takes a value of 8000 to 20000, and every q iterations the learning rate is reduced, until the total iteration count is completed.
After the iterations complete, obtain the accuracy of the trained network structure DLi' from the verification information in the softmax layer.
When the accuracy reaches 90%, store the iterated network structure DLi' in a binary protobuf file; if the accuracy is below 90%, re-collect the offline gallery and return to step 1).
Iterate the n network structures respectively, obtaining n binary protobuf files.
Alternatively, iterate by the following method:
According to the configuration information of the preset prototxt file, iterate the network structure with a total iteration count S of 8000 to 20000. Every q iterations, obtain the accuracy of the current network structure from the verification information in the softmax layer and reduce the learning rate, then continue iterating. When the accuracy reaches 90%, iteration ends and the parameters of the iterated network structure DLi' are stored in a binary protobuf file; if the accuracy is still below 90% after the total iteration count completes, re-collect the offline gallery and return to step 1).
Iterate the n network structures respectively, obtaining n binary protobuf files.
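The alternative iteration scheme — poll the accuracy every q iterations, halve the learning rate, and stop early once the 90% target is reached — can be sketched in Python as follows. `step_fn` is a hypothetical stand-in for one Caffe training iteration that reports the current accuracy; the real training runs inside the Caffe/ARM-NN toolchain.

```python
def train_with_early_stop(step_fn, s_total=10000, q=1000,
                          lr0=0.01, target=0.90):
    """Every q iterations: check accuracy; stop early if it meets the
    target (the model would then be stored as a protobuf file),
    otherwise halve the learning rate and continue. Returns the
    iteration reached, the final learning rate, and the final accuracy.
    """
    lr = lr0
    for it in range(1, s_total + 1):
        acc = step_fn(lr)          # one training iteration
        if it % q == 0:
            if acc >= target:
                return it, lr, acc  # early stop: store protobuf file
            lr *= 0.5               # reduce learning rate and continue
    return s_total, lr, acc         # target missed: re-collect gallery
```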
Step 6) Load the n binary protobuf files into the ARM-NN SDK toolkit, and change the software compiler tool g++ in the Makefile compile rule to the g++ corresponding to the target microprocessor architecture; the compiler tool g++ compiles files into executable programs.
The target processor architecture is that of the slave computer; common processor architectures include x86, ARM, and PowerPC. In the present invention, the processor architecture used for detection is ARMv7 32-bit.
The g++ builds for different microprocessor architectures are not compatible: compiling an ARM executable requires the ARM g++ tool.
Following the remaining Makefile compile rules from step 3), start cross-compilation and generate the n corresponding protobuf dynamic link libraries.
The protobuf dynamic link libraries can be called directly from a C++ program.
Step 7) When running, the slave-computer main program loads the protobuf dynamic link libraries, processes the lens images acquired in real time, and outputs detection results corresponding to the protective-lens image features.
Further, in the ARM-NN SDK toolkit, the Makefile compile rule includes defining the path of the project source code, the path of generated files, and the software compiler tool g++; specifying the target microprocessor architecture (ARCH) as "arm" and the toolchain version (CC) as "arm-linux-gnueabihf-gcc"; and pulling in the source files, Linux kernel version, and link libraries required by protobuf.
Further, the prototxt file is used to configure the model parameters and is configured and compiled before step 3).
The configuration information of the prototxt file includes:
the number and connection relationships of the convolutional layers, pooling layers, activation layers, fully connected layers, softmax layer, drop layers, and output layer of each network structure in step 4) — specifically, the definition of each layer's name, input/output, and obtained variable values; taking a convolutional layer as an example, its name and characteristics, input and output, obtained variable values, convolution-kernel size, weight matrix, and similar information are defined;
and the settings of the total iteration count S, the interval q, and each learning rate for every network structure in step 5).
Preferably, the prototxt file sets, for each network structure in step 5): a total iteration count S of 10000, an interval q of 1000, and the learning-rate schedule.
The appropriate learning rate depends on picture complexity and sample size: too small a learning rate makes convergence very slow, while too large a learning rate may fail to reach the optimum.
Empirically, a smaller learning rate is usually chosen to guarantee system stability. The initial learning rate is selected in the range 0.01 to 0.8, preferably 0.01; every q iterations, the learning rate is reduced to 50% of its previous value.
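Under the preferred settings, the learning rate at any iteration follows a simple step-decay formula — start at 0.01 and halve every q = 1000 iterations over S = 10000 in total. The following Python one-liner is an illustrative restatement of that schedule, not code from the patent.

```python
def learning_rate(it, lr0=0.01, q=1000):
    """Learning rate after `it` iterations: halved once per completed
    block of q iterations (step decay with factor 0.5)."""
    return lr0 * (0.5 ** (it // q))
```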
Further, four feature labels are set in step 1): with/without dead pixels, with/without shadows, blurred or not, offset or not.
The offline gallery is divided into four sub-classification galleries according to the four different feature labels.
Accordingly, four network structures are generated in step 4), denoted respectively: DL1 for judging dead pixels, DL2 for judging shadows, DL3 for judging blur, and DL4 for judging offset.
Preferably, the prototxt file configures each network structure in step 4) as follows:
The network structure DL1 for judging dead pixels has 16 layers in total, comprising 1 input layer, 3 convolutional layers, 3 pooling layers, 4 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer, and 1 output layer, connected in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer.
The network structure DL2 for judging shadows has 16 layers in total, with the same composition and connection order as DL1.
The network structure DL3 for judging blur has 13 layers in total, comprising 1 input layer, 2 convolutional layers, 2 pooling layers, 3 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer, and 1 output layer, connected in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer.
The network structure DL4 for judging offset has 13 layers in total, with the same composition and connection order as DL3.
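As a quick consistency check of the layer counts, the architectures can be written down symbolically. This Python sketch uses shorthand names ("conv", "fc", ...) rather than actual Caffe layer types, and is purely illustrative.

```python
def conv_block(n):
    """n repetitions of convolution -> activation -> pooling."""
    return ["conv", "activation", "pool"] * n

def classifier_head():
    """fc -> activation -> drop -> fc -> softmax -> output."""
    return ["fc", "activation", "drop", "fc", "softmax", "output"]

def make_architecture(n_conv_blocks):
    """Layer sequence of DL1/DL2 (3 conv blocks, 16 layers) and
    DL3/DL4 (2 conv blocks, 13 layers) as listed above."""
    return ["input"] + conv_block(n_conv_blocks) + classifier_head()
```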
The convolutional layer extracts features mainly through convolution kernels; each feature value is the sum of products of the kernel template and the corresponding pixels of the original image.
The pooling layer reduces the spatial resolution of the convolutional layer mainly by down-sampling.
The activation layer performs an activation operation on the input data: if a data element satisfies the condition, it is activated and passed to the next layer; otherwise it is not passed on.
The drop layer randomly suppresses some neurons, putting them in an inactive state.
The softmax layer mainly normalizes the data after the fully connected layer so that it lies in the range [0, 1]; from it, the accuracy of the current network structure can be derived after each iteration.
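The per-layer operations described above can be illustrated with minimal stdlib-only Python versions. These toy functions work on nested lists and sketch the ideas only; they are not the patented implementation, which runs inside Caffe/ARM-NN.

```python
import math

def conv2d_valid(img, kernel):
    """'Valid' convolution: each output value is the sum of products of
    the kernel template and the corresponding image pixels."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(kernel[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def max_pool2(img):
    """2x2 max pooling: halves spatial resolution by down-sampling."""
    return [[max(img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1])
             for c in range(0, len(img[0]) - 1, 2)]
            for r in range(0, len(img) - 1, 2)]

def relu(x):
    """Activation: pass the element on only if it is positive."""
    return x if x > 0 else 0

def softmax(xs):
    """Normalize a vector into [0, 1] with unit sum."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```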
Further, in step 7) the lens images acquired in real time are first resized to a uniform size, so that the resolution of the real-time test images finally analyzed matches the images in the offline gallery.
The real-time test image is loaded into the slave-computer main program, and the main program calls the corresponding protobuf dynamic link library.
The detection results finally output in step 7) are: image with/without dead pixels, image with/without shadows, image blurred or not, image offset or not.
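One simple way to realize the uniform resizing of step 7) is nearest-neighbour scaling. The patent does not specify the resampling algorithm, so the following Python sketch is only an illustrative assumption.

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2D grayscale image (list of rows)
    to out_h x out_w, so a real-time image can match the resolution of
    the offline gallery before analysis."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```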
Further, the pictures in the offline gallery are images of the laser welding gun's protective lens acquired by a camera while the robot drives the welding gun through the laser-welding process.
Advantages:
The method of the present invention uses the ARM-NN SDK toolkit and the Caffe deep-learning framework to train, on the host computer, network structures targeted at the features of the lens under test, generating dynamic link libraries that the slave computer can call. The slave computer calls these dynamic link libraries, analyzes and processes the entire lens image under test, and identifies the feature information in the image, realizing automated detection of the lens image. Classifying image features into different categories reduces the false detection rate and enables detection over the whole lens image; moreover, because the analysis is handled on the slave computer, the cost of equipment and wiring is lower than with methods that transmit images to a host computer for analysis.
Brief description of the drawings
Fig. 1 is a flow diagram of an embodiment of the present invention;
Fig. 2 is a detection-result image of a laser-welding lens obtained with an existing method;
Fig. 3 is a detection-result image of a laser-welding lens obtained with the method of the present invention.
Specific embodiment
The technical solution of the present invention is described in detail below in conjunction with an example of defect detection on a laser-welding protective lens.
A deep-learning-based lens defect detection method: protobuf dynamic link libraries are generated on the host computer using the ARM-NN SDK toolkit, and the slave computer calls the protobuf dynamic link libraries to judge whether the protective lens exhibits defect features.
The method comprises the following steps. While the robot drives the welding gun through laser-welding operations, the camera acquires one picture of the laser welding gun's protective lens for each welding operation performed; 2000 such pictures are acquired and stored as the offline gallery.
Step 1) Classify the offline gallery:
Store the offline gallery in an lmdb database and, following the Caffe workflow, classify the offline gallery by image feature, attaching to each picture the feature label corresponding to its image features.
The image features/feature labels are: with/without dead pixels, with/without shadows, blurred or not, offset or not; n = 4.
The offline gallery is divided into four sub-classification galleries according to the different feature labels, and all images with the same feature label are copied into one sub-classification gallery.
Configure the prototxt file and compile it.
The configuration information of the prototxt file includes the following. Each network structure is configured as:
The network structure DL1 for judging dead pixels has 16 layers in total, formed by connecting 1 input layer, 3 convolutional layers, 3 pooling layers, 4 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer, and 1 output layer in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer.
The network structure DL2 for judging shadows has 16 layers in total, with the same composition and connection order as DL1.
The network structure DL3 for judging blur has 13 layers in total, formed by connecting 1 input layer, 2 convolutional layers, 2 pooling layers, 3 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer, and 1 output layer in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer.
The network structure DL4 for judging offset has 13 layers in total, with the same composition and connection order as DL3.
For each network structure, the total iteration count S is 10000, the interval q is 1000, and the initial learning rate is 0.01.
Step 2) Generate the mean binary files:
Perform the mean operation in turn on every protective-lens feature image in the different sub-classification galleries, generating four image-mean binary files corresponding to the four sub-classification galleries.
Each image-mean binary file contains the location information of the image features in every protective-lens image of the corresponding sub-classification gallery.
Step 3) Generate the compilable intermediate files:
Define the Makefile compile rule: in the ARM-NN SDK toolkit, the Makefile compile rule includes defining the path of the project source code, the path of generated files, and the software compiler tool g++; specifying the target microprocessor architecture (ARCH) as "arm" and the toolchain version (CC) as "arm-linux-gnueabihf-gcc"; and pulling in the source files, Linux kernel version, and link libraries required by protobuf.
According to the defined Makefile compile rule, input the image-mean binary files in sequence into the ARM-NN SDK toolkit; the toolkit executes make according to the Makefile rule and generates four corresponding groups of intermediate files.
Each group of intermediate files contains two compilable files, with suffixes *.pb.cc and *.pb.h; these two files are compiled in subsequent steps.
Step 4) Build the models:
According to the configuration information of the preset prototxt file, run the script tool over the four groups of intermediate files in turn, generating four corresponding network structures, denoted respectively: DL1 for judging dead pixels, DL2 for judging shadows, DL3 for judging blur, and DL4 for judging offset.
Each network structure comprises sequentially connected convolutional layers, pooling layers, activation layers, fully connected layers, a softmax layer, drop layers, and an output layer.
Step 5) Model training:
Input the protective-lens images of the dead-pixel sub-classification gallery into the network structure DL1 for judging dead pixels, the images of the blur sub-classification gallery into DL3 for judging blur, the images of the shadow sub-classification gallery into DL2 for judging shadows, and the images of the offset sub-classification gallery into DL4 for judging offset.
According to the configuration information of the preset prototxt file, iterate each network structure; one iteration traverses all input images inside the network structure. Continue until 10000 iterations are complete, then obtain the accuracy of the network structure from the verification information in the softmax layer.
Every 1000 iterations, the learning rate is reduced to 50% of its previous value.
When the accuracy reaches 90%, the parameters of the iterated network structure are stored in a binary protobuf file.
The four network structures are iterated respectively, yielding four binary protobuf files.
If the accuracy is below 90%, re-collect the offline gallery and return to step 1).
Step 6) Generate the protobuf dynamic link libraries:
Change the g++ in the original Makefile to the g++ for the target microprocessor architecture (the software compiler tool g++ compiles files into executable programs), and specify the link path of the dynamic-link-library files.
Load the four binary protobuf files into the ARM-NN SDK toolkit and start cross-compilation, generating four 32-bit protobuf dynamic link libraries for the ARMv7-A architecture.
The protobuf dynamic link libraries can be called directly from a C++ program.
Step 7) The slave computer judges the image features:
The slave-computer main program loads the protobuf dynamic link libraries and processes the protective-lens images acquired in real time.
The acquired lens images are first resized to a uniform size, so that the resolution of the real-time protective-lens images finally analyzed matches the images in the offline gallery.
The real-time protective-lens image is loaded into the slave-computer main program, which calls the corresponding protobuf dynamic link library, analyzes the input image, and outputs the corresponding detection results: with/without dead pixels, with/without shadows, blurred or not, offset or not.
Fig. 2 shows the detection result of a laser-welding lens obtained with an existing method: in the figure, label 1 is the detection edge, label 101 is a shadow region, and label 102 is a circled dead pixel. As can be seen, the existing method identifies the shadowed portion of the image as dead pixels, so the judgment is wrong.
Fig. 3 shows the detection result of a laser-welding lens obtained with the method of the present invention: in the figure, label 2 is the detection edge, label 201 is a shadow region, and label 202 is a circled dead pixel. As can be seen, the method provided by the invention does not identify the shadowed portion of the image as dead pixels, and the judgment is accurate.
The present invention realizes defect recognition and detection for lenses based on deep learning. The embodiment describes the detection process for a laser-welding protective lens, but the invention is not limited to this specific process; the method can also be used to detect surface defects of flat glass, mobile-phone screens, and other mirror-like surfaces.
Claims (8)
1. A lens defect detection method based on deep learning, characterized by comprising the following steps:
Step 1), storing no fewer than 1000 lens images to be tested as an offline image library, and inputting the offline image library into an lmdb database; according to the caffe model, each lens image to be tested is classified by image feature and given the corresponding feature tag, establishing the correspondence between lens images to be tested and feature tags; the number of feature tag types is denoted as n;
all lens images to be tested carrying the same feature tag are copied into the same sub-classification image library;
Step 2), performing a mean operation in turn on all lens images to be tested in each sub-classification image library; each sub-classification image library correspondingly generates one image-mean binary file;
Step 3), according to a predefined Makefile compiling rule, the image-mean binary files are input in turn into the ARM-NN SDK kit; the ARM-NN SDK kit executes make compilation according to the Makefile compiling rule, and each image-mean binary file correspondingly generates a group of intermediate files containing compilable files;
Step 4), according to the configuration information of the prototxt file preset for the ARM-NN SDK kit, the intermediate files obtained in step 3) correspondingly generate network structures; each network structure includes convolutional layers, pooling layers, activation layers, fully connected layers, softmax layers, drop layers and an output layer connected according to certain rules;
there are n intermediate files and correspondingly n network structures, denoted DLi, i = 1, 2, 3 … n;
Step 5), inputting into each network structure DLi all lens images to be tested in the sub-classification image library corresponding to that network structure;
according to the configuration information of the preset prototxt file, the network structure is iterated; the total number of iterations S takes a value of 8000–20000, and the learning rate is reduced every q iterations until the total number of iterations is completed;
after the iterations are completed, the accuracy of the network structure DLi′ is obtained according to the verification information in the softmax layer;
when the accuracy reaches 90%, the iterated network structure DLi′ is stored in a binary protobuf file; if the accuracy is < 90%, the offline image library is re-acquired and the process returns to step 1);
the n network structures are iterated respectively, obtaining n binary protobuf files;
Alternatively,
according to the configuration information of the preset prototxt file, the network structure is iterated; the total number of iterations S takes a value of 8000–20000; every q iterations, the accuracy of the current network structure is obtained according to the verification information in the softmax layer, the learning rate is reduced, and iteration continues; when the accuracy reaches 90%, iteration ends and the parameters of the iterated network structure DLi′ are stored in a binary protobuf file; if the accuracy is still < 90% after the total number of iterations is completed, the offline image library is re-acquired and the process returns to step 1);
the n network structures are iterated respectively, obtaining n binary protobuf files;
Step 6), loading the n binary protobuf files into the ARM-NN SDK kit, and changing the software compiling tool g++ in the Makefile compiling rule to the g++ of the target microprocessor architecture; the software compiling tool g++ is used to compile files into executable programs;
according to the other Makefile compiling rules in step 3), cross-compilation is started, correspondingly generating n protobuf dynamic link libraries;
the protobuf dynamic link libraries can be called directly by a C++ program;
Step 7), when running, the slave computer main program loads the protobuf dynamic link libraries, processes the lens images to be tested acquired in real time, and outputs detection results corresponding to the protective lens image features.
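The iteration control of step 5 (second variant: check accuracy every q iterations, reduce the learning rate, stop early once accuracy reaches 90%) can be sketched as below. `step_accuracy` is a hypothetical stand-in for running q Caffe iterations and reading the softmax-layer verification accuracy, and halving is just one possible "reduce learning rate" rule; both are assumptions, not the patent's literal implementation.

```python
def iterate_network(step_accuracy, S=8000, q=1000, lr0=0.1, target=0.90):
    # Step-5 control flow (second variant): every q iterations the
    # accuracy is read from the softmax layer and the learning rate is
    # reduced; once accuracy reaches 90% the iterated parameters are
    # stored, otherwise the offline library must be re-acquired.
    lr = lr0
    for it in range(q, S + 1, q):
        acc = step_accuracy(it)
        if acc >= target:
            return ("store_protobuf", it)   # save DLi' to a protobuf file
        lr *= 0.5                           # reduce learning rate, continue
    return ("reacquire_library", S)         # accuracy < 90%: back to step 1
```

For example, a simulated accuracy curve that only reaches 90% at the final iteration ends with the parameters stored, while one that plateaus below 90% sends the process back to step 1.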
2. The lens defect detection method based on deep learning according to claim 1, characterized in that: in the ARM-NN SDK kit, the Makefile compiling rule includes defining the path of the project source code, the path of generated files, and the software compiling tool g++; the target microprocessor architecture is specified as "arm", the compile toolchain version is "arm-linux-gnueabihf-gcc", and the required source files, protobuf and linked libraries, and linux kernel version are incorporated.
3. The lens defect detection method based on deep learning according to claim 1, characterized in that: the prototxt file is used to configure model parameters;
the configuration information of the prototxt file includes:
the setting of the number of layers and connection relationships of the convolutional layers, pooling layers, activation layers, fully connected layers, softmax layers, drop layers and output layer in each network structure in step 4);
the setting of the total number of iterations S, the number q and each learning rate for each network structure in step 5).
4. The lens defect detection method based on deep learning according to claim 3, characterized in that:
in the prototxt file, each network structure in step 5) is set as follows: the total number of iterations S is 10000, the number q is 1000, and each learning rate is set as follows: the initial learning rate is selected in the range 0.01–0.8, and every q iterations the learning rate is reduced to 50% of its previous value.
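Under claim 4's settings the learning rate follows a simple step schedule. The sketch below assumes that "reduced to 50%" compounds at every q-iteration boundary, and picks lr0 = 0.1 arbitrarily from the stated 0.01–0.8 range; both are assumptions for illustration.

```python
def learning_rate(iteration, lr0=0.1, q=1000):
    # Claim-4 schedule: initial value chosen in [0.01, 0.8], reduced to
    # 50% of its previous value after every q iterations
    # (lr0 = 0.1 is an illustrative choice within the stated range).
    assert 0.01 <= lr0 <= 0.8
    return lr0 * 0.5 ** (iteration // q)
```

So with lr0 = 0.1, iterations 0–999 train at 0.1, iterations 1000–1999 at 0.05, and so on until the total of S = 10000 iterations.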
5. The lens defect detection method based on deep learning according to claim 1, characterized in that: the lens is a laser-welding protective lens, and 4 feature tags are set in step 1), comprising: with/without bad points, with/without shadows, blurred or not, offset or not;
the offline image library is classified into 4 sub-classification image libraries according to the 4 different feature tags;
4 network structures are correspondingly generated in step 4), denoted respectively: the network structure DL1 for judging with/without bad points, the network structure DL2 for judging with/without shadows, the network structure DL3 for judging whether blurred, and the network structure DL4 for judging whether offset.
6. The lens defect detection method based on deep learning according to claim 5, characterized in that:
the prototxt file configures each network structure in step 4) as follows:
the network structure DL1 for judging with/without bad points has 16 layers in total, including 1 input layer, 3 convolutional layers, 3 pooling layers, 4 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer and 1 output layer, connected in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer;
the network structure DL2 for judging with/without shadows has 16 layers in total, including 1 input layer, 3 convolutional layers, 3 pooling layers, 4 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer and 1 output layer, connected in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer;
the network structure DL3 for judging whether blurred has 13 layers in total, including 1 input layer, 2 convolutional layers, 2 pooling layers, 3 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer and 1 output layer, connected in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer;
the network structure DL4 for judging whether offset has 13 layers in total, including 1 input layer, 2 convolutional layers, 2 pooling layers, 3 activation layers, 2 fully connected layers, 1 softmax layer, 1 drop layer and 1 output layer, connected in the order: input layer - convolutional layer - activation layer - pooling layer - convolutional layer - activation layer - pooling layer - fully connected layer - activation layer - drop layer - fully connected layer - softmax layer - output layer.
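The layer sequences recited in claim 6 can be written down compactly and their totals checked; this is only a bookkeeping sketch of the claimed layer counts, not an executable Caffe network definition.

```python
# Shared building blocks of claim 6's four networks.
BLOCK = ["conv", "act", "pool"]                      # conv-act-pool unit
HEAD = ["fc", "act", "drop", "fc", "softmax", "output"]

DL1 = DL2 = ["input"] + 3 * BLOCK + HEAD             # 16 layers each
DL3 = DL4 = ["input"] + 2 * BLOCK + HEAD             # 13 layers each
```

DL1/DL2 differ from DL3/DL4 only in having three conv-act-pool units rather than two; the fully-connected head is identical across all four.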
7. The lens defect detection method based on deep learning according to claim 5 or 6, characterized in that: when carrying out step 7), the lens images to be tested acquired in real time are first resized to a uniform size, so that the resolution of the real-time test image finally analyzed is consistent with the images in the offline image library;
the real-time test image is loaded into the slave computer main program, and the main program calls the corresponding protobuf dynamic link library;
the detection results finally output in step 7) are: image with/without bad points, image with/without shadows, blurred image or not, offset image or not.
8. The lens defect detection method based on deep learning according to claim 5, characterized in that: the pictures in the offline image library are images of the laser welding gun's protective lens acquired by a camera during the laser welding process in which a robot drives the welding gun.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811533354.7A CN109636792B (en) | 2018-12-14 | 2018-12-14 | Lens defect detection method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109636792A true CN109636792A (en) | 2019-04-16 |
CN109636792B CN109636792B (en) | 2020-05-22 |
Family
ID=66073994
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017040600A (en) * | 2015-08-21 | 2017-02-23 | キヤノン株式会社 | Inspection method, inspection device, image processor, program and record medium |
CN107103235A (en) * | 2017-02-27 | 2017-08-29 | 广东工业大学 | A kind of Android malware detection method based on convolutional neural networks |
CN107292333A (en) * | 2017-06-05 | 2017-10-24 | 浙江工业大学 | A kind of rapid image categorization method based on deep learning |
CN107643296A (en) * | 2017-07-21 | 2018-01-30 | 易思维(天津)科技有限公司 | Defect detection method and device for laser welding protective lens on automobile production line |
CN108631727A (en) * | 2018-03-26 | 2018-10-09 | 河北工业大学 | A kind of solar panel defect identification method based on convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder |
Address after: Room 495, building 3, 1197 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province 310051 Patentee after: Yi Si Si (Hangzhou) Technology Co.,Ltd. Address before: Room 495, building 3, 1197 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province 310051 Patentee before: ISVISION (HANGZHOU) TECHNOLOGY Co.,Ltd. |