CN111192237B - Deep learning-based glue spreading detection system and method - Google Patents

Deep learning-based glue spreading detection system and method

Info

Publication number
CN111192237B
Authority
CN
China
Prior art keywords: layer, convolution, image, gluing, representing
Prior art date
Legal status: Active
Application number
CN201911292122.1A
Other languages
Chinese (zh)
Other versions
CN111192237A (en)
Inventor
唐朝伟
温浩田
阮帅
黄宝进
冯鑫鑫
刘洪宾
汤东
Current Assignee: Chongqing University
Original Assignee: Chongqing University
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201911292122.1A priority Critical patent/CN111192237B/en
Publication of CN111192237A publication Critical patent/CN111192237A/en
Application granted granted Critical
Publication of CN111192237B publication Critical patent/CN111192237B/en

Classifications

    • G06T 7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06N 3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 2207/20084 — Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20088 — Special algorithmic details; trinocular vision calculations; trifocal tensor
    • G06T 2207/30156 — Subject of image; industrial image inspection; vehicle coating
    • Y02P 90/30 — Climate change mitigation technologies in the production or processing of goods; computing systems specially adapted for manufacturing


Abstract

The invention discloses a deep-learning-based gluing detection system and method, comprising the following steps: constructing an initial deep residual network model, which has a five-layer structure comprising an input layer, a convolution layer, a residual layer, a fully connected layer and a joint loss function layer; inputting training samples into the deep residual network model for training, thereby obtaining a gluing detection model; acquiring an original gluing image and preprocessing it to obtain the gluing image to be detected; and inputting the gluing image to be detected into the gluing detection model, obtaining the detection score of the gluing image, and judging from the score whether the gluing is qualified. By constructing the deep residual network model, the invention automatically detects car-window gluing in the gluing workshop, judges gluing quality quickly and accurately, and uploads the detection results in real time for timely handling, improving the automation level and production efficiency of the gluing workshop.

Description

Deep learning-based glue spreading detection system and method
Technical Field
The invention relates to the technical field of image processing, and in particular to a deep-learning-based glue spreading detection system and method.
Background
Window glass is an important component of the vehicle body. The window structure is usually a closed curved surface, and a sealant connects the body's window frame to the window glass. The sealant seals and cushions, preventing the window glass from being damaged when the window frame deforms under body stress.
In modern industry, gluing, an important part of automobile production, has gradually shifted from manual to robotic application owing to its harsh working environment, high labor intensity, and demanding requirements on motion accuracy and stability; automated gluing is becoming the trend.
Gluing a car window means that a robot carrying a glue gun applies a complete ring of sealant along the edge of the glass. During robotic gluing, factors such as robot trajectory error, glass positioning error, glass dimensional deformation and fixture clamping error can cause glue breaks, which hinder the subsequent assembly of the window glass and require glue replenishment.
Traditional methods for inspecting window gluing quality are manual inspection and image-processing detection algorithms. In manual inspection, quality inspectors judge glue breaks by eye; this is uncertain, cannot meet the uniform-quality standard of modern production, is inefficient, and restricts the automation level of the production line. Traditional image-processing algorithms mainly build a template from the shape and color features of the glue and match it against the glue to be inspected; they demand controlled illumination, are strongly affected by external factors, cannot guarantee detection efficiency, and generalize poorly across glass shapes.
With the continuing spread of industrial automation and growing competition in the automobile industry, traditional glue quality inspection cannot meet the actual production needs of enterprises, so a high-accuracy algorithm is needed to monitor gluing results and improve the production efficiency of automobile enterprises.
Disclosure of Invention
Aiming at the low accuracy of glue-break detection for car-window gluing in the prior art, the invention provides a deep-learning-based gluing detection system and method.
In order to achieve the above object, the present invention provides the following technical solutions:
A deep-learning-based glue spreading detection method specifically comprises the following steps:
S1: construct an initial deep residual network model:
the deep residual network model has a five-layer structure comprising an input layer, a convolution layer, a residual layer, a fully connected layer and a joint loss function layer;
S2: input training samples into the deep residual network model for training, thereby obtaining a gluing detection model;
S3: acquire an original gluing image and preprocess it to obtain the gluing image to be detected;
S4: input the gluing image to be detected into the gluing detection model, obtain the detection score of the gluing image, and judge the state of the gluing image to be detected according to the score.
Preferably, in S1, the residual layer comprises 4 residual sublayers, wherein:
the first residual sublayer comprises three groups of residual units, each group comprising two 3×3 convolution layers with 64 filters and stride 1; the second residual sublayer comprises four groups of residual units, each group comprising two 3×3 convolution layers with 128 filters and stride 1; the third residual sublayer comprises six groups of residual units, each group comprising two 3×3 convolution layers with 256 filters and stride 1; the fourth residual sublayer comprises three groups of residual units, each group comprising two 3×3 convolution layers with 512 filters and stride 1.
Preferably, the fully connected layer comprises a first fully connected layer of 1000 dimensions and a second fully connected layer of 2 dimensions.
Preferably, the joint loss function layer is configured to prevent overfitting of the deep residual network model, its expression being:

$$\mathrm{Loss}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m}\hat{y}_i^{(j)}\log\frac{e^{h(j)(x_i)}}{\sum_{j'=1}^{m}e^{h(j')(x_i)}}+\lambda\sum_{k}w_k^{2}\qquad(1)$$

In formula (1), Loss denotes the joint loss function output, n the number of training samples, m the number of neurons in the fully connected layer, i the index of the i-th training sample and j the index of the j-th neuron, x_i the i-th training sample, ŷ_i^(j) the label of the i-th training sample with respect to the j-th class, h(j) the output value of the j-th neuron, e^{h(j)(x_i)} the power with natural base e and exponent h(j) evaluated on sample x_i (so the quotient is the prediction of the j-th neuron in the fully connected layer), λ the coefficient of the regularization loss, and w_k the weights in the neural network.
Preferably, in S3, the preprocessing of the original gluing image comprises ROI cropping and median filtering.
Preferably, in S4, the specific steps for judging the state of the gluing image to be detected are:
S4-1: input the gluing image to be detected into the gluing detection model; the input layer of the gluing detection model converts the image into an image information matrix and passes it to the convolution layer;
S4-2: the convolution layer convolves the image information matrix to extract the shallow features of the gluing image to be detected, the shallow features comprising the edges, corners and curves of the gluing image;
S4-3: the residual layer convolves the shallow features and finally outputs the deep features;
S4-4: the fully connected layer combines the deep features to obtain the mapping of the glue bead of the gluing image to be detected in the sample label space of the gluing detection model, analyzes the mapping, and outputs the detection score of the gluing image; if the score is greater than a preset threshold, the gluing image is judged to be glue-broken.
Preferably, the convolution layer in S4-2 performs a convolution operation on the image information matrix with stride 2 and 64 convolution filters:

$$f_{3d}(i_1,j_1)=\sum_{m=1}^{m_1}\sum_{k}\sum_{l}G_{m}(i_1+k,\;j_1+l)\,h_1(m,k,l)\qquad(2)$$

In formula (2), f_{3d} denotes the convolution result of a single convolution filter, i_1 and j_1 the pixel coordinates in the RGB image, m_1 the dimension of the image information matrix, G the image information matrix, h_1 the weight matrix, and k and l the coordinates of the weights in h_1;
the convolution results of the 64 convolution filters are combined into the first feature (shallow feature) F_{64d}: F_{64d} = {f_{3d_1}, f_{3d_2}, f_{3d_3}, …, f_{3d_64}}, where f_{3d_3} denotes the convolution result of the 3rd convolution filter.
Preferably, the residual layer outputs the deep features through the following steps:
S4-3-1: the first residual sublayer of the residual layer performs a 64-dimensional convolution operation on the shallow features, with stride 1 and 64 convolution filters, to obtain the middle-layer features;
the convolution formula for a single convolution filter is:

$$f'_{64d}(i_2,j_2)=\sum_{m=1}^{m_2}\sum_{k_2}\sum_{l_2}F_{64d,m}(i_2+k_2,\;j_2+l_2)\,h_2(m,k_2,l_2)\qquad(3)$$

In formula (3), f'_{64d} denotes the convolution result of a single convolution filter, i_2 and j_2 the coordinates of feature points in the shallow feature F_{64d}, m_2 the dimension of F_{64d}, h_2 the weight matrix, and k_2 and l_2 the coordinates of the weights in the weight matrix; combining the outputs of the 64 convolution filters gives the second feature (middle-layer feature) F'_{64d}: F'_{64d} = {f'_{64d_1}, f'_{64d_2}, f'_{64d_3}, …, f'_{64d_64}}, where f'_{64d_3} denotes the convolution result of the 3rd convolution filter;
S4-3-2: the second, third and fourth residual sublayers successively perform convolution operations on the middle-layer features, with stride 1 and the number of convolution filters reaching 512, to obtain the deep features;
the convolution formula for a single convolution filter is:

$$f'_{512d}(i_3,j_3)=\sum_{m=1}^{m_3}\sum_{k_3}\sum_{l_3}F'_{64d,m}(i_3+k_3,\;j_3+l_3)\,h_3(m,k_3,l_3)\qquad(4)$$

In formula (4), f'_{512d} denotes the convolution result of a single convolution filter, i_3 and j_3 the coordinates of feature points in the middle-layer feature matrix F'_{64d}, m_3 the dimension of F'_{64d}, h_3 the weight matrix, and k_3 and l_3 the coordinates of the weights in the weight matrix; combining the outputs of the 512 convolution filters yields the deep features: F_{512d} = {f'_{512d_1}, f'_{512d_2}, f'_{512d_3}, …, f'_{512d_512}}, where f'_{512d_512} denotes the convolution result of the 512th convolution filter.
Preferably, S4-4 specifically comprises the following steps:
S4-4-1: the deep features are processed by the first fully connected layer, which consists of 1000 neurons; the computed output y of each neuron is:

$$y=w_1F_{512d}+b_1\qquad(5)$$

In formula (5), F_{512d} denotes the deep features of the gluing image, w_1 the weight matrix of the neurons of the first fully connected layer, and b_1 the bias term of the first fully connected layer; combining the outputs y of the 1000 neurons forms a feature vector Y of length 1000: Y = {y_1, y_2, y_3, …, y_1000}, where y_3 denotes the output of the 3rd neuron;
S4-4-2: the second fully connected layer, consisting of 2 neurons, is used as a classifier; each neuron of the second fully connected layer performs a classification computation on the feature vector Y output by the first fully connected layer, with output c:

$$c=w_2Y+b_2\qquad(6)$$

In formula (6), w_2 denotes the weight matrix of the neurons of the second fully connected layer and b_2 the bias term of the second fully connected layer; the classification results of the two neurons are merged into the total classification result C:

$$C=\{c_1,c_2\}\qquad(7)$$

In formula (7), c_1 is the output of the first neuron of the second fully connected layer and c_2 the output of the second neuron;
substituting the total classification result C into the Softmax function yields a probability value P in [0,1] (taking c_1 as the output of the glue-break class neuron):

$$P=\frac{e^{c_1}}{e^{c_1}+e^{c_2}}\qquad(8)$$

If P > 0.5, the gluing image is judged to be glue-broken; if P ≤ 0.5, it is judged that no glue break occurs.
The invention also provides a deep-learning-based gluing detection system, comprising:
a model generating unit, used for constructing and training the deep residual network model;
an image acquisition unit, used for acquiring gluing image data to obtain the original gluing image;
an image preprocessing unit, used for performing ROI cropping and median filtering on the original gluing image to obtain the gluing image to be detected;
an image detection unit, used for detecting the gluing image to be detected, outputting the detection result, and feeding the result back to the server in real time.
In summary, owing to the adoption of the above technical solutions, the invention has at least the following beneficial effects compared with the prior art:
By constructing a gluing detection model and introducing the ReLU activation function and residual learning, the invention overcomes the gradient vanishing and network degradation caused by deep networks and improves detection accuracy; deep residual learning also removes noise points and environmental interference from the extracted gluing features, further improving accuracy; and a camera collects car-window gluing images in real time, the images are detected, and the results are fed back immediately, facilitating timely handling and improving the automation level and production efficiency of the gluing workshop.
Description of the drawings:
fig. 1 is a schematic diagram of a glue spreading detection method based on deep learning according to an exemplary embodiment of the present invention.
Fig. 2 is a schematic diagram of a depth residual network model according to an exemplary embodiment of the present invention.
Fig. 3 is a schematic diagram of a glue application detection system based on deep learning according to an exemplary embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to examples and embodiments. The scope of the subject matter of the invention is not limited to the following embodiments; all techniques realized based on the present invention fall within the scope of the invention.
In the description of the present invention, it should be understood that the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
In the invention, in order to detect the gluing quality in a car-window gluing workshop in real time, as shown in fig. 1, the following steps are adopted:
Step 1: define a deep residual network model composed of a five-layer structure.
In this example, as shown in fig. 2, the first layer of the defined five-layer deep residual network model is an input layer 10, the second layer a convolution layer 11, the third layer a residual layer 12, the fourth layer a fully connected layer 13, and the fifth layer a joint loss function layer 14.
In this embodiment, the first layer is the input layer 10, which converts the gluing image into 224×224 image data.
The second layer is a convolution layer of size 7×7, with 64 filters and stride 2.
The third layer is a residual layer 12 comprising 4 residual sublayers: the first residual sublayer comprises three groups of residual units, each group comprising two 3×3 convolution layers with 64 filters and stride 1; the second residual sublayer comprises four groups of residual units, each group comprising two 3×3 convolution layers with 128 filters and stride 1; the third residual sublayer comprises six groups of residual units, each group comprising two 3×3 convolution layers with 256 filters and stride 1; the fourth residual sublayer comprises three groups of residual units, each group comprising two 3×3 convolution layers with 512 filters and stride 1.
The fourth layer is a fully connected layer 13, comprising a first fully connected layer of 1000 dimensions and a second fully connected layer of 2 dimensions.
The fifth layer is a joint loss function layer 14, used during training to correct the model and prevent overfitting.
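For a concrete picture, the five-layer structure above can be sketched in PyTorch roughly as follows. The patent publishes no source code, so this is an illustrative reconstruction: the batch normalization, the max-pooling step, the placement of the downsampling strides (needed to reach the 512×7×7 deep-feature size quoted later), and all class and variable names are assumptions.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """One group of residual units: two 3x3 convolutions plus a shortcut."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Identity shortcut, or a 1x1 projection when the shape changes.
        if stride == 1 and in_ch == out_ch:
            self.shortcut = nn.Identity()
        else:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))   # H(x) = F(x) + x

def sublayer(in_ch, out_ch, n_units, stride):
    units = [ResidualUnit(in_ch, out_ch, stride)]
    units += [ResidualUnit(out_ch, out_ch) for _ in range(n_units - 1)]
    return nn.Sequential(*units)

class GlueNet(nn.Module):
    """Input -> 7x7/64/s2 conv -> residual sublayers [3,4,6,3] -> FC1000 -> FC2."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),  # second layer
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))                  # assumed: 112 -> 56
        self.residual = nn.Sequential(                             # third layer
            sublayer(64, 64, 3, stride=1),
            sublayer(64, 128, 4, stride=2),                        # assumed downsampling
            sublayer(128, 256, 6, stride=2),
            sublayer(256, 512, 3, stride=2))                       # ends at 512 x 7 x 7
        self.fc1 = nn.Linear(512 * 7 * 7, 1000)                    # first FC layer
        self.fc2 = nn.Linear(1000, 2)                              # second FC (classifier)

    def forward(self, x):                      # x: (N, 3, 224, 224)
        f = self.residual(self.stem(x))        # deep features F_512d
        y = self.fc1(torch.flatten(f, 1))      # feature vector Y, length 1000
        return self.fc2(y)                     # classification result C = {c1, c2}
```

A quick shape check, `GlueNet()(torch.zeros(1, 3, 224, 224)).shape == torch.Size([1, 2])`, confirms that the assumed downsampling reproduces the sizes quoted in the embodiment.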
In computer image processing, the gluing features extracted from a gluing image become more abstract as the network deepens; in theory, the deeper the neural network, the more levels of abstract features it can extract. In practice, however, vanishing gradients usually become an obstacle to training deep networks, preventing the gluing detection model from converging; moreover, the training loss can increase as the network deepens, degrading the network performance of the gluing detection model and reducing detection accuracy.
To address the problems brought by network deepening, the invention uses the ReLU function as the activation function of the deep residual network. The traditional activation function is the Sigmoid, which suffers from vanishing gradients; the ReLU has stable output, and a gradient always exists while training the gluing detection model, which solves the Sigmoid vanishing-gradient problem. The ReLU is also sparse: it disentangles complex relationships among features, turning complex gluing features into sparse ones and strengthening the robustness of the deep residual network model. In addition, ReLU is computationally cheaper than Sigmoid, so the gluing detection model runs faster.
In this embodiment, the deep network used for gluing detection contains many redundant network layers. For a redundant layer, the output gluing feature should equal the input gluing feature, i.e., it should realize an identity mapping. During training of the deep residual network model, however, a redundant layer learns parameters that are not an identity mapping and cannot correctly represent the gluing image features, producing the network degradation problem.
For example, let the input feature of a redundant layer be x, representing the extracted gluing feature; the desired output feature is H(x) = x, i.e., the original gluing feature, because the layer is redundant. The actual output feature is F(x), representing the gluing feature further extracted by the redundant layer. Without residual learning, since the input feature x is a variable, it is difficult to obtain F(x) = x, i.e., difficult to make the actual output match the desired output. With residual learning, the gluing feature x is added as a shortcut after the output of the redundant layer and before the ReLU activation function, so the output feature becomes H(x) = F(x) + x; the actual output then matches the desired output as soon as F(x) = 0. Since 0 is a constant, F(x) → 0 is easily reached during training, completing the identity mapping of the redundant layer: the gluing features are no longer affected by the redundant layer, the accuracy of the gluing detection model improves, and the network degradation problem is solved.
In a very deep network, because parameters are usually initialized close to 0, the gradient reaching the shallow layers easily approaches 0 during training as the network deepens; the shallow-layer parameters then cannot be updated, the gluing detection model is prevented from learning the glue distribution features in more gluing samples, and the vanishing-gradient problem arises, so the gluing detection accuracy cannot improve further.
A ResNet network solves the gradient vanishing problem well. The ResNet network contains n connected residual learning units that successively extract deeper gluing features from the input gluing features. The first residual learning unit takes the input gluing feature x_1 with weights ω_1 and outputs the gluing feature x_2, its mapping being F(x_1, ω_1); the second residual learning unit takes x_2 with weights ω_2 and outputs x_3, its mapping being F(x_2, ω_2); and so on, the n-th residual learning unit takes x_n with weights ω_n and outputs x_{n+1}, its mapping being F(x_n, ω_n).
In the first residual learning unit, the output gluing feature x_2 relates to the input gluing feature x_1 as:

$$x_2=x_1+F(x_1,\omega_1)\qquad(1)$$

In the second residual learning unit, the output gluing feature x_3 relates to x_1 as:

$$x_3=x_2+F(x_2,\omega_2)=x_1+F(x_1,\omega_1)+F(x_2,\omega_2)\qquad(2)$$

And so on; in the n-th residual learning unit, the output gluing feature x_{n+1} relates to x_1 as (i indexes any residual learning unit):

$$x_{n+1}=x_1+\sum_{i=1}^{n}F(x_i,\omega_i)\qquad(3)$$

Let C be the value of the output gluing feature x_{n+1} after the ReLU activation function, and compute the gradient of C with respect to the input gluing feature x_1:

$$\frac{\partial C}{\partial x_1}=\frac{\partial C}{\partial x_{n+1}}\left(1+\frac{\partial}{\partial x_1}\sum_{i=1}^{n}F(x_i,\omega_i)\right)\qquad(4)$$

From the gradient chain derivation in formula (4), no matter how small the derivative of the summed residual mappings becomes, the constant 1 remains, and the multiplicative chain of traditional backpropagation is turned into a sum; the gradient therefore never vanishes when updating the parameters of each node, and the shallow layers of the deep residual network model can still learn the glue distribution pattern. This solves the gradient vanishing problem for gluing image data in the defined deep residual network model and makes gluing feature extraction more accurate.
In this embodiment, as the training loss keeps shrinking, overfitting can appear: the deep residual network model achieves very high accuracy on the training set but very low detection accuracy on real gluing images whose appearance differs substantially. To solve the overfitting of the deep residual network model and improve its robustness, a joint loss function of cross-entropy loss and regularization loss is constructed:

$$\mathrm{Loss}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m}\hat{y}_i^{(j)}\log\frac{e^{h(j)(x_i)}}{\sum_{j'=1}^{m}e^{h(j')(x_i)}}+\lambda\sum_{k}w_k^{2}\qquad(5)$$

In formula (5), Loss denotes the joint loss function output, n the number of training samples, m the number of neurons in the fully connected layer, i the index of the i-th training sample and j the index of the j-th neuron, x_i the i-th training sample, ŷ_i^(j) the label of the i-th training sample with respect to the j-th class, h(j) the output value of the j-th neuron, e^{h(j)(x_i)} the power with natural base e and exponent h(j) evaluated on sample x_i (so the quotient is the prediction of the j-th neuron in the fully connected layer), λ the coefficient of the regularization loss, and w_k the weights in the neural network.
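Assuming formula (5) is standard softmax cross-entropy plus an L2 penalty, it can be rendered in PyTorch in a few lines; the λ value and the decision to penalize all parameters (rather than convolution and FC weights only) are illustrative choices, not published in the patent:

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, labels, model, lam=1e-4):
    """Formula (5): cross-entropy term plus lambda * sum of squared weights."""
    ce = F.cross_entropy(logits, labels)                  # softmax + log-loss over n samples
    l2 = sum((p ** 2).sum() for p in model.parameters())  # regularization loss
    return ce + lam * l2
```

In practice the same regularization effect is often obtained by passing weight_decay to the optimizer instead of adding the term by hand.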
Step 2: input the training samples into the defined deep residual network model for training, thereby generating the gluing detection model.
In this embodiment, after the deep residual network model is defined, it must be trained extensively to obtain a good gluing detection model. Data enhancement increases the diversity of the training samples, improves the robustness of the model, and helps avoid overfitting. In this embodiment, after the training samples are labeled, they are input into the deep residual network for training; the weight matrices of each convolution layer and of the fully connected layers are learned during training, and the shallow features, middle-layer features, deep features and feature vectors of the glue are extracted step by step, so that the finally trained gluing detection model can accurately detect whether a glue break occurs in a gluing image. A hedged sketch of what such a training loop could look like, using the GlueNet and joint_loss sketches above, follows (optimizer, learning rate, augmentations and epoch count are all assumptions):
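```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

augment = transforms.Compose([                  # data enhancement for sample diversity
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
# Assumed layout: labeled gluing images in "glue_train/ok" and "glue_train/broken".
loader = DataLoader(datasets.ImageFolder("glue_train", transform=augment),
                    batch_size=32, shuffle=True)

model = GlueNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(30):                         # epoch count is an assumption
    for images, labels in loader:
        opt.zero_grad()
        loss = joint_loss(model(images), labels, model)
        loss.backward()
        opt.step()
torch.save(model.state_dict(), "glue_detector.pt")
```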
In this embodiment, the trained gluing detection model judges a gluing image as glue-broken by the following criteria: 1. the glue bead is completely missing in some region; 2. the glue bead is insufficient in some region: the minimum height is less than 5 mm, or the minimum diameter is less than 4 mm.
Step 3: acquire an original gluing image and preprocess it to obtain the gluing image to be detected.
In this embodiment, after the window glass is glued, an industrial camera photographs it, yielding a large number of original gluing images (both complete gluing images and glue-break images); the original images then undergo image preprocessing, which comprises ROI cropping and image filtering.
The background in the original gluing image is complex, and extracting gluing information directly from the whole image is difficult. More importantly, the whole image carries much extra information, and the more information the deep residual network model must predict over, the lower its prediction accuracy compared with a network fed less information, which is unacceptable for a monitoring system with high accuracy requirements. To solve this, the original image is ROI-cropped before further processing to obtain a smaller image. The ROI (Region of Interest) is the region of the image that needs subsequent processing. The ROI cropping method in this embodiment is: center the window glass, designate the ROI with the glue bead edges on the glass as the boundary, and crop the original gluing image to a fixed resolution (e.g., 224×224).
Because the industrial field environment is complex, gluing images are often disturbed during acquisition and transmission and pick up noise, which affects image quality and the subsequent detection process. Therefore, after ROI cropping, the image is filtered to reduce the influence of noise on later processing. Since image noise on the industrial field is random and high-frequency, a low-pass filter can suppress it while enhancing the low-frequency part and smoothing the image. Among low-pass filters, median filtering not only smooths the image to a certain extent but also preserves the edge contours and details of the original image well, so its filtering effect is best.
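In OpenCV terms, the two preprocessing steps might look like the sketch below; the crop rectangle is a placeholder, since the real ROI is set from the glass geometry and glue bead position:

```python
import cv2

def preprocess(path, roi):
    """ROI cropping followed by median filtering, as described above."""
    x, y, w, h = roi                        # glue bead bounding box, set per glass model
    img = cv2.imread(path)
    img = img[y:y + h, x:x + w]             # ROI cropping: keep only the glue region
    img = cv2.medianBlur(img, 5)            # median filter: suppresses high-frequency noise
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # match the RGB order the model expects
    return cv2.resize(img, (224, 224))      # fixed resolution expected by the model

sample = preprocess("glue_raw.png", roi=(100, 80, 600, 600))  # hypothetical values
```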
Step 4: input the gluing image to be detected into the trained gluing detection model and obtain the output detection result.
In this embodiment, the detection of the gluing image specifically comprises the following steps:
S4-1: input the gluing image to be detected into the trained gluing detection model; the input layer of the gluing detection model converts the image into an image information matrix that the deep residual network model can process (for example, 3×224×224) and passes it to the convolution layer.
S4-2: the convolution layer convolves the image information matrix to extract the first features (shallow features) of the gluing image, which include the edges, corners and curves of the gluing image, i.e., the glue bead portions;
for example, the convolution layer performs an RGB image convolution on a 3×224×224 image information matrix with stride 2 and 64 convolution filters, obtaining 64×112×112 shallow features.
The RGB image convolution formula is:

$$f_{3d}(i_1,j_1)=\sum_{m=1}^{m_1}\sum_{k}\sum_{l}G_{m}(i_1+k,\;j_1+l)\,h_1(m,k,l)\qquad(6)$$

In formula (6), f_{3d} denotes the convolution result of a single convolution filter, i_1 and j_1 the pixel coordinates in the RGB image, m_1 the dimension of the RGB image, G the RGB image information matrix, h_1 the weight matrix, and k and l the coordinates of the weights in the weight matrix; combining the convolution results of the 64 convolution filters gives the first feature (shallow feature) F_{64d}: F_{64d} = {f_{3d_1}, f_{3d_2}, f_{3d_3}, …, f_{3d_64}}, where f_{3d_3} denotes the convolution result of the 3rd convolution filter.
S4-3: because the first features (shallow features) may contain noise points or interfering objects, they need further processing: the first residual sublayer of the trained deep residual network model removes the noise points and interference from the first features, giving the second features (middle-layer features); the second, third and fourth residual sublayers then convolve the second features in turn and finally output the third features (deep features);
the specific steps are as follows:
S4-3-1: the first residual sublayer performs a 64-dimensional convolution operation on the 64×112×112 shallow features, with stride 1 and 64 convolution filters, obtaining the 64×56×56 second features (middle-layer features); the 64-dimensional feature matrix convolution formula is:

$$f'_{64d}(i_2,j_2)=\sum_{m=1}^{m_2}\sum_{k_2}\sum_{l_2}F_{64d,m}(i_2+k_2,\;j_2+l_2)\,h_2(m,k_2,l_2)\qquad(7)$$

In formula (7), f'_{64d} denotes the convolution result of a single convolution filter, i_2 and j_2 the coordinates of feature points in the shallow feature F_{64d}, m_2 the dimension of F_{64d}, h_2 the weight matrix, and k_2 and l_2 the coordinates of the weights in the weight matrix. Combining the outputs of the 64 convolution filters gives the second feature (middle-layer feature) F'_{64d}: F'_{64d} = {f'_{64d_1}, f'_{64d_2}, f'_{64d_3}, …, f'_{64d_64}}, where f'_{64d_3} denotes the convolution result of the 3rd convolution filter.
The noise points in the shallow features are removed as follows: the three groups of residual units in the first residual sublayer convolve the shallow feature F_{64d} with the weight matrix h_2, in which the weights of the glue bead portions are large and those of the non-glue portions are small; the convolution result f'_{64d} of the non-glue portions is therefore approximately zero, the non-glue portions are filtered out, and the glue bead portions of the shallow feature F_{64d} are extracted from the noise and interference.
S4-3-2: the second, third and fourth residual sublayers (13 groups of residual units in total) successively convolve the 64×56×56 middle-layer features, with stride 1 and the number of convolution filters reaching 512, obtaining the 512×7×7 third features (deep features);
the convolution formula for a single convolution filter is:

$$f'_{512d}(i_3,j_3)=\sum_{m=1}^{m_3}\sum_{k_3}\sum_{l_3}F'_{64d,m}(i_3+k_3,\;j_3+l_3)\,h_3(m,k_3,l_3)\qquad(8)$$

In formula (8), f'_{512d} denotes the convolution result of a single convolution filter, i_3 and j_3 the coordinates of feature points in the middle-layer feature matrix F'_{64d}, m_3 the dimension of F'_{64d}, h_3 the weight matrix, and k_3 and l_3 the coordinates of the weights in the weight matrix. The convolution in this step sums the values of the middle-layer features according to the corresponding weights in the weight matrix, extracting the deep features from the middle-layer features. Combining the outputs of the 512 convolution filters yields the third features (deep features): F_{512d} = {f'_{512d_1}, f'_{512d_2}, f'_{512d_3}, …, f'_{512d_512}}, where f'_{512d_512} denotes the convolution result of the 512th convolution filter.
S4-4: because the weight matrix is of size 3×3, the third features (deep features) are only local deep features and cannot represent the gluing features of the whole gluing image; the invention therefore uses the first fully connected layer to combine them according to the glue distribution pattern, obtaining the mapping of the glue bead of the gluing image to be detected in the sample label space; the second fully connected layer then judges this mapping and outputs the detection score of the gluing image: if the score is greater than a preset threshold, the image is judged glue-broken; if the score is less than or equal to the threshold, it is judged not glue-broken. The preset threshold can be obtained from the extensive training of the deep residual network model in step 2; for example, the preset threshold is 0.5.
The specific calculation steps are:
S4-4-1: the 512×7×7 third features (deep features) are processed by the 1000-dimensional first fully connected layer, consisting of 1000 neurons; the computed output y of each neuron is:

$$y=w_1F_{512d}+b_1\qquad(9)$$

In formula (9), F_{512d} denotes the deep features of the gluing image, w_1 the weight matrix of the neurons of the first fully connected layer, and b_1 its bias term. Each neuron sums the deep features F_{512d} according to the weight matrix w_1 and adds the bias term b_1, giving the response of each local deep feature on that neuron; the outputs y of the 1000 neurons form a feature vector Y of length 1000: Y = {y_1, y_2, y_3, …, y_1000}, where y_3 denotes the output of the 3rd neuron. Once the feature vector Y is obtained, the combination of the local deep features is complete, i.e., the mapping of the glue bead of the gluing image to be detected in the sample label space.
S4-4-2: the 2-dimensional second fully connected layer, consisting of 2 neurons, is used as a classifier; each neuron of the second fully connected layer performs a classification computation on the feature vector Y output by the first fully connected layer, with output c:

$$c=w_2Y+b_2\qquad(10)$$

In formula (10), w_2 denotes the weight matrix of the neurons of the second fully connected layer and b_2 its bias term. Each neuron of the second fully connected layer sums the values of the feature vector Y according to the weight matrix w_2 and adds the bias term b_2, giving the output of Y on that neuron. The classification results of the two neurons are merged into the total classification result C:

$$C=\{c_1,c_2\}\qquad(11)$$

In formula (11), c_1 is the output of the first neuron of the second fully connected layer and c_2 the output of the second neuron.
Since this is a two-class problem, substituting the total classification result C into the Softmax function yields a probability value (score) P in [0,1] (taking c_1 as the output of the glue-break class neuron):

$$P=\frac{e^{c_1}}{e^{c_1}+e^{c_2}}\qquad(12)$$

If P > 0.5, the gluing image is judged glue-broken; if P ≤ 0.5, it is judged that no glue break occurs, and the detection result is returned to the server.
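Continuing the sketches above, the whole of step 4 collapses to one forward pass plus the threshold of formula (12); the glue-break class index (here 0, matching c_1) depends on how the training labels were ordered and is an assumption:

```python
import torch

model = GlueNet()
model.load_state_dict(torch.load("glue_detector.pt"))
model.eval()

def detect(image_tensor):
    """image_tensor: (3, 224, 224). Returns (score P, verdict)."""
    with torch.no_grad():
        c = model(image_tensor.unsqueeze(0))       # total classification result C
        p = torch.softmax(c, dim=1)[0, 0].item()   # formula (12): P = e^c1 / (e^c1 + e^c2)
    return p, ("glue break" if p > 0.5 else "no glue break")
```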
Based on the above technical solution, referring to fig. 3, the present invention further provides a glue spreading detection system based on deep learning, which includes a model generating unit 20, an image collecting unit 21, an image preprocessing unit 22 and an image detecting unit 23.
The model generating unit 20 is configured to construct the deep residual network model and train it into the gluing detection model;
the image acquisition unit 21 is used for acquiring images of the glued glass on the gluing assembly line to obtain an original gluing image;
the image preprocessing unit 22 is configured to perform ROI clipping and median filtering on the original glue coated image acquired by the image acquisition unit 21, so as to obtain a glue coated image to be detected; the raw glue image may be median filtered, for example, using a low pass filter.
The image detection unit 23 is loaded with a glue coating detection model trained by the model generation unit 20, and is used for detecting the glue coating image processed by the image preprocessing unit 22, outputting a detection result, and feeding back the detection result (number, time, qualification judgment and the like of the glue coating image) to the server in real time, so that subsequent inquiry and management are facilitated.
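Wiring the four units together is then a short loop; the server endpoint and camera interface below are placeholders for whatever the production line provides, not details from the patent:

```python
import datetime, json, urllib.request
from torchvision import transforms

to_tensor = transforms.ToTensor()

def report(result, server="http://example-server/api/glue"):   # placeholder endpoint
    payload = json.dumps({"image_id": result["id"],
                          "time": datetime.datetime.now().isoformat(),
                          "qualified": result["qualified"]}).encode()
    req = urllib.request.Request(server, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)                                 # real-time feedback

def inspect(image_path, image_id):
    """Preprocessing unit -> detection unit -> feedback, per the system above."""
    img = preprocess(image_path, roi=(100, 80, 600, 600))       # image preprocessing unit
    p, verdict = detect(to_tensor(img))                         # image detection unit
    report({"id": image_id, "qualified": verdict == "no glue break"})
    return p, verdict
```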
The invention automatically detects the car-window gluing quality in real time by means of video monitoring and feeds back the detection results, facilitating timely handling and improving the automation level and production efficiency of the gluing workshop.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention and that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (9)

1. A deep-learning-based glue spreading detection method, characterized by comprising the following steps:
S1: construct an initial deep residual network model:
the deep residual network model has a five-layer structure comprising an input layer, a convolution layer, a residual layer, a fully connected layer and a joint loss function layer;
S2: input training samples into the deep residual network model for training, thereby obtaining a gluing detection model;
S3: acquire an original gluing image and preprocess it to obtain the gluing image to be detected;
S4: input the gluing image to be detected into the gluing detection model, obtain the detection score of the gluing image, and judge the state of the gluing image to be detected according to the score;
the specific steps for judging the state of the gluing image to be detected are:
S4-1: input the gluing image to be detected into the gluing detection model; the input layer of the gluing detection model converts the image into an image information matrix and passes it to the convolution layer;
S4-2: the convolution layer convolves the image information matrix to extract the shallow features of the gluing image to be detected, the shallow features comprising the edges, corners and curves of the gluing image;
S4-3: the residual layer convolves the shallow features and finally outputs the deep features;
S4-4: the fully connected layer combines the deep features to obtain the mapping of the glue bead of the gluing image to be detected in the sample label space of the gluing detection model, analyzes the mapping, and outputs the detection score of the gluing image; if the score is greater than a preset threshold, the gluing image is judged to be glue-broken.
2. The deep-learning-based glue spreading detection method according to claim 1, wherein in S1 the residual layer comprises 4 residual sublayers, wherein:
the first residual sublayer comprises three groups of residual units, each group comprising two 3×3 convolution layers with 64 filters and stride 1; the second residual sublayer comprises four groups of residual units, each group comprising two 3×3 convolution layers with 128 filters and stride 1; the third residual sublayer comprises six groups of residual units, each group comprising two 3×3 convolution layers with 256 filters and stride 1; the fourth residual sublayer comprises three groups of residual units, each group comprising two 3×3 convolution layers with 512 filters and stride 1.
3. The deep-learning-based glue spreading detection method according to claim 1, wherein the fully connected layer comprises a first fully connected layer of 1000 dimensions and a second fully connected layer of 2 dimensions.
4. The deep-learning-based glue spreading detection method according to claim 1, wherein the joint loss function layer is configured to prevent overfitting of the deep residual network model, its expression being:

$$\mathrm{Loss}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m}\hat{y}_i^{(j)}\log\frac{e^{h(j)(x_i)}}{\sum_{j'=1}^{m}e^{h(j')(x_i)}}+\lambda\sum_{k}w_k^{2}\qquad(1)$$

In formula (1), Loss denotes the joint loss function output, n the number of training samples, m the number of neurons in the fully connected layer, i the index of the i-th training sample and j the index of the j-th neuron, x_i the i-th training sample, ŷ_i^(j) the label of the i-th training sample with respect to the j-th class, h(j) the output value of the j-th neuron, e^{h(j)(x_i)} the power with natural base e and exponent h(j) evaluated on sample x_i (so the quotient is the prediction of the j-th neuron in the fully connected layer), λ the coefficient of the regularization loss, and w_k the weights in the neural network.
5. The deep-learning-based glue spreading detection method according to claim 1, wherein in S3 the preprocessing of the original gluing image comprises ROI cropping and median filtering.
6. The deep-learning-based glue spreading detection method according to claim 1, wherein the convolution layer in S4-2 performs a convolution operation on the image information matrix with stride 2 and 64 convolution filters:

$$f_{3d}(i_1,j_1)=\sum_{m=1}^{m_1}\sum_{k}\sum_{l}G_{m}(i_1+k,\;j_1+l)\,h_1(m,k,l)\qquad(2)$$

In formula (2), f_{3d} denotes the convolution result of a single convolution filter, i_1 and j_1 the pixel coordinates in the RGB image, m_1 the dimension of the image information matrix, G the image information matrix, h_1 the weight matrix, and k and l the coordinates of the weights in h_1;
the convolution results of the 64 convolution filters are then combined into the shallow feature F_{64d}: F_{64d} = {f_{3d_1}, f_{3d_2}, f_{3d_3}, …, f_{3d_64}}, where f_{3d_3} denotes the convolution result of the 3rd convolution filter.
7. The deep-learning-based glue spreading detection method according to claim 1, wherein the residual layer outputs the deep features through the following steps:
S4-3-1: the first residual sublayer of the residual layer performs a 64-dimensional convolution operation on the shallow features, with stride 1 and 64 convolution filters, to obtain the middle-layer features;
the convolution formula for a single convolution filter is:

$$f'_{64d}(i_2,j_2)=\sum_{m=1}^{m_2}\sum_{k_2}\sum_{l_2}F_{64d,m}(i_2+k_2,\;j_2+l_2)\,h_2(m,k_2,l_2)\qquad(3)$$

In formula (3), f'_{64d} denotes the convolution result of a single convolution filter, i_2 and j_2 the coordinates of feature points in the shallow feature F_{64d}, m_2 the dimension of F_{64d}, h_2 the weight matrix, and k_2 and l_2 the coordinates of the weights in the weight matrix; combining the outputs of the 64 convolution filters gives the middle-layer feature F'_{64d}: F'_{64d} = {f'_{64d_1}, f'_{64d_2}, f'_{64d_3}, …, f'_{64d_64}}, where f'_{64d_3} denotes the convolution result of the 3rd convolution filter;
S4-3-2: the second, third and fourth residual sublayers successively perform convolution operations on the middle-layer features, with stride 1 and the number of convolution filters reaching 512, to obtain the deep features;
the convolution formula for a single convolution filter is:

$$f'_{512d}(i_3,j_3)=\sum_{m=1}^{m_3}\sum_{k_3}\sum_{l_3}F'_{64d,m}(i_3+k_3,\;j_3+l_3)\,h_3(m,k_3,l_3)\qquad(4)$$

In formula (4), f'_{512d} denotes the convolution result of a single convolution filter, i_3 and j_3 the coordinates of feature points in the middle-layer feature matrix F'_{64d}, m_3 the dimension of F'_{64d}, h_3 the weight matrix, and k_3 and l_3 the coordinates of the weights in the weight matrix; combining the outputs of the 512 convolution filters yields the deep features: F_{512d} = {f'_{512d_1}, f'_{512d_2}, f'_{512d_3}, …, f'_{512d_512}}, where f'_{512d_512} denotes the convolution result of the 512th convolution filter.
8. The deep-learning-based glue spreading detection method according to claim 1, wherein S4-4 specifically comprises the following steps:
S4-4-1: the deep features are processed by the first fully connected layer, which consists of 1000 neurons; the computed output y of each neuron is:

$$y=w_1F_{512d}+b_1\qquad(5)$$

In formula (5), F_{512d} denotes the deep features of the gluing image, w_1 the weight matrix of the neurons of the first fully connected layer, and b_1 the bias term of the first fully connected layer; combining the outputs y of the 1000 neurons forms a feature vector Y of length 1000: Y = {y_1, y_2, y_3, …, y_1000}, where y_3 denotes the output of the 3rd neuron;
S4-4-2: the second fully connected layer, consisting of 2 neurons, is used as a classifier; each neuron of the second fully connected layer performs a classification computation on the feature vector Y output by the first fully connected layer, with output c:

$$c=w_2Y+b_2\qquad(6)$$

In formula (6), w_2 denotes the weight matrix of the neurons of the second fully connected layer and b_2 the bias term of the second fully connected layer; the classification results of the two neurons are merged into the total classification result C:

$$C=\{c_1,c_2\}\qquad(7)$$

In formula (7), c_1 is the output of the first neuron of the second fully connected layer and c_2 the output of the second neuron;
substituting the total classification result C into the Softmax function yields a probability value P in [0,1] (taking c_1 as the output of the glue-break class neuron):

$$P=\frac{e^{c_1}}{e^{c_1}+e^{c_2}}\qquad(8)$$

If P > 0.5, the gluing image is judged to be glue-broken; if P ≤ 0.5, it is judged that no glue break occurs.
9. A deep-learning-based glue spreading detection system based on the method of any one of claims 1-8, comprising:
a model generating unit, used for constructing and training the deep residual network model;
an image acquisition unit, used for acquiring gluing image data to obtain the original gluing image;
an image preprocessing unit, used for performing ROI cropping and median filtering on the original gluing image to obtain the gluing image to be detected;
an image detection unit, used for detecting the gluing image to be detected, outputting the detection result, and feeding the result back to the server in real time.
CN201911292122.1A 2019-12-16 2019-12-16 Deep learning-based glue spreading detection system and method Active CN111192237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911292122.1A CN111192237B (en) 2019-12-16 2019-12-16 Deep learning-based glue spreading detection system and method


Publications (2)

Publication Number Publication Date
CN111192237A CN111192237A (en) 2020-05-22
CN111192237B (en) 2023-05-02

Family

ID=70709219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911292122.1A Active CN111192237B (en) 2019-12-16 2019-12-16 Deep learning-based glue spreading detection system and method

Country Status (1)

Country Link
CN (1) CN111192237B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112025677B (en) * 2020-07-28 2023-09-26 武汉象点科技有限公司 Automatic guiding glue supplementing system and method based on visual detection
CN112150436A (en) * 2020-09-23 2020-12-29 创新奇智(合肥)科技有限公司 Lipstick inner wall gluing detection method and device, electronic equipment and storage medium
CN112381755A (en) * 2020-09-28 2021-02-19 台州学院 Infusion apparatus catheter gluing defect detection method based on deep learning
CN112365446B (en) * 2020-10-19 2022-08-23 杭州亿奥光电有限公司 Paper bag bonding quality detection method
CN112487707B (en) * 2020-11-13 2023-10-17 北京遥测技术研究所 LSTM-based intelligent dispensing pattern generation method
CN112579884B (en) * 2020-11-27 2022-11-04 腾讯科技(深圳)有限公司 User preference estimation method and device
CN112634203B (en) * 2020-12-02 2024-05-31 富联精密电子(郑州)有限公司 Image detection method, electronic device, and computer-readable storage medium
CN112716468A (en) * 2020-12-14 2021-04-30 首都医科大学 Non-contact heart rate measuring method and device based on three-dimensional convolution network
CN112765888B (en) * 2021-01-22 2022-01-28 深圳市鑫路远电子设备有限公司 Vacuum glue supply information processing method and system for accurately metering glue amount
CN112862096A (en) * 2021-02-04 2021-05-28 百果园技术(新加坡)有限公司 Model training and data processing method, device, equipment and medium
CN114120357B (en) * 2021-10-22 2023-04-07 中山大学中山眼科中心 Neural network-based myopia prevention method and device
CN114187270A (en) * 2021-12-13 2022-03-15 苏州清翼光电科技有限公司 Gluing quality detection method and system for mining intrinsic safety type controller based on CCD
CN114328048A (en) * 2021-12-22 2022-04-12 郑州云海信息技术有限公司 Disk fault prediction method and device
CN114494241B (en) * 2022-02-18 2023-05-26 工游记工业科技(深圳)有限公司 Method, device and equipment for detecting rubber path defects
CN114549454A (en) * 2022-02-18 2022-05-27 岳阳珞佳智能科技有限公司 Online monitoring method and system for chip glue-climbing height of production line
CN114494257B (en) * 2022-04-15 2022-09-30 深圳市元硕自动化科技有限公司 Gluing detection method, device, equipment and storage medium
CN117470142B (en) * 2023-12-26 2024-03-15 中国林业科学研究院木材工业研究所 Method for detecting glue applying uniformity of artificial board, control method and device


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240066A (en) * 2017-04-28 2017-10-10 天津大学 Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
CN108629267A (en) * 2018-03-01 2018-10-09 南京航空航天大学 A kind of model recognizing method based on depth residual error network
CN108492282A (en) * 2018-03-09 2018-09-04 天津工业大学 Three-dimensional glue spreading based on line-structured light and multitask concatenated convolutional neural network detects
CN108537808A (en) * 2018-04-08 2018-09-14 易思维(天津)科技有限公司 A kind of gluing online test method based on robot teaching point information
CN108710826A (en) * 2018-04-13 2018-10-26 燕山大学 A kind of traffic sign deep learning mode identification method
CN108830850A (en) * 2018-06-28 2018-11-16 信利(惠州)智能显示有限公司 Automatic optics inspection picture analyzing method and apparatus
CN108982546A (en) * 2018-08-29 2018-12-11 燕山大学 A kind of intelligent robot gluing quality detecting system and method
CN109635842A (en) * 2018-11-14 2019-04-16 平安科技(深圳)有限公司 A kind of image classification method, device and computer readable storage medium
CN109948647A (en) * 2019-01-24 2019-06-28 西安交通大学 A kind of electrocardiogram classification method and system based on depth residual error network
KR102008973B1 (en) * 2019-01-25 2019-08-08 (주)나스텍이앤씨 Apparatus and Method for Detection defect of sewer pipe based on Deep Learning
CN109948691A (en) * 2019-03-14 2019-06-28 齐鲁工业大学 Iamge description generation method and device based on depth residual error network and attention
CN110197205A (en) * 2019-05-09 2019-09-03 三峡大学 A kind of image-recognizing method of multiple features source residual error network
CN110503638A (en) * 2019-08-15 2019-11-26 上海理工大学 Spiral colloid amount online test method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Deep Feature Pyramid Reconfiguration for Object Detection; Tao Kong et al.; Proceedings of the European Conference on Computer Vision (ECCV), 2018; 2018-09-14; 169-185 *
Fusing fine-tuned deep features for skin lesion classification; Amirreza Mahbod et al.; Computerized Medical Imaging and Graphics; 2018-11-17; Vol. 71; 19-29 *
Single-image super-resolution reconstruction based on residual dense networks; Xie Xueqing; Computer Applications and Software; 2019-10-15; Vol. 36, No. 10; 222-226 *
Understanding Overfitting; SIGAI; https://zhuanlan.zhihu.com/p/38224147; 2019-10-21; 1-5 *

Also Published As

Publication number Publication date
CN111192237A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111192237B (en) Deep learning-based glue spreading detection system and method
CN108491880B (en) Object classification and pose estimation method based on neural network
CN108154118B (en) A kind of target detection system and method based on adaptive combined filter and multistage detection
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN111340754A (en) Method for detecting and classifying surface defects based on aircraft skin
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN110032925B (en) Gesture image segmentation and recognition method based on improved capsule network and algorithm
CN111368759B (en) Monocular vision-based mobile robot semantic map construction system
CN113326735B (en) YOLOv 5-based multi-mode small target detection method
CN114937083B (en) Laser SLAM system and method applied to dynamic environment
CN112308921B (en) Combined optimization dynamic SLAM method based on semantics and geometry
CN112489089B (en) Airborne ground moving target identification and tracking method for micro fixed wing unmanned aerial vehicle
CN111028238A (en) Robot vision-based three-dimensional segmentation method and system for complex special-shaped curved surface
CN111199255A (en) Small target detection network model and detection method based on dark net53 network
CN111414875A (en) Three-dimensional point cloud head attitude estimation system based on depth regression forest
CN112396655A (en) Point cloud data-based ship target 6D pose estimation method
Chen et al. Research on fast recognition method of complex sorting images based on deep learning
CN111898566A (en) Attitude estimation method, attitude estimation device, electronic equipment and storage medium
CN113420776B (en) Multi-side joint detection article classification method based on model fusion
CN114820733A (en) Interpretable thermal infrared visible light image registration method and system
CN117788472A (en) Method for judging corrosion degree of rivet on surface of aircraft skin based on DBSCAN algorithm
CN113538342A (en) Convolutional neural network-based quality detection method for coating of aluminum aerosol can
CN110570469B (en) Intelligent identification method for angle position of automobile picture
CN111428555B (en) Joint-divided hand posture estimation method
CN112258402A (en) Dense residual generation countermeasure network capable of rapidly removing rain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Tang Chaowei
Inventor after: Wen Haotian
Inventor after: Ruan Shuai
Inventor after: Huang Baojin
Inventor after: Feng Xinxin
Inventor after: Liu Hongbin
Inventor after: Tang Dong
Inventor before: Tang Chaowei
Inventor before: Wen Haotian
Inventor before: Ruan Shuai
Inventor before: Huang Baojin
Inventor before: Feng Xinxin
Inventor before: Liu Hongbin
Inventor before: Tang Dong
GR01 Patent grant