CN111192237A - Glue coating detection system and method based on deep learning - Google Patents

Glue coating detection system and method based on deep learning

Info

Publication number
CN111192237A
CN111192237A
Authority
CN
China
Prior art keywords
layer
gluing
image
convolution
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911292122.1A
Other languages
Chinese (zh)
Other versions
CN111192237B (en)
Inventor
唐朝伟
温浩田
阮帅
黄宝进
冯鑫鑫
刘洪滨
汤东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN201911292122.1A
Publication of CN111192237A
Application granted
Publication of CN111192237B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20088 Trinocular vision calculations; trifocal tensor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30156 Vehicle coating
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a glue coating detection system and method based on deep learning. The method comprises: constructing an initial deep residual network model, the model having a five-layer structure comprising an input layer, a convolution layer, a residual layer, a fully connected layer, and a joint loss function layer; inputting training samples into the deep residual network model for training to obtain a glue coating detection model; acquiring an original glue-coated image and preprocessing it to obtain the glue-coated image to be detected; and inputting the glue-coated image to be detected into the detection model, obtaining the detection score, and judging from the score whether the glue coating is qualified. By constructing a deep residual network model, the invention automatically detects window gluing in the gluing workshop, judges gluing quality quickly and accurately, and uploads the detection results in real time for timely handling, improving the automation level and production efficiency of the gluing workshop.

Description

Glue coating detection system and method based on deep learning
Technical Field
The invention relates to the technical field of image processing, and in particular to a glue coating detection system and method based on deep learning.
Background
The window glass is an important component of the vehicle body. The window structure is usually a curved, closed shape, and a sealant connects the vehicle body's window frame to the window glass. The sealant provides sealing and cushioning, so that the window glass is not damaged when the window frame deforms under stress on the vehicle body.
In modern industry, the gluing step, an important part of automobile production, is gradually shifting from manual gluing to robot gluing because of its harsh working environment, high labor intensity, and strict requirements on motion accuracy and stability; gluing automation has become a trend.
In automobile window gluing, a robot carries a glue gun and applies a complete ring of sealant along the edge of the glass. During robot gluing, glue breaks may occur due to factors such as robot trajectory errors, glass positioning errors, glass dimensional deformation, and clamping errors of the fixture; these affect the subsequent assembly of the window glass and require glue replenishing.
Traditional methods for detecting window gluing quality include manual inspection and image processing detection algorithms. In manual inspection, quality inspectors judge by eye whether a glue break has occurred during gluing; this is uncertain, cannot meet modern production quality requirements, is inefficient, and restricts the automation level of the production line. Traditional image processing algorithms mainly build a template from the shape and color features of the glue bead and match the glue coating under test against it; they demand strict illumination, are strongly affected by external factors, cannot guarantee detection efficiency, and perform poorly on different glass shapes.
With the continuing spread of industrial automation and the intensifying competition in the automobile industry, traditional gluing quality detection technology cannot meet the actual production needs of enterprises, so a high-accuracy algorithm is needed to monitor the gluing results and improve the production efficiency of automobile enterprises.
Disclosure of Invention
Aiming at the low accuracy of glue-break detection for window gluing in the prior art, the invention provides a glue-break detection system and method based on deep learning.
To achieve this purpose, the invention provides the following technical solution:
a glue coating detection method based on deep learning specifically comprises the following steps:
s1, constructing an initial depth residual error network model:
the depth residual error network model is of a five-layer structure and comprises an input layer, a convolution layer, a residual error layer, a full connection layer and a joint loss function layer;
s2: inputting the training sample into a deep residual error network model for training, thereby obtaining a gluing detection model;
s3: acquiring an original gluing image, and preprocessing the original gluing image to obtain a gluing image to be detected;
s4: and inputting the gluing image to be detected into the gluing detection model, acquiring the score of gluing image detection, and judging the state of the gluing image to be detected according to the score.
Preferably, in S1, the residual layer includes four residual sub-layers, wherein the first residual sub-layer comprises three groups of residual units, each group comprising two 3 × 3 convolution layers with 64 filters and a stride of 1; the second residual sub-layer comprises four groups of residual units, each group comprising two 3 × 3 convolution layers with 128 filters and a stride of 1; the third residual sub-layer comprises six groups of residual units, each group comprising two 3 × 3 convolution layers with 256 filters and a stride of 1; and the fourth residual sub-layer comprises three groups of residual units, each group comprising two 3 × 3 convolution layers with 512 filters and a stride of 1.
Preferably, the fully connected layer comprises a first fully connected layer and a second fully connected layer, the first fully connected layer being 1000-dimensional and the second fully connected layer being 2-dimensional.
Preferably, the joint loss function layer is used to prevent the deep residual network model from overfitting, and its functional expression is:

Loss = -\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} y_i \log \frac{e^{h(j)(x_i)}}{\sum_{j'=1}^{m} e^{h(j')(x_i)}} + \lambda \sum_{\omega} \omega^{2}   (1)

In formula (1), Loss denotes the joint loss function output; n denotes the number of training samples; m denotes the number of neurons in the fully connected layer; i indexes the ith training sample and j the jth neuron of the fully connected layer; x_i denotes the ith training sample and y_i the label of the ith training sample; h(j) denotes the output value of the jth neuron, so that e^{h(j)(x_i)} is the natural exponential of h(j) for sample x_i and represents the prediction result of the jth neuron; j' indexes the neurons of the fully connected layer in the softmax denominator; λ denotes the coefficient of the regularization loss; and ω denotes the weights in the neural network.
Preferably, in S3, the preprocessing of the original glue-coated image includes ROI cropping and median filtering.
Preferably, in S4, the specific steps of judging the state of the glue-coated image to be detected are:
S4-1: inputting the glue-coated image to be detected into the glue coating detection model, the input layer of which converts it into an image information matrix and passes the matrix to the convolution layer;
S4-2: the convolution layer performs a convolution operation on the image information matrix to extract the shallow features of the glue-coated image to be detected, the shallow features including the edges, corners, and curves of the glue-coated image;
S4-3: the residual layer performs convolution operations on the shallow features and finally outputs the deep features;
S4-4: the fully connected layer combines the deep features to obtain the mapping of the glue bead loop of the glue-coated image to be detected in the sample label space of the glue coating detection model, analyzes the mapping, and outputs the detection score; if the score is greater than a preset threshold, the glue coating in the image is judged to be broken.
Preferably, in S4-2, the convolution layer performs a convolution operation on the image information matrix, with the convolution stride set to 2 and the number of convolution filters set to 64:

f_{3d}(i_1, j_1) = \sum_{m_1} \sum_{k} \sum_{l} G(m_1,\, i_1 + k,\, j_1 + l)\, h_1(m_1, k, l)   (2)

In formula (2), f_{3d} denotes the convolution result of a single convolution filter; i_1 and j_1 denote the coordinates of pixels in the RGB image; m_1 indexes the dimension of the image information matrix; G denotes the image information matrix; h_1 denotes a weight matrix; and k and l denote the coordinates of the weights in the weight matrix h_1.
The convolution results of the 64 convolution filters are then combined into the first (shallow) feature F_{64d}: F_{64d} = {f_{3d_1}, f_{3d_2}, f_{3d_3}, …, f_{3d_64}}, where f_{3d_3} denotes the convolution result of the 3rd convolution filter.
Preferably, the residual layer outputs the deep features through the following steps:
S4-3-1: the first residual sub-layer of the residual layer performs a 64-dimensional convolution operation on the shallow features, with the convolution stride set to 1 and the number of convolution filters set to 64, to obtain the middle-layer features;
the convolution formula for a single convolution filter is:

f'_{64d}(i_2, j_2) = \sum_{m_2} \sum_{k_2} \sum_{l_2} F_{64d}(m_2,\, i_2 + k_2,\, j_2 + l_2)\, h_2(m_2, k_2, l_2)   (3)

In formula (3), f'_{64d} denotes the convolution result of a single convolution filter; i_2 and j_2 denote the coordinates of feature points in the shallow feature F_{64d}; m_2 indexes the dimension of F_{64d}; h_2 denotes a weight matrix; and k_2 and l_2 denote the coordinates of the weights in the weight matrix. The outputs of the 64 convolution filters are combined into the second (middle-layer) feature F'_{64d}: F'_{64d} = {f'_{64d_1}, f'_{64d_2}, f'_{64d_3}, …, f'_{64d_64}}, where f'_{64d_3} denotes the convolution result of the 3rd convolution filter;
S4-3-2: the second, third, and fourth residual sub-layers progressively perform convolution operations on the middle-layer features, with the convolution stride set to 1 and the number of convolution filters set to 512, to obtain the deep features;
the convolution formula for a single convolution filter is:

f'_{512d}(i_3, j_3) = \sum_{m_3} \sum_{k_3} \sum_{l_3} F'_{64d}(m_3,\, i_3 + k_3,\, j_3 + l_3)\, h_3(m_3, k_3, l_3)   (4)

In formula (4), f'_{512d} denotes the convolution result of a single convolution filter; i_3 and j_3 denote the coordinates of feature points in the middle-layer feature matrix F'_{64d}; m_3 indexes the dimension of F'_{64d}; h_3 denotes a weight matrix; and k_3 and l_3 denote the coordinates of the weights in the weight matrix. The outputs of the 512 convolution filters are combined into the deep features: F_{512d} = {f'_{512d_1}, f'_{512d_2}, f'_{512d_3}, …, f'_{512d_512}}, where f'_{512d_512} denotes the convolution result of the 512th convolution filter.
Preferably, S4-4 specifically comprises the following steps:
S4-4-1: the deep features are computed by the first fully connected layer, which consists of 1000 neurons; the computed output y of each neuron is:

y = w_1 F_{512d} + b_1   (5)

In formula (5), F_{512d} denotes the deep features of the glue-coated image, w_1 denotes the weight matrix of the first fully connected layer's neurons, and b_1 denotes the bias term of the first fully connected layer; the outputs y of the 1000 neurons are combined into a feature vector Y of length 1000: Y = {y_1, y_2, y_3, …, y_1000}, where y_3 denotes the output of the 3rd neuron;
S4-4-2: the second fully connected layer, consisting of 2 neurons, is used as a classifier; each neuron of the second fully connected layer performs a classification calculation on the feature vector Y output by the first fully connected layer, with the output c of the classification calculation being:

c = w_2 Y + b_2   (6)

In formula (6), w_2 denotes the weight matrix of the second fully connected layer's neurons and b_2 denotes the bias term of the second fully connected layer; the classification results of the two neurons are merged into the total classification result C:

C = {c_1, c_2}   (7)

In formula (7), c_1 is the output of the first neuron of the second fully connected layer and c_2 is the output of the second neuron of the second fully connected layer;
the total classification result C is substituted into the Softmax function to obtain a probability value P in [0, 1] (taking c_1 as the output for the glue-break class):

P = e^{c_1} / (e^{c_1} + e^{c_2})   (8)

If P is greater than 0.5, the glue coating is judged to be broken; if P is less than or equal to 0.5, the glue-coated image is judged to have no glue break.
The invention also provides a glue coating detection system based on deep learning, which comprises:
a model generation unit for constructing and training the deep residual network model;
an image acquisition unit for acquiring glue-coated image data to obtain an original glue-coated image;
an image preprocessing unit for performing ROI (region of interest) cropping and median filtering on the original glue-coated image to obtain the glue-coated image to be detected; and
an image detection unit for detecting the glue-coated image to be detected, outputting the detection result, and feeding the detection result back to the server in real time.
In summary, compared with the prior art, the technical solution above provides at least the following benefits:
by constructing a glue coating detection model and introducing the ReLU activation function and residual learning, the invention overcomes the vanishing gradients and network degradation caused by deep networks and improves the model's detection accuracy; deep residual learning removes noise and environmental interference from the extracted gluing features, further improving detection accuracy; and a camera collects window gluing images in real time for detection, with results fed back immediately, facilitating timely handling and improving the automation level and production efficiency of the gluing workshop.
Description of the drawings:
Fig. 1 is a schematic diagram of the glue coating detection method based on deep learning according to an exemplary embodiment of the present invention.
Fig. 2 is a schematic diagram of the deep residual network model according to an exemplary embodiment of the present invention.
Fig. 3 is a schematic diagram of the glue coating detection system based on deep learning according to an exemplary embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to examples and embodiments. It should be understood that the scope of the subject matter described above is not limited to the following examples; any technique implemented based on the disclosure of the present invention falls within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In order to detect the gluing quality in the window gluing workshop in real time, the invention adopts the following steps, as shown in Fig. 1:
Step 1: define a deep residual network model composed of a five-layer structure.
In this embodiment, as shown in Fig. 2, the first layer of the five-layer deep residual network model is defined as the input layer 10, the second layer is the convolution layer 11, the third layer is the residual layer 12, the fourth layer is the fully connected layer 13, and the fifth layer is the joint loss function layer 14.
In this embodiment, the first layer, the input layer 10, converts the glue-coated image into 224 × 224 image data.
The second layer consists of a convolution layer of size 7 × 7, with 64 filters and a stride of 2.
The third layer, the residual layer 12, comprises four residual sub-layers: the first residual sub-layer comprises three groups of residual units, each group comprising two 3 × 3 convolution layers with 64 filters and a stride of 1; the second residual sub-layer comprises four groups of residual units, each group comprising two 3 × 3 convolution layers with 128 filters and a stride of 1; the third residual sub-layer comprises six groups of residual units, each group comprising two 3 × 3 convolution layers with 256 filters and a stride of 1; and the fourth residual sub-layer comprises three groups of residual units, each group comprising two 3 × 3 convolution layers with 512 filters and a stride of 1.
The fourth layer is the fully connected layer 13, comprising a first fully connected layer and a second fully connected layer; the first fully connected layer is 1000-dimensional and the second is 2-dimensional.
The fifth layer is the joint loss function layer 14, which corrects the model and prevents overfitting.
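For concreteness, the five-layer structure above (a 7 × 7, stride-2, 64-filter convolution layer; four residual sub-layers of 3, 4, 6, and 3 residual units with 64, 128, 256, and 512 filters; and 1000-dimensional and 2-dimensional fully connected layers) matches a ResNet-34-style backbone. The following is a minimal PyTorch sketch under that reading, not the patent's own code: the class names are illustrative, and the stem max-pooling, the stride-2 first units of sub-layers 2 to 4, and the ReLU between the two fully connected layers are assumptions added so the feature maps reproduce the 112 → 56 → 7 sizes reported later in this embodiment.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """One residual unit: two 3x3 convolutions plus a skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Project the shortcut when the feature shape changes between sub-layers.
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Residual learning: add the input x before the final ReLU.
        return self.relu(out + self.shortcut(x))

class GlueNet(nn.Module):
    """Input -> convolution layer -> residual layer -> two fully connected layers."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(              # second layer: 7x7, 64 filters, stride 2
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))
        units, in_ch = [], 64
        for out_ch, n in [(64, 3), (128, 4), (256, 6), (512, 3)]:  # four residual sub-layers
            for i in range(n):
                stride = 2 if (i == 0 and in_ch != out_ch) else 1
                units.append(ResidualUnit(in_ch, out_ch, stride))
                in_ch = out_ch
        self.residual = nn.Sequential(*units)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Linear(512, 1000)         # first fully connected layer
        self.fc2 = nn.Linear(1000, 2)           # second fully connected layer (classifier)

    def forward(self, x):                       # x: (N, 3, 224, 224)
        x = self.pool(self.residual(self.stem(x))).flatten(1)
        return self.fc2(torch.relu(self.fc1(x)))  # logits C = {c1, c2}
```

Under these assumptions a 224 × 224 input yields 64 × 56 × 56 features after the first residual sub-layer and 512 × 7 × 7 features after the fourth, matching the sizes used in step 4 below.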
In computer image processing, the level of abstraction of the gluing features extracted from a glue-coated image rises as the network depth grows; in theory, the deeper the neural network, the more levels of abstract features it can extract. In practical applications, however, vanishing gradients often obstruct the training of a deep network, preventing the glue coating detection model from converging; moreover, the training loss can grow as the network deepens, degrading the network performance of the detection model and reducing detection accuracy.
To solve the problems caused by network deepening, the invention uses the existing ReLU function as the activation function of the deep residual network. The traditional activation function, the Sigmoid, suffers from vanishing gradients; the ReLU's output is stable and its gradient always exists during training of the detection model, which resolves the Sigmoid vanishing-gradient problem. The ReLU activation is also sparse: it disentangles complex relationships among features, converting complex gluing features into sparse ones and strengthening the robustness of the deep residual network model. The ReLU is also cheaper to compute than the Sigmoid, making the detection model faster.
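The following standard formulas, added here for reference, make the contrast concrete: the Sigmoid derivative is bounded by 1/4, so stacked layers shrink gradients geometrically, while the ReLU passes the gradient through unchanged wherever its input is positive:

\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr) \le \frac{1}{4}

\mathrm{ReLU}(x) = \max(0, x), \qquad \mathrm{ReLU}'(x) = 1 \text{ for } x > 0, \; 0 \text{ for } x < 0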
In this embodiment, the deep network used for glue coating detection contains redundant network layers. For a redundant layer, the output gluing features should equal the input gluing features, i.e., the layer should realize an identity mapping. During training of the deep residual network model, however, a redundant layer learns parameters that are not the identity mapping and cannot correctly represent the features of the glue-coated image, which produces the network degradation problem.
For example, let the input feature of a redundant layer be x, the extracted gluing feature, and let the desired output be H(x) = x, so that the redundant layer's expected output is the original gluing feature. The actual output is H(x) = F(x), the gluing feature further extracted by the redundant layer. Without residual learning it is difficult to obtain F(x) = x, because x is a variable; that is, it is difficult to make the actual output feature consistent with the desired output feature. With residual learning, the gluing feature x is taken as the residual shortcut and added after the redundant layer's output, before the ReLU activation function, so the output feature becomes H(x) = F(x) + x; it then suffices to make F(x) = 0 for the output feature to be consistent with the desired output feature. Since 0 is a constant, F(x) = 0 is easily learned during training. The identity mapping of the redundant layer is thus completed, the gluing features are unaffected by redundant layers, the accuracy of the glue coating detection model improves, and the network degradation problem is solved.
In very deep network layers, because parameters are generally initialized close to 0, the gradient easily approaches 0 with network depth when the shallow layers' parameters are updated during training; the shallow layers' parameters then cannot be updated, which limits the glue coating detection model's ability to learn the glue distribution features of more gluing samples. This constitutes the vanishing gradient problem and prevents further improvement of detection accuracy.
The ResNet architecture solves the vanishing gradient problem well. Let the ResNet contain n connected residual learning units, which extract successively deeper gluing features from their inputs. The first residual learning unit takes the input gluing feature x_1, with weights ω_1, outputs the gluing feature x_2, and computes the mapping F(x_1, \omega_1); the second takes x_2, with weights ω_2, outputs x_3, and computes F(x_2, \omega_2); and so on, until the nth residual learning unit takes x_n, with weights ω_n, outputs x_{n+1}, and computes F(x_n, \omega_n).
In the first residual learning unit, the output gluing feature x_2 relates to the input gluing feature x_1 by:

x_2 = x_1 + F(x_1, \omega_1)   (1)

In the second residual learning unit, the output gluing feature x_3 relates to the input gluing feature x_1 by:

x_3 = x_2 + F(x_2, \omega_2) = x_1 + F(x_1, \omega_1) + F(x_2, \omega_2)   (2)

By analogy, in the nth residual learning unit, the output gluing feature x_{n+1} relates to the input gluing feature x_1, with i ranging over the residual learning units, by:

x_{n+1} = x_1 + \sum_{i=1}^{n} F(x_i, \omega_i)   (3)

Let C be the value of the output gluing feature x_{n+1} after the ReLU activation function, and compute the gradient of C with respect to the input gluing feature x_1:

\frac{\partial C}{\partial x_1} = \frac{\partial C}{\partial x_{n+1}} \left( 1 + \frac{\partial}{\partial x_1} \sum_{i=1}^{n} F(x_i, \omega_i) \right)   (4)

In the chain-rule differentiation of formula (4), no matter how small the value of \partial \sum_{i=1}^{n} F(x_i, \omega_i) / \partial x_1 becomes, the constant 1 turns the repeated multiplication of traditional chain differentiation into a sum, guaranteeing that the gradient does not vanish in the update of any node's parameters. The shallow layers of the deep residual network model can therefore learn the glue distribution pattern, the vanishing gradient problem for glue-coated image data in the defined deep residual network model is solved, and the gluing features are extracted more accurately.
In this embodiment, as the training loss becomes smaller and smaller, an overfitting problem arises: the deep residual network model becomes highly accurate on the training set, yet its detection accuracy on real glue-coated images with large appearance differences in practical applications is very low. To solve the overfitting problem of the deep residual network model and improve its robustness, a joint loss function combining cross-entropy loss and regularization loss is constructed:

Loss = -\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} y_i \log \frac{e^{h(j)(x_i)}}{\sum_{j'=1}^{m} e^{h(j')(x_i)}} + \lambda \sum_{\omega} \omega^{2}   (5)

In formula (5), Loss denotes the joint loss function output; n denotes the number of training samples; m denotes the number of neurons in the fully connected layer; i indexes the ith training sample and j the jth neuron of the fully connected layer; x_i denotes the ith training sample and y_i the label of the ith training sample; h(j) denotes the output value of the jth neuron, so that e^{h(j)(x_i)} is the natural exponential of h(j) for sample x_i and represents the prediction result of the jth neuron; j' indexes the neurons of the fully connected layer in the softmax denominator; λ denotes the coefficient of the regularization loss; and ω denotes the weights in the neural network.
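Formula (5) is the familiar softmax cross-entropy plus an L2 penalty, so it can be sketched directly in PyTorch; the function name and the coefficient value below are illustrative, not from the patent:

```python
import torch.nn.functional as F

def joint_loss(logits, labels, model, lam=1e-4):
    """Joint loss of formula (5): cross-entropy (softmax + negative log-likelihood)
    over the fully connected outputs, plus lam * sum of squared network weights."""
    ce = F.cross_entropy(logits, labels)
    l2 = sum((p ** 2).sum() for name, p in model.named_parameters() if "weight" in name)
    return ce + lam * l2
```

In practice the same regularization term is often obtained by passing a weight_decay value to the optimizer instead.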
Step 2: input the training samples into the defined deep residual network model for training, thereby generating the glue coating detection model.
In this embodiment, after the deep residual network model is defined, extensive training is required to obtain a satisfactory glue coating detection model. Data enhancement increases the diversity of the training samples, improves the robustness of the model, and avoids overfitting. In this embodiment, the training samples are labeled and then input into the deep residual network for training; the weight matrices in each convolution layer and in the fully connected layers are learned during training, and the shallow, middle-layer, and deep features and the feature vector of the glue coating are extracted step by step, so that the finally trained detection model can accurately detect whether a glue-coated image contains a glue break.
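A training loop of the kind described here can be sketched as follows, reusing the hypothetical GlueNet and joint_loss from the earlier listings; the optimizer, learning rate, epoch count, and the convention that label 0 means glue break are all assumptions for illustration:

```python
import torch

def train(model, loader, epochs=30, lr=1e-3):
    """Train the glue coating detection model on labeled samples.
    loader yields (images, labels); label 0 = glue break, 1 = complete bead."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = joint_loss(model(x), y, model)  # cross-entropy + L2, formula (5)
            loss.backward()
            opt.step()
    return model
```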
In this embodiment, the trained glue coating detection model's criteria for a glue break in a glue-coated image are: 1. the glue bead is completely missing in some region; 2. insufficient glue exists in some region, i.e., the minimum height is less than 5 mm and the minimum diameter is less than 4 mm.
Step 3: acquire an original glue-coated image and preprocess it to obtain the glue-coated image to be detected.
In this embodiment, after the window glass has been glued, an industrial camera can capture images, obtaining a large number of original glue-coated images (including complete glue beads and broken ones); the original images are then preprocessed, the preprocessing including ROI cropping and image filtering.
Background objects in the original glue-coated image are complex, and extracting the gluing information directly from the whole image is difficult. More importantly, a whole image carries more information, and the more information the deep residual network model must predict over, the lower its prediction accuracy compared with a network fed less information, which is unacceptable for a monitoring system with high accuracy requirements. To solve this, ROI cropping is performed on the original image before further image processing to obtain a smaller new image. The ROI (region of interest) is the region of the cropped image that needs to be processed. The ROI cropping method in this embodiment is: designate the ROI cropping area with the window glass at the center and the glue bead on the glass edge as the boundary, then crop the original glue-coated image into a training image of fixed resolution (e.g., 224 × 224).
Because the industrial field environment is complex, glue-coated images are often disturbed by external interference during acquisition and transmission, producing noise that degrades image quality and the subsequent detection. Therefore, after ROI cropping, the image is filtered to reduce the influence of noise on later processing. Since the image noise generated on the industrial floor has random, high-frequency characteristics, a low-pass filter can suppress it, enhancing the low-frequency part of the image and smoothing it. As a low-pass filter, the median filter smooths the image while well preserving the edge contours and details of the original image, so its filtering effect is optimal.
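The two preprocessing operations can be sketched with OpenCV as follows; the ROI coordinates and the median kernel size are illustrative placeholders, since the patent fixes only the output resolution:

```python
import cv2

def preprocess(path, roi, size=224, ksize=5):
    """ROI-crop the raw image around the glue bead, median-filter it,
    and resize to the fixed input resolution."""
    img = cv2.imread(path)            # original glue-coated image (BGR)
    x, y, w, h = roi                  # ROI centered on the glass, bounded by the bead
    img = img[y:y + h, x:x + w]
    img = cv2.medianBlur(img, ksize)  # median filter: removes impulse noise, keeps edges
    return cv2.resize(img, (size, size))
```

For example, roi = (400, 200, 900, 900) would crop a 900 × 900 window whose top-left corner is at (400, 200).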
Step 4: input the glue-coated image to be detected into the trained glue coating detection model and obtain the output detection result.
In this embodiment, the detection of the glue-coated image specifically includes the following steps:
S4-1: the glue-coated image to be detected is input into the trained detection model; the input layer of the detection model converts it into an image information matrix that the deep residual network model can process (for example, 3 × 224 × 224) and passes the matrix to the convolution layer.
S4-2: the convolution layer convolves the image information matrix to extract the first (shallow) features of the glue-coated image, including its edges, corners, and curves, i.e., the glue bead loop portion.
For example, the convolution layer performs an RGB image convolution on the 3 × 224 × 224 image information matrix, with the convolution stride set to 2 and the number of convolution filters set to 64, obtaining shallow features of size 64 × 112 × 112.
The RGB image convolution formula is:

f_{3d}(i_1, j_1) = \sum_{m_1} \sum_{k} \sum_{l} G(m_1,\, i_1 + k,\, j_1 + l)\, h_1(m_1, k, l)   (6)

In formula (6), f_{3d} denotes the convolution result of a single convolution filter; i_1 and j_1 denote the coordinates of pixels in the RGB image; m_1 indexes the dimension of the RGB image; G denotes the RGB image information matrix; h_1 denotes a weight matrix; and k and l denote the coordinates of the weights in the weight matrix. The convolution results of the 64 convolution filters are combined into the first (shallow) feature F_{64d}: F_{64d} = {f_{3d_1}, f_{3d_2}, f_{3d_3}, …, f_{3d_64}}, where f_{3d_3} denotes the convolution result of the 3rd convolution filter.
S4-3: because the first (shallow) features may contain noise or interference, they must be processed further; the first residual sub-layer of the trained deep residual network model removes the noise or interference from the first features to obtain the second (middle-layer) features; the second, third, and fourth residual sub-layers then convolve the second features in turn, finally outputting the third (deep) features.
the method comprises the following specific steps:
s4-3-1: the first residual sublayer performs 64-dimensional convolution operation on the shallow layer features of 64 × 112 × 112, the convolution step is set to be 1, the number of convolution filters is set to be 64, and second features (middle layer features) of 64 × 56 × 56 are obtained; the 64-dimensional feature matrix convolution formula is as follows:
Figure BDA0002319464450000131
f′64drepresenting the convolution result of a single convolution filter, i2And j2Indicating shallow feature F64dCoordinates of middle feature points, m2Indicating shallow feature F64dDimension, h2Representing a weight matrix, k2And l2Coordinates representing the weights in the weight matrix. Combining the outputs of the 64 convolution filters to obtain a second feature (middle layer feature)
Figure BDA0002319464450000132
Figure BDA0002319464450000133
Figure BDA0002319464450000134
f′64d_3Representing the convolution result of the 3 rd convolution filter.
The method for removing the noise in the shallow feature comprises the following steps: three sets of residual units in the first residual sub-layer use a weight matrix h2For shallow feature F64dPerforming convolution operation on the weight matrix h2The weight of the middle gluing loop part is larger, and the weight of the non-gluing loop part is smaller, so that the non-gluing loop part is opposite to the convolution result f'64dSo as to filter out the non-gluing loop part and to make the shallow layer characteristic F close to zero64dThe part of the glue loop is extracted from noise or interference.
S4-3-2: a second residual sub-layer, a third residual sub-layer and a fourth residual sub-layer which are 13 residual unit units in total are adopted to carry out 64-dimensional convolution operation on the middle layer characteristics of 64 multiplied by 56 step by step, the convolution step length is set to be 1, the number of convolution filters is set to be 512, and a third characteristic (deep layer characteristic) of 512 multiplied by 7 is obtained;
the convolution formula for a single convolution filter is:
Figure BDA0002319464450000141
f′512drepresenting the convolution result of a single convolution filter, i3And j3Representing a middle layer feature matrix
Figure BDA0002319464450000142
Coordinates of middle feature points, m3Representing a middle layer feature matrix
Figure BDA0002319464450000143
Dimension, h3Representing a weight matrix, k3And l3Coordinates representing the weights in the weight matrix. In the convolution operation in the step, each value in the middle layer characteristic is summed according to the corresponding weight in the weight matrix, and then the deep layer characteristic is extracted from the middle layer characteristicAnd (5) characterizing. The outputs of the 512 convolution filters are combined to obtain a third feature (deep feature): f512d={f′512d_1,f′12d_2,f′512d_3,…,f′512d_512},f′512d_512Representing the convolution result of the 512 th convolution filter.
S4-4: because the size of the weight matrix is 3 multiplied by 3, the third characteristic (deep characteristic) is only a local deep characteristic and cannot represent the gluing characteristic of the overall gluing image, and therefore the first full-connection layers are combined according to the gluing distribution mode to obtain the mapping of the gluing loop of the gluing image to be detected in the sample marking space; and then the second full-connection layer judges the mapping of the gluing loop of the gluing image to be detected in the sample marking space, so as to output the score of the gluing image detection, if the score is larger than a preset threshold value, the glue breaking of the detected gluing image is judged, and if the score is smaller than or equal to the preset threshold value, the glue breaking of the detected gluing image is judged. The preset threshold may be obtained by performing a large amount of training on the depth residual error network model in step 2, for example, the preset threshold is 0.5.
The specific calculation steps are as follows:
S4-4-1: the 512 × 7 × 7 third (deep) features are computed by the 1000-dimensional first fully connected layer, which consists of 1000 neurons; the computed output y of each neuron is:

y = w_1 F_{512d} + b_1   (9)

In formula (9), F_{512d} denotes the deep features of the glue-coated image, w_1 denotes the weight matrix of the first fully connected layer's neurons, and b_1 denotes the bias term of the first fully connected layer. Each neuron sums the deep features F_{512d} according to the weight matrix w_1 and then adds the bias term b_1, producing that neuron's output over every local feature in the deep features. The outputs y of the 1000 neurons form a feature vector Y of length 1000: Y = {y_1, y_2, y_3, …, y_1000}, where y_3 denotes the output of the 3rd neuron. Obtaining Y completes the combination of the local deep features, i.e., the mapping of the glue bead loop of the image to be detected in the sample label space.
S4-4-2: the 2-dimensional second fully connected layer, consisting of 2 neurons, is used as the classifier; each of its neurons performs a classification calculation on the feature vector Y output by the first fully connected layer, with the output c of the classification calculation being:

c = w_2 Y + b_2   (10)

In formula (10), w_2 denotes the weight matrix of the second fully connected layer's neurons and b_2 denotes the bias term of the second fully connected layer. Each neuron of the second fully connected layer sums the values in the feature vector Y according to the weight matrix w_2 and then adds the bias term b_2, giving the output of Y on that neuron. The classification results of the two neurons are merged into the total classification result C:

C = {c_1, c_2}   (11)

In formula (11), c_1 is the output of the first neuron of the second fully connected layer and c_2 is the output of the second neuron of the second fully connected layer.
Since this is a two-class problem, the total classification result C is substituted into the Softmax function to obtain a probability value (score) P in [0, 1] (taking c_1 as the output for the glue-break class):

P = e^{c_1} / (e^{c_1} + e^{c_2})   (12)

If P is greater than 0.5, a glue break is judged; if P is less than or equal to 0.5, the glue-coated image is judged to have no glue break, and the detection result is returned to the server.
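End to end, the detection step is a preprocessing pass, a forward pass, and a softmax threshold; the sketch below reuses the hypothetical GlueNet and preprocess helpers from the earlier listings and keeps the assumption that class 0 corresponds to the glue-break output c_1:

```python
import torch

def detect(model, image_path, roi, threshold=0.5):
    """Return True if the glue-coated image is judged to contain a glue break."""
    img = preprocess(image_path, roi)                    # ROI crop + median filter + resize
    x = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
    model.eval()
    with torch.no_grad():
        logits = model(x.unsqueeze(0))                   # total classification result C = {c1, c2}
        p = torch.softmax(logits, dim=1)[0, 0].item()    # formula (12): P for the glue-break class
    return p > threshold                                 # P > 0.5: glue break
```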
Based on the above technical solution and referring to Fig. 3, the invention further provides a glue coating detection system based on deep learning, which includes a model generation unit 20, an image acquisition unit 21, an image preprocessing unit 22, and an image detection unit 23.
The model generation unit 20 is configured to construct the deep residual network model and train it into the glue coating detection model;
the image acquisition unit 21 is configured to capture images of the glued glass on the gluing production line to obtain original glue-coated images;
the image preprocessing unit 22 is configured to perform ROI cropping and median filtering on the original glue-coated image acquired by the image acquisition unit 21 to obtain the glue-coated image to be detected; the median filter serves as the low-pass filtering step here;
the image detection unit 23 is loaded with the detection model trained by the model generation unit 20 and is configured to detect the glue-coated image processed by the image preprocessing unit 22, output the detection result, and feed the detection result (image number, time, pass/fail judgment, and the like) back to the server in real time for subsequent query and management.
The invention automatically detects window gluing quality in real time by video monitoring and feeds back the detection results, facilitating timely handling and improving the automation level and production efficiency of the gluing workshop.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that in practice various changes in form and detail may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A glue coating detection method based on deep learning, characterized by comprising the following steps:
S1: constructing an initial deep residual network model, the deep residual network model having a five-layer structure comprising an input layer, a convolution layer, a residual layer, a fully connected layer, and a joint loss function layer;
S2: inputting training samples into the deep residual network model for training, thereby obtaining a glue coating detection model;
S3: acquiring an original glue-coated image and preprocessing the original glue-coated image to obtain a glue-coated image to be detected;
S4: inputting the glue-coated image to be detected into the glue coating detection model, obtaining the detection score of the glue-coated image, and judging the state of the glue-coated image to be detected according to the score.
2. The glue coating detection method based on deep learning of claim 1, wherein in S1 the residual layer includes four residual sub-layers, wherein the first residual sub-layer comprises three groups of residual units, each group comprising two 3 × 3 convolution layers with 64 filters and a stride of 1; the second residual sub-layer comprises four groups of residual units, each group comprising two 3 × 3 convolution layers with 128 filters and a stride of 1; the third residual sub-layer comprises six groups of residual units, each group comprising two 3 × 3 convolution layers with 256 filters and a stride of 1; and the fourth residual sub-layer comprises three groups of residual units, each group comprising two 3 × 3 convolution layers with 512 filters and a stride of 1.
3. The glue coating detection method based on deep learning of claim 1, wherein the fully connected layer comprises a first fully connected layer and a second fully connected layer, the first fully connected layer being 1000-dimensional and the second fully connected layer being 2-dimensional.
4. The glue coating detection method based on deep learning of claim 1, wherein the joint loss function layer is used to prevent the deep residual network model from overfitting, and the functional expression of the joint loss function layer is:

Loss = -\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} y_i \log \frac{e^{h(j)(x_i)}}{\sum_{j'=1}^{m} e^{h(j')(x_i)}} + \lambda \sum_{\omega} \omega^{2}   (1)

In formula (1), Loss denotes the joint loss function output; n denotes the number of training samples; m denotes the number of neurons in the fully connected layer; i indexes the ith training sample and j the jth neuron of the fully connected layer; x_i denotes the ith training sample and y_i the label of the ith training sample; h(j) denotes the output value of the jth neuron, so that e^{h(j)(x_i)} is the natural exponential of h(j) for sample x_i and represents the prediction result of the jth neuron; j' indexes the neurons of the fully connected layer in the softmax denominator; λ denotes the coefficient of the regularization loss; and ω denotes the weights in the neural network.
5. The glue coating detection method based on deep learning of claim 1, wherein the preprocessing of the original glue-coated image in S3 includes ROI cropping and median filtering.
6. The glue coating detection method based on deep learning of claim 1, wherein in S4 the specific steps of judging the state of the glue-coated image to be detected are:
S4-1: inputting the glue-coated image to be detected into the glue coating detection model, the input layer of which converts it into an image information matrix and passes the matrix to the convolution layer;
S4-2: the convolution layer performs a convolution operation on the image information matrix to extract the shallow features of the glue-coated image to be detected, the shallow features including the edges, corners, and curves of the glue-coated image;
S4-3: the residual layer performs convolution operations on the shallow features and finally outputs the deep features;
S4-4: the fully connected layer combines the deep features to obtain the mapping of the glue bead loop of the glue-coated image to be detected in the sample label space of the glue coating detection model, analyzes the mapping, and outputs the detection score; if the score is greater than a preset threshold, the glue coating in the image is judged to be broken.
7. The glue coating detection method based on deep learning of claim 6, wherein in S4-2 the convolution layer performs a convolution operation on the image information matrix, with the convolution stride set to 2 and the number of convolution filters set to 64:

f_{3d}(i_1, j_1) = \sum_{m_1} \sum_{k} \sum_{l} G(m_1,\, i_1 + k,\, j_1 + l)\, h_1(m_1, k, l)   (2)

In formula (2), f_{3d} denotes the convolution result of a single convolution filter; i_1 and j_1 denote the coordinates of pixels in the RGB image; m_1 indexes the dimension of the image information matrix; G denotes the image information matrix; h_1 denotes a weight matrix; and k and l denote the coordinates of the weights in the weight matrix h_1;
the convolution results of the 64 convolution filters are then combined into the first (shallow) feature F_{64d}: F_{64d} = {f_{3d_1}, f_{3d_2}, f_{3d_3}, …, f_{3d_64}}, where f_{3d_3} denotes the convolution result of the 3rd convolution filter.
8. The glue coating detection method based on deep learning of claim 6, wherein the residual layer outputs the deep features through the following steps:
S4-3-1: the first residual sub-layer of the residual layer performs a 64-dimensional convolution operation on the shallow features, with the convolution stride set to 1 and the number of convolution filters set to 64, to obtain the middle-layer features;
the convolution formula for a single convolution filter is:

f'_{64d}(i_2, j_2) = \sum_{m_2} \sum_{k_2} \sum_{l_2} F_{64d}(m_2,\, i_2 + k_2,\, j_2 + l_2)\, h_2(m_2, k_2, l_2)   (3)

In formula (3), f'_{64d} denotes the convolution result of a single convolution filter; i_2 and j_2 denote the coordinates of feature points in the shallow feature F_{64d}; m_2 indexes the dimension of F_{64d}; h_2 denotes a weight matrix; and k_2 and l_2 denote the coordinates of the weights in the weight matrix; the outputs of the 64 convolution filters are combined into the second (middle-layer) feature F'_{64d}: F'_{64d} = {f'_{64d_1}, f'_{64d_2}, f'_{64d_3}, …, f'_{64d_64}}, where f'_{64d_3} denotes the convolution result of the 3rd convolution filter;
S4-3-2: the second, third, and fourth residual sub-layers progressively perform convolution operations on the middle-layer features, with the convolution stride set to 1 and the number of convolution filters set to 512, to obtain the deep features;
the convolution formula for a single convolution filter is:

f'_{512d}(i_3, j_3) = \sum_{m_3} \sum_{k_3} \sum_{l_3} F'_{64d}(m_3,\, i_3 + k_3,\, j_3 + l_3)\, h_3(m_3, k_3, l_3)   (4)

In formula (4), f'_{512d} denotes the convolution result of a single convolution filter; i_3 and j_3 denote the coordinates of feature points in the middle-layer feature matrix F'_{64d}; m_3 indexes the dimension of F'_{64d}; h_3 denotes a weight matrix; and k_3 and l_3 denote the coordinates of the weights in the weight matrix; the outputs of the 512 convolution filters are combined into the deep features: F_{512d} = {f'_{512d_1}, f'_{512d_2}, f'_{512d_3}, …, f'_{512d_512}}, where f'_{512d_512} denotes the convolution result of the 512th convolution filter.
9. The glue coating detection method based on deep learning of claim 6, wherein S4-4 specifically comprises the following steps:
S4-4-1: the deep features are computed by the first fully connected layer, which consists of 1000 neurons; the computed output y of each neuron is:

y = w_1 F_{512d} + b_1   (5)

In formula (5), F_{512d} denotes the deep features of the glue-coated image, w_1 denotes the weight matrix of the first fully connected layer's neurons, and b_1 denotes the bias term of the first fully connected layer; the outputs y of the 1000 neurons are combined into a feature vector Y of length 1000: Y = {y_1, y_2, y_3, …, y_1000}, where y_3 denotes the output of the 3rd neuron;
S4-4-2: the second fully connected layer, consisting of 2 neurons, is used as a classifier; each neuron of the second fully connected layer performs a classification calculation on the feature vector Y output by the first fully connected layer, with the output c of the classification calculation being:

c = w_2 Y + b_2   (6)

In formula (6), w_2 denotes the weight matrix of the second fully connected layer's neurons and b_2 denotes the bias term of the second fully connected layer; the classification results of the two neurons are merged into the total classification result C:

C = {c_1, c_2}   (7)

In formula (7), c_1 is the output of the first neuron of the second fully connected layer and c_2 is the output of the second neuron of the second fully connected layer;
the total classification result C is substituted into the Softmax function to obtain a probability value P in [0, 1] (taking c_1 as the output for the glue-break class):

P = e^{c_1} / (e^{c_1} + e^{c_2})   (8)

If P is greater than 0.5, the glue coating is judged to be broken; if P is less than or equal to 0.5, the glue-coated image is judged to have no glue break.
10. A glue coating detection system based on deep learning, characterized by comprising:
a model generation unit for constructing and training the deep residual network model;
an image acquisition unit for acquiring glue-coated image data to obtain an original glue-coated image;
an image preprocessing unit for performing ROI (region of interest) cropping and median filtering on the original glue-coated image to obtain the glue-coated image to be detected; and
an image detection unit for detecting the glue-coated image to be detected, outputting the detection result, and feeding the detection result back to the server in real time.
CN201911292122.1A 2019-12-16 2019-12-16 Deep learning-based glue spreading detection system and method Active CN111192237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911292122.1A CN111192237B (en) 2019-12-16 2019-12-16 Deep learning-based glue spreading detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911292122.1A CN111192237B (en) 2019-12-16 2019-12-16 Deep learning-based glue spreading detection system and method

Publications (2)

Publication Number Publication Date
CN111192237A true CN111192237A (en) 2020-05-22
CN111192237B (en) 2023-05-02

Family

ID=70709219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911292122.1A Active CN111192237B (en) 2019-12-16 2019-12-16 Deep learning-based glue spreading detection system and method

Country Status (1)

Country Link
CN (1) CN111192237B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240066A (en) * 2017-04-28 2017-10-10 天津大学 Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
CN108629267A (en) * 2018-03-01 2018-10-09 南京航空航天大学 A kind of model recognizing method based on depth residual error network
CN108492282A (en) * 2018-03-09 2018-09-04 天津工业大学 Three-dimensional glue spreading based on line-structured light and multitask concatenated convolutional neural network detects
CN108537808A (en) * 2018-04-08 2018-09-14 易思维(天津)科技有限公司 A kind of gluing online test method based on robot teaching point information
CN108710826A (en) * 2018-04-13 2018-10-26 燕山大学 A kind of traffic sign deep learning mode identification method
CN108830850A (en) * 2018-06-28 2018-11-16 信利(惠州)智能显示有限公司 Automatic optics inspection picture analyzing method and apparatus
CN108982546A (en) * 2018-08-29 2018-12-11 燕山大学 A kind of intelligent robot gluing quality detecting system and method
CN109635842A (en) * 2018-11-14 2019-04-16 平安科技(深圳)有限公司 A kind of image classification method, device and computer readable storage medium
CN109948647A (en) * 2019-01-24 2019-06-28 西安交通大学 A kind of electrocardiogram classification method and system based on depth residual error network
KR102008973B1 (en) * 2019-01-25 2019-08-08 (주)나스텍이앤씨 Apparatus and Method for Detection defect of sewer pipe based on Deep Learning
CN109948691A (en) * 2019-03-14 2019-06-28 齐鲁工业大学 Iamge description generation method and device based on depth residual error network and attention
CN110197205A (en) * 2019-05-09 2019-09-03 三峡大学 A kind of image-recognizing method of multiple features source residual error network
CN110503638A (en) * 2019-08-15 2019-11-26 上海理工大学 Spiral colloid amount online test method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AMIRREZA MAHBOD et al.: "Fusing fine-tuned deep features for skin lesion classification", Computerized Medical Imaging and Graphics *
SIGAI: "Understanding overfitting", https://zhuanlan.zhihu.com/p/38224147 *
TAO KONG et al.: "Deep Feature Pyramid Reconfiguration for Object Detection", Proceedings of the European Conference on Computer Vision (ECCV), 2018 *
XIE XUEQING: "Single-image super-resolution reconstruction based on residual dense network", Computer Applications and Software *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112025677A (en) * 2020-07-28 2020-12-04 武汉象点科技有限公司 Automatic guiding glue supplementing system and method based on visual detection
CN112025677B (en) * 2020-07-28 2023-09-26 武汉象点科技有限公司 Automatic guiding glue supplementing system and method based on visual detection
CN112150436A (en) * 2020-09-23 2020-12-29 创新奇智(合肥)科技有限公司 Lipstick inner wall gluing detection method and device, electronic equipment and storage medium
CN112381755A (en) * 2020-09-28 2021-02-19 台州学院 Infusion apparatus catheter gluing defect detection method based on deep learning
CN112365446A (en) * 2020-10-19 2021-02-12 杭州亿奥光电有限公司 Paper bag bonding quality detection method
CN112487707A (en) * 2020-11-13 2021-03-12 北京遥测技术研究所 Intelligent dispensing graph generation method based on LSTM
CN112487707B (en) * 2020-11-13 2023-10-17 北京遥测技术研究所 LSTM-based intelligent dispensing pattern generation method
CN112579884A (en) * 2020-11-27 2021-03-30 腾讯科技(深圳)有限公司 User preference estimation method and device
CN112634203A (en) * 2020-12-02 2021-04-09 富泰华精密电子(郑州)有限公司 Image detection method, electronic device and computer-readable storage medium
CN112634203B (en) * 2020-12-02 2024-05-31 富联精密电子(郑州)有限公司 Image detection method, electronic device, and computer-readable storage medium
CN112716468A (en) * 2020-12-14 2021-04-30 首都医科大学 Non-contact heart rate measuring method and device based on three-dimensional convolution network
CN112765888A (en) * 2021-01-22 2021-05-07 深圳市鑫路远电子设备有限公司 Vacuum glue supply information processing method and system for accurately metering glue amount
CN112862096A (en) * 2021-02-04 2021-05-28 百果园技术(新加坡)有限公司 Model training and data processing method, device, equipment and medium
CN114120357A (en) * 2021-10-22 2022-03-01 中山大学中山眼科中心 Neural network-based myopia prevention method and device
CN114187270A (en) * 2021-12-13 2022-03-15 苏州清翼光电科技有限公司 Gluing quality detection method and system for mining intrinsic safety type controller based on CCD
WO2023116111A1 (en) * 2021-12-22 2023-06-29 郑州云海信息技术有限公司 Disk fault prediction method and apparatus
CN114549454A (en) * 2022-02-18 2022-05-27 岳阳珞佳智能科技有限公司 Online monitoring method and system for chip glue-climbing height of production line
CN114494241A (en) * 2022-02-18 2022-05-13 迪赛福工业互联(深圳)有限公司 Method, device and equipment for detecting defects of glue path
CN114494257B (en) * 2022-04-15 2022-09-30 深圳市元硕自动化科技有限公司 Gluing detection method, device, equipment and storage medium
CN114494257A (en) * 2022-04-15 2022-05-13 深圳市元硕自动化科技有限公司 Gluing detection method, device and equipment and storage medium
CN117470142A (en) * 2023-12-26 2024-01-30 中国林业科学研究院木材工业研究所 Method for detecting glue applying uniformity of artificial board, control method and device
CN117470142B (en) * 2023-12-26 2024-03-15 中国林业科学研究院木材工业研究所 Method for detecting glue applying uniformity of artificial board, control method and device

Also Published As

Publication number Publication date
CN111192237B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN111192237B (en) Deep learning-based glue spreading detection system and method
CN108491880B (en) Object classification and pose estimation method based on neural network
CN108280856B (en) Unknown object grabbing pose estimation method based on mixed information input network model
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN110032925B (en) Gesture image segmentation and recognition method based on improved capsule network and algorithm
CN110245678A (en) A kind of isomery twinned region selection network and the image matching method based on the network
CN110533716B (en) Semantic SLAM system and method based on 3D constraint
CN112837344A (en) Target tracking method for generating twin network based on conditional confrontation
CN110135277B (en) Human behavior recognition method based on convolutional neural network
CN111199255A Small target detection network model and detection method based on darknet53 network
CN111125403A (en) Aided design drawing method and system based on artificial intelligence
Chen et al. Research on fast recognition method of complex sorting images based on deep learning
CN108932471A (en) A kind of vehicle checking method
CN113420776B (en) Multi-side joint detection article classification method based on model fusion
CN117218457B (en) Self-supervision industrial anomaly detection method based on double-layer two-dimensional normalized flow
CN110689557A (en) Improved anti-occlusion target tracking method based on KCF
CN113538342A (en) Convolutional neural network-based quality detection method for coating of aluminum aerosol can
CN110570469B (en) Intelligent identification method for angle position of automobile picture
CN111428555B (en) Joint-divided hand posture estimation method
CN115049842B (en) Method for detecting damage of aircraft skin image and positioning 2D-3D
CN111950476A (en) Deep learning-based automatic river channel ship identification method in complex environment
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network
CN112258402A (en) Dense residual generation countermeasure network capable of rapidly removing rain
CN116664421A (en) Spacecraft image shadow removing method based on multi-illumination angle image fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Tang Chaowei; Wen Haotian; Ruan Shuai; Huang Baojin; Feng Xinxin; Liu Hongbin; Tang Dong

Inventor before: Tang Chaowei; Wen Haotian; Ruan Shuai; Huang Baojin; Feng Xinxin; Liu Hongbin; Tang Dong

GR01 Patent grant