CN114418963B - Battery plate defect detection method based on machine vision - Google Patents

Battery plate defect detection method based on machine vision

Info

Publication number
CN114418963B
Authority
CN
China
Prior art keywords
pictures
layer
machine
extreme learning
groups
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111632245.2A
Other languages
Chinese (zh)
Other versions
CN114418963A (en)
Inventor
杨艳
耿涛
王业琴
庄昊
王举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202111632245.2A priority Critical patent/CN114418963B/en
Publication of CN114418963A publication Critical patent/CN114418963A/en
Application granted granted Critical
Publication of CN114418963B publication Critical patent/CN114418963B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G06N 5/048 Fuzzy inferencing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/002 Image coding using neural networks

Abstract

The invention relates to the field of visual detection and discloses a battery plate defect detection method based on machine vision. The method first applies first-derivative sharpening to the plate images to obtain feature-extraction pictures; the feature-extraction pictures are divided into several groups and fed into an extreme learning machine-autoencoder for encoding, which outputs the encoded pictures; the multi-channel encoded pictures are passed through a fully connected layer, activated by an activation function, summarized into an atlas, and input into a fuzzy support vector machine; the fuzzy support vector machine classifies the pictures, with several membership degrees determining the classification conditions, and divides them into several categories to obtain multiple groups of pictures; the groups of pictures are then uploaded to a master picture library to form an extreme learning machine-self-encoding convolutional neural network, which is applied to verify the correctness of the training. By performing deep learning with the extreme learning machine-self-encoding convolutional neural network, the invention can greatly improve recognition accuracy and greatly reduce the plate defect problem.

Description

Battery plate defect detection method based on machine vision
Technical Field
The invention belongs to the field of visual detection, and particularly relates to a polar plate defect detection method based on machine vision.
Background
Batteries are used in a wide range of applications, and their quality strongly affects the normal operation of many devices; the quality of the electrode plates inside a battery likewise has a major impact on its service life and stability. Battery plates therefore need to be inspected during production. Traditional inspection relies on manual sampling, which cannot guarantee that every plate is qualified. The invention inspects the plates on the production line one by one with a high-definition camera, ensuring plate quality throughout the battery plate production process.
Disclosure of Invention
Objective of the invention: in view of the problems in the prior art, the invention provides a safe and efficient plate detection method that effectively overcomes the low efficiency and instability of manual spot checks. Images of the plates are captured with a high-definition camera, and deep learning with an extreme learning machine-self-encoding neural network algorithm is used to solve the plate detection problem effectively.
The technical solution is as follows: the invention provides a battery plate defect detection method based on machine vision, which comprises the following steps:
step 1: photographing the plates on the production line, performing image processing on the captured pictures, extracting the picture features by first-derivative sharpening to obtain feature-extraction pictures, and numbering the plates;
step 2: dividing the feature-extraction pictures into several groups, feeding each group into an extreme learning machine-autoencoder for encoding, and outputting the encoded pictures;
step 3: feeding the multi-channel encoded pictures into a fully connected layer, where they are activated by an activation function, then summarizing them into an atlas and inputting the atlas into a fuzzy support vector machine;
step 4: classifying with the fuzzy support vector machine, with several membership degrees determining the classification conditions, and dividing the pictures into several different categories to obtain multiple groups of pictures;
step 5: uploading the groups of pictures to a master picture library and retaining them to form an extreme learning machine-self-encoding convolutional neural network, which is then applied to verify the correctness of the training.
Further, the specific operation of the first-derivative sharpening processing on the pictures in step 1 is as follows:
step 1.1: the gradient of the image f at coordinates (x, y) is defined as the two-dimensional vector
∇f = [g_x, g_y]^T = [∂f/∂x, ∂f/∂y]^T;
this vector has the important geometric property that, at position (x, y), it points in the direction of the maximum rate of change of f;
step 1.2: a gradient image is built, expressed as M(x, y), the magnitude of the vector ∇f:
M(x, y) = ||∇f|| = (g_x^2 + g_y^2)^(1/2)
where ||∇f|| is the norm of the gradient, giving the rate of change in the gradient direction at (x, y); M(x, y) has the same size as the original image;
step 1.3: the sum-of-squares operation is replaced by absolute values:
M(x, y) ≈ |g_x| + |g_y|.
Further, the structure of the extreme learning machine-autoencoder in step 2 is as follows:
the extreme learning machine-autoencoder consists of a convolution layer and a pooling layer, each built from an input layer, a hidden layer and an output layer; the input layer and the output layer have the same number of neurons, and the network is a feed-forward structure without loops; the input layer to the hidden layer performs the encoding of the data, and the hidden layer to the output layer performs the decoding; the operation of the extreme learning machine-autoencoder is:
h = σ_a(W_a * x + b_a),  x̂ = σ_s(W_s * h + b_s)
where x is the input, σ_a and σ_s are activation functions, W_a and W_s are weights, b_a and b_s are biases, R^v and R^u are the input and output sets respectively, and * denotes the convolution operation;
the extreme learning machine is combined into the self-encoder; it has the same structure as the self-encoder, and the hidden-layer input weights and biases must be orthogonalized, i.e.:
W^T W = I,  b^T b = I
the relation between the hidden-layer output H of the combined extreme learning machine-self-encoder and the reconstructed samples X̂ is:
X̂ = Hβ
the solution of the output weights β of the extreme learning machine-self-encoder depends on the number of network nodes; when the number of input nodes N differs from the number of hidden-layer nodes L, β is the least-squares solution β = H†X, with H† the Moore-Penrose pseudo-inverse of H; when the node numbers are the same, β = H^{-1}X.
Further, the fully connected layer in step 3 includes a weight vector and an activation function; the weight vector sets the weighting proportions of the multi-channel features, and the activation function is f(x) = max(0, x).
Further, the specific operation of the fuzzy support vector machine in step 4 is as follows:
step 4.1: introduce the features of the samples, the class labels and the membership degree of each sample, and set the training set as:
{(x_1, y_1, μ_1(x_1)), (x_2, y_2, μ_2(x_2)), ..., (x_n, y_n, μ_n(x_n))}
where each training feature is x_i ∈ R^n, the class label is y_i ∈ {-1, 1}, ξ_i is the classification error term (slack variable) of the support vector machine objective function, and μ(x_i)ξ_i is the weighted error term;
step 4.2: the objective function of the fuzzy support vector machine is:
min_{ω,b,ξ} (1/2)||ω||^2 + C Σ_{i=1}^{n} μ(x_i)ξ_i
s.t. y_i(ω·x_i + b) - 1 + ξ_i ≥ 0,
ξ_i ≥ 0, i = 1, 2, ..., n
where ω is the weight vector of the separating hyperplane and C is the penalty factor; the corresponding discriminant function is:
f(x) = sgn(ω·x + b)
step 4.3: the membership function is calculated from the distance of the sample to the class center as:
μ(x_i) = 1 - ||x_c - x_i|| / (r + ε)
where x_c is the class center, x_i is the sample, r is the class radius and ε is a preset small positive number.
Further, the fuzzy support vector machine selects three membership degrees to determine the classification conditions and divides the pictures into 3 different categories, obtaining multiple groups of pictures; the 3 categories are: defect-free images, uncertain images, and defective images.
The beneficial effects are that:
1. By using first-derivative sharpening, the invention effectively reduces the complexity of the picture data handled by the convolutional neural network; the sharpening step extracts the picture features and reduces interference from non-feature regions, which effectively improves recognition accuracy and shortens training time.
2. Compared with a traditional self-encoder, the extreme learning machine-self-encoder needs no iterative fine-tuning by gradient descent, which greatly shortens training time and avoids the local-optimum problem; the network parameters require no iterative fine-tuning, the least-squares solution serves as the global optimum, and the generalization ability of the algorithm is preserved.
3. Replacing the traditional Softmax classifier with a fuzzy support vector machine alleviates the over-fitting and noise-sensitivity problems of traditional classifiers and improves the classifier's recognition accuracy.
4. Introducing the improved activation function f(x) = max(0, x) into the fully connected layer of the convolutional neural network effectively activates the pooled data and binarizes it, greatly reducing the amount of computation; compared with the activation functions in conventional convolution algorithms it is more effective, the convolutional neural network learns faster, the learning process is greatly accelerated, and the amount of programming is reduced.
Drawings
FIG. 1 is a flow chart of a machine vision based polar plate detection method;
FIG. 2 is a flow chart of an extreme learning machine-self-encoding convolutional neural network;
FIG. 3 is a block diagram of an automatic encoder;
FIG. 4 is a block diagram of a multi-channel extreme learning machine self-encoding convolutional neural network.
Detailed Description
The present invention will be further described with reference to the accompanying drawings, and the following examples are only for more clearly illustrating the technical aspects of the present invention, and are not to be construed as limiting the scope of the present invention.
Referring to fig. 1, the invention discloses a machine-vision-based method for detecting battery plate defects, which combines machine vision, image processing and an extreme learning machine-self-encoding convolutional neural network.
In the detection method, a high-definition camera at a fixed position on the production line photographs each plate; the captured image is numbered and first-derivative sharpening is applied.
After the plate pictures are obtained, the image is subjected to first-derivative sharpening. The gradient of the image f at coordinates (x, y) is defined as the two-dimensional vector
∇f = [g_x, g_y]^T = [∂f/∂x, ∂f/∂y]^T.
The important geometric property of this vector is that, at position (x, y), it points in the direction of the maximum rate of change of f.
To facilitate pixel-level processing, a gradient image is usually created, expressed as M(x, y), the magnitude of the vector ∇f:
M(x, y) = ||∇f|| = (g_x^2 + g_y^2)^(1/2)
where ||∇f|| is the norm of the gradient, giving the rate of change in the gradient direction at (x, y); M(x, y) has the same size as the original image, which ensures that the image is not distorted.
To keep the amount of computation small and the processing fast, the sum-of-squares operation is usually replaced by absolute values:
M(x, y) ≈ |g_x| + |g_y|
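As an illustrative sketch only (not the patented implementation), the sharpening step above can be written as follows, assuming a grayscale plate image stored as a NumPy array; the simple forward differences used for g_x and g_y are an assumed discretization, since the text does not prescribe one:

```python
import numpy as np

def first_derivative_sharpen(image: np.ndarray) -> np.ndarray:
    """Approximate gradient magnitude M(x, y) ~ |g_x| + |g_y| of a grayscale image."""
    f = image.astype(np.float64)

    # First derivatives g_x, g_y by forward differences (one possible discretization).
    g_x = np.zeros_like(f)
    g_y = np.zeros_like(f)
    g_x[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal rate of change
    g_y[:-1, :] = f[1:, :] - f[:-1, :]   # vertical rate of change

    # Absolute-value approximation instead of sqrt(g_x**2 + g_y**2);
    # the result has the same size as the original image.
    return np.abs(g_x) + np.abs(g_y)
```

The absolute-value form avoids the square root and keeps the per-pixel cost low, which matches the motivation given above.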
referring to fig. 2-4, the extreme learning machine-self-coding convolutional neural network is a main part of the present invention, and mainly comprises an extreme learning machine-self encoder, a full connection layer and a fuzzy support vector machine:
the extreme learning machine-automatic encoder is composed of a convolution layer and a pooling layer, the quantity of neurons of an input layer and an output layer is the same, the network layer is of a forward propagation loop-free structure, the input layer to the hidden layer are the coding process of data, the hidden layer to the output layer are the decoding process of data, and the operation process of the extreme learning machine-automatic encoder is as follows:
wherein x is input, sigma a Sigma is an activation function, W a 、W s Weight, b a 、b s To bias R v 、R u Respectively an input set and an output set, which are convolution operations.
Combining an extreme learning machine into a self-encoder, wherein the extreme learning machine has the same structure as the self-encoder, and the hidden layer input weight and bias are required to be orthogonalized
W T W=I,b T b=I
Extreme learning machine-self encoder combined hidden layer output H and reconstructed samplesThe relation of (2) is that
The solution method of the extreme learning machine-self encoder output weight beta is related to the number of network nodes. When the number N of the input nodes is different from the number L of the hidden layer nodes, the beta calculation formula is as follows
When the node numbers are the same, the beta calculation formula is as follows
β=H -1 X
Compared with the traditional self-encoder, the extreme learning machine-self-encoder does not need to adopt a gradient descent method for iterative fine adjustment, and the training time is greatly shortened.
And activates the data at the input layer to the hidden layer and at the hidden layer to the output layer using an activation function σ=max {0, x }.
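A minimal numerical sketch of this closed-form training, assuming the pictures are flattened into the rows of a matrix X; the random orthogonal initialization and the matrix sizes are illustrative assumptions, while the activation σ(x) = max{0, x}, the orthogonality conditions and the β solutions follow the formulas above:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # activation sigma(x) = max{0, x}

def elm_autoencoder(X: np.ndarray, n_hidden: int, seed: int = 0):
    """Solve the ELM-autoencoder output weights beta in closed form (no fine-tuning)."""
    rng = np.random.default_rng(seed)
    n_samples, n_inputs = X.shape
    assert n_hidden <= n_inputs, "sketch assumes no more hidden nodes than inputs"

    # Random hidden weights and bias, orthogonalized so that W^T W = I and b^T b = 1.
    W, _ = np.linalg.qr(rng.standard_normal((n_inputs, n_hidden)))
    b = rng.standard_normal(n_hidden)
    b /= np.linalg.norm(b)

    H = relu(X @ W + b)                     # hidden-layer output (encoding)

    if H.shape[0] == H.shape[1]:            # node counts match: beta = H^{-1} X
        beta = np.linalg.inv(H) @ X
    else:                                   # otherwise least-squares: beta = pinv(H) X
        beta = np.linalg.pinv(H) @ X

    X_hat = H @ beta                        # reconstructed samples X_hat = H beta
    return W, b, beta, X_hat
```

Because β is obtained by a single linear solve rather than by gradient-descent iterations, the training-time advantage claimed for the extreme learning machine-self-encoder follows directly.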
The fully connected layer comprises a weight vector and an activation function; the weight vector determines the weighting proportions of the multi-channel features. To guarantee the training speed of the deep neural network, the traditional activation functions f(x) = tanh(x) and f(x) = 1/(1 + e^(-x)) are replaced by f(x) = max(0, x).
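The weighted multi-channel fusion with f(x) = max(0, x) can be sketched as below; the channel count and the example weights are assumptions chosen only for illustration:

```python
import numpy as np

def fully_connected_fusion(channel_features, channel_weights):
    """Weight each channel's feature vector, sum them, and apply f(x) = max(0, x)."""
    fused = sum(w * f for w, f in zip(channel_weights, channel_features))
    return np.maximum(0.0, fused)

# Example: three encoded channels of equal length, weighted 0.5 / 0.3 / 0.2.
channels = [np.random.randn(128) for _ in range(3)]
atlas_entry = fully_connected_fusion(channels, [0.5, 0.3, 0.2])
```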
A fuzzy support vector machine replaces the classification layer. When the fuzzy support vector machine classifies the samples, the features of the samples, the class labels and the membership degree of each sample are introduced, and the training set is set as
{(x_1, y_1, μ_1(x_1)), (x_2, y_2, μ_2(x_2)), ..., (x_n, y_n, μ_n(x_n))}
Each training feature is x_i ∈ R^n, the class label is y_i ∈ {-1, 1}, ξ_i is the classification error term (slack variable) of the support vector machine objective function, and μ(x_i)ξ_i is the weighted error term. The objective function of the fuzzy support vector machine is
min_{ω,b,ξ} (1/2)||ω||^2 + C Σ_{i=1}^{n} μ(x_i)ξ_i
s.t. y_i(ω·x_i + b) - 1 + ξ_i ≥ 0,
ξ_i ≥ 0, i = 1, 2, ..., n
where ω is the weight vector of the separating hyperplane and C is the penalty factor; the corresponding discriminant function is
f(x) = sgn(ω·x + b)
The membership function is calculated from the distance of the sample to the class center as
μ(x_i) = 1 - ||x_c - x_i|| / (r + ε)
where x_c is the class center, x_i is the sample, r is the class radius and ε is a preset small positive number.
In this embodiment, the fuzzy support vector machine uses three membership degrees to determine the classification conditions and divides the pictures into 3 categories, obtaining multiple groups of pictures; the 3 categories are: defect-free images, uncertain images, and defective images.
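A minimal sketch of the distance-based membership μ(x_i) = 1 - ||x_c - x_i|| / (r + ε) and of the resulting three-way grouping; the class centers, class radii and the margin used to fall back to the "uncertain" group are illustrative assumptions, not values given in the text:

```python
import numpy as np

def class_membership(x, class_center, class_radius, eps=1e-3):
    """Membership of sample x in one class, from its distance to the class center."""
    return 1.0 - np.linalg.norm(class_center - x) / (class_radius + eps)

def classify_plate(feature_vec, centers, radii, min_margin=0.1):
    """Pick the class with the largest membership; fall back to 'uncertain' when close."""
    classes = ("defect_free", "uncertain", "defective")
    mu = {c: class_membership(feature_vec, centers[c], radii[c]) for c in classes}
    best, second = sorted(mu, key=mu.get, reverse=True)[:2]
    return best if mu[best] - mu[second] >= min_margin else "uncertain"
```

Pictures labelled defect_free, uncertain and defective are then uploaded to the corresponding groups of the master picture library, as described in step 5.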
The foregoing embodiments are merely illustrative of the technical concept and features of the present invention, and are intended to enable those skilled in the art to understand the present invention and to implement the same, not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be included in the scope of the present invention.

Claims (3)

1. The battery plate defect detection method based on machine vision is characterized by comprising the following steps:
step 1: photographing the plates on the production line, performing image processing on the captured pictures, extracting the picture features by first-derivative sharpening to obtain feature-extraction pictures, and numbering the plates;
step 2: dividing the feature-extraction pictures into several groups, feeding each group into an extreme learning machine-autoencoder for encoding, and outputting the encoded pictures;
the extreme learning machine-autoencoder consists of a convolution layer and a pooling layer, each built from an input layer, a hidden layer and an output layer; the input layer and the output layer have the same number of neurons, and the network is a feed-forward structure without loops; the input layer to the hidden layer performs the encoding of the data, and the hidden layer to the output layer performs the decoding; the operation of the extreme learning machine-autoencoder is:
h = σ_a(W_a * x + b_a),  x̂ = σ_s(W_s * h + b_s)
where x is the input, σ_a and σ_s are activation functions, W_a and W_s are weights, b_a and b_s are biases, R^v and R^u are the input and output sets respectively, and * denotes the convolution operation;
the extreme learning machine is combined into the self-encoder; it has the same structure as the self-encoder, and the hidden-layer input weights and biases must be orthogonalized, i.e.:
W^T W = I,  b^T b = I
the relation between the hidden-layer output H of the combined extreme learning machine-self-encoder and the reconstructed samples X̂ is:
X̂ = Hβ
the solution of the output weights β of the extreme learning machine-self-encoder depends on the number of network nodes; when the number of input nodes N differs from the number of hidden-layer nodes L, β is the least-squares solution β = H†X, with H† the Moore-Penrose pseudo-inverse of H; when the node numbers are the same, β = H^{-1}X;
step 3: feeding the multi-channel encoded pictures into a fully connected layer, where they are activated by an activation function, then summarizing them into an atlas and inputting the atlas into a fuzzy support vector machine;
step 4: classifying with the fuzzy support vector machine, with several membership degrees determining the classification conditions, and dividing the pictures into several different categories to obtain multiple groups of pictures;
step 4.1: introducing the features of the samples, the class labels and the membership degree of each sample, and setting the training set as: {(x_1, y_1, μ_1(x_1)), (x_2, y_2, μ_2(x_2)), ..., (x_n, y_n, μ_n(x_n))}, where each training feature is x_i ∈ R^n, the class label is y_i ∈ {-1, 1}, ξ_i is the classification error term (slack variable) of the support vector machine objective function, and μ(x_i)ξ_i is the weighted error term;
step 4.2: the objective function of the fuzzy support vector machine is determined as:
min_{ω,b,ξ} (1/2)||ω||^2 + C Σ_{i=1}^{n} μ(x_i)ξ_i
s.t. y_i(ω·x_i + b) - 1 + ξ_i ≥ 0,
ξ_i ≥ 0, i = 1, 2, ..., n
where ω is the weight vector of the separating hyperplane and C is the penalty factor; the corresponding discriminant function is:
f(x) = sgn(ω·x + b)
step 4.3: the membership function is calculated from the distance of the sample to the class center as:
μ(x_i) = 1 - ||x_c - x_i|| / (r + ε)
where x_c is the class center, x_i is the sample, r is the class radius, and ε is a preset small positive number;
the fuzzy support vector machine selects three membership degrees to determine the classification conditions and divides the pictures into 3 different categories, obtaining multiple groups of pictures; the 3 categories are: defect-free images, uncertain images, and defective images;
step 5: uploading the groups of pictures to a master picture library and retaining them to form an extreme learning machine-self-encoding convolutional neural network, which is then applied to verify the correctness of the training.
2. The machine vision-based battery plate defect detection method according to claim 1, wherein the specific operation of the first-derivative sharpening processing on the pictures in step 1 is as follows:
step 1.1: the gradient of the image f at coordinates (x, y) is defined as the two-dimensional vector
∇f = [g_x, g_y]^T = [∂f/∂x, ∂f/∂y]^T;
this vector has the important geometric property that, at position (x, y), it points in the direction of the maximum rate of change of f;
step 1.2: a gradient image is built, expressed as M(x, y), the magnitude of the vector ∇f:
M(x, y) = ||∇f|| = (g_x^2 + g_y^2)^(1/2)
where ||∇f|| is the norm of the gradient, giving the rate of change in the gradient direction at (x, y); M(x, y) has the same size as the original image;
step 1.3: the sum-of-squares operation is replaced by absolute values:
M(x, y) ≈ |g_x| + |g_y|.
3. The machine vision-based battery plate defect detection method according to claim 1, wherein the fully connected layer in step 3 includes a weight vector and an activation function, the weight vector sets the weighting proportions of the multi-channel features, and the activation function is f(x) = max(0, x).
CN202111632245.2A 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision Active CN114418963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111632245.2A CN114418963B (en) 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111632245.2A CN114418963B (en) 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision

Publications (2)

Publication Number Publication Date
CN114418963A CN114418963A (en) 2022-04-29
CN114418963B 2023-12-05

Family

ID=81269202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111632245.2A Active CN114418963B (en) 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision

Country Status (1)

Country Link
CN (1) CN114418963B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116125803A (en) * 2022-12-28 2023-05-16 淮阴工学院 Inverter backstepping fuzzy neural network control strategy based on extreme learning machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598861A (en) * 2020-05-13 2020-08-28 河北工业大学 Improved Faster R-CNN model-based non-uniform texture small defect detection method
KR20200103150A (en) * 2019-02-11 2020-09-02 (주) 인텍플러스 Visual inspection system
CN112270659A (en) * 2020-08-31 2021-01-26 中国科学院合肥物质科学研究院 Rapid detection method and system for surface defects of pole piece of power battery
CN113592845A (en) * 2021-08-10 2021-11-02 深圳市华汉伟业科技有限公司 Defect detection method and device for battery coating and storage medium
CN113658176A (en) * 2021-09-07 2021-11-16 重庆科技学院 Ceramic tile surface defect detection method based on interactive attention and convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109239075B (en) * 2018-08-27 2021-11-30 北京百度网讯科技有限公司 Battery detection method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200103150A (en) * 2019-02-11 2020-09-02 (주) 인텍플러스 Visual inspection system
CN111598861A (en) * 2020-05-13 2020-08-28 河北工业大学 Improved Faster R-CNN model-based non-uniform texture small defect detection method
CN112270659A (en) * 2020-08-31 2021-01-26 中国科学院合肥物质科学研究院 Rapid detection method and system for surface defects of pole piece of power battery
CN113592845A (en) * 2021-08-10 2021-11-02 深圳市华汉伟业科技有限公司 Defect detection method and device for battery coating and storage medium
CN113658176A (en) * 2021-09-07 2021-11-16 重庆科技学院 Ceramic tile surface defect detection method based on interactive attention and convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Technical data-driven tool condition monitoring challenges for CNC milling: a review; Shi Yuen Wong et al.; The International Journal of Advanced Manufacturing Technology; pp. 4837-4857 *
Lithium battery electrode plate defect detection and classification system based on machine vision; Wang Lu; Wanfang Dissertations (万方学位论文); pp. 1-64 *

Also Published As

Publication number Publication date
CN114418963A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN110543878B (en) Pointer instrument reading identification method based on neural network
CN103927534B (en) A kind of online visible detection method of coding character based on convolutional neural networks
Rijal et al. Ensemble of deep neural networks for estimating particulate matter from images
CN109272500B (en) Fabric classification method based on adaptive convolutional neural network
CN111833352B (en) Image segmentation method for improving U-net network based on octave convolution
CN110827260B (en) Cloth defect classification method based on LBP characteristics and convolutional neural network
CN109829414B (en) Pedestrian re-identification method based on label uncertainty and human body component model
CN109344856B (en) Offline signature identification method based on multilayer discriminant feature learning
CN110543906B (en) Automatic skin recognition method based on Mask R-CNN model
CN114418963B (en) Battery plate defect detection method based on machine vision
CN111145145B (en) Image surface defect detection method based on MobileNet
CN112164033A (en) Abnormal feature editing-based method for detecting surface defects of counternetwork texture
CN114049305A (en) Distribution line pin defect detection method based on improved ALI and fast-RCNN
CN110555461A (en) scene classification method and system based on multi-structure convolutional neural network feature fusion
CN110348503A (en) A kind of apple quality detection method based on convolutional neural networks
CN111445484B (en) Image-level labeling-based industrial image abnormal area pixel level segmentation method
CN112132257A (en) Neural network model training method based on pyramid pooling and long-term memory structure
Zhou et al. Defect detection method based on knowledge distillation
CN107529647B (en) Cloud picture cloud amount calculation method based on multilayer unsupervised sparse learning network
CN110991247B (en) Electronic component identification method based on deep learning and NCA fusion
CN112634171A (en) Image defogging method based on Bayes convolutional neural network and storage medium
CN115761416A (en) CS-YOLOv5s network-based insulator defect detection method
CN114445726B (en) Sample library establishing method and device based on deep learning
CN114782322A (en) YOLOv5 model arc additive manufacturing molten pool defect detection method
CN116029957A (en) Insulator image pollution identification method based on Markov chain Monte Carlo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant