CN114418963A - Battery pole plate defect detection method based on machine vision - Google Patents


Info

Publication number
CN114418963A
CN114418963A (application CN202111632245.2A; granted as CN114418963B)
Authority
CN
China
Prior art keywords
pictures
machine
layer
extreme learning
learning machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111632245.2A
Other languages
Chinese (zh)
Other versions
CN114418963B (en)
Inventor
杨艳
耿涛
王业琴
庄昊
王举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202111632245.2A priority Critical patent/CN114418963B/en
Publication of CN114418963A publication Critical patent/CN114418963A/en
Application granted granted Critical
Publication of CN114418963B publication Critical patent/CN114418963B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24133 — Classification techniques based on distances to prototypes
    • G06F 18/2415 — Classification techniques based on parametric or probabilistic models
    • G06F 18/2431 — Classification into multiple classes
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/048 — Activation functions
    • G06N 3/08 — Learning methods
    • G06N 5/048 — Fuzzy inferencing
    • G06T 9/002 — Image coding using neural networks


Abstract

The invention relates to the field of visual inspection and discloses a battery plate defect detection method based on machine vision. A plate image is sharpened with the first derivative to obtain a feature-extraction picture; the feature-extraction pictures are divided into several groups, fed through multiple channels into an extreme learning machine autoencoder for encoding, and the encoded pictures are output. The multi-channel encoded pictures are passed to a fully connected layer, activated there with an activation function, gathered into an image set, and fed into a fuzzy support vector machine. The fuzzy support vector machine classifies the pictures, deciding the classification according to several membership degrees and dividing them into several different classes to obtain several groups of pictures. The groups of pictures are uploaded to a master picture library, forming an extreme learning machine self-encoding convolutional neural network that is applied to check training correctness. By performing deep learning with this extreme learning machine self-encoding convolutional neural network, the invention can greatly improve recognition accuracy and substantially reduce the plate defect problem.

Description

Battery pole plate defect detection method based on machine vision
Technical Field
The invention belongs to the field of visual inspection, and particularly relates to a battery plate defect detection method based on machine vision.
Background
Batteries are widely used, and their quality strongly affects the normal operation of many kinds of equipment; the quality of the electrode plates inside a battery in turn has a serious impact on its service life and stability. Battery plates therefore need to be inspected during production. Traditional battery plate inspection is usually manual sampling inspection, which cannot guarantee that every plate is qualified. The invention uses a high-definition camera to inspect the plates on the production line one by one, ensuring plate quality throughout battery plate production.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a safe and efficient plate detection method. It addresses the low efficiency and instability of current manual spot inspection: a high-definition camera captures images of the plates, and deep learning with an extreme learning machine self-encoding neural network algorithm solves the plate detection problem effectively.
The technical content is as follows: the invention provides a battery plate defect detection method based on machine vision, which comprises the following steps:
Step 1: shoot the plates on the production line, process the captured pictures, extract picture features through first-derivative sharpening to obtain feature-extraction pictures, and number the plates;
Step 2: divide the feature-extraction pictures into several groups, feed them through multiple channels into the extreme learning machine autoencoder for encoding, and output the encoded pictures;
Step 3: feed the multi-channel encoded pictures into a fully connected layer, activate them there with an activation function, gather them into an image set, and feed the image set into a fuzzy support vector machine;
Step 4: classify with the fuzzy support vector machine, decide the classification according to several membership degrees, and divide the pictures into several different classes to obtain several groups of pictures;
Step 5: upload the groups of pictures to a master picture library and retain them, forming an extreme learning machine self-encoding convolutional neural network, which is applied to check training correctness.
Further, the specific operation of the first-derivative sharpening of the pictures in step 1 is as follows:
Step 1.1: the gradient of the image f at coordinates (x, y) is defined as the two-dimensional vector

$$\nabla f = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$$

This vector has the important geometric property that it points in the direction of the maximum rate of change of f at (x, y);
Step 1.2: a gradient image is established from the magnitude of the vector ∇f, denoted M(x, y):

$$M(x, y) = \|\nabla f\| = \sqrt{g_x^2 + g_y^2}$$

where ‖∇f‖ is the norm, representing the value of the rate of change in the gradient-vector direction at (x, y); M(x, y) has the same size as the original image;
Step 1.3: the square-root-of-squares operation is replaced with an absolute-value operation:

$$M(x, y) \approx |g_x| + |g_y|$$
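The gradient sharpening of step 1 can be sketched in a few lines of numpy; this is a minimal illustration using forward differences and the absolute-value approximation M(x, y) ≈ |g_x| + |g_y| above (the function name and difference scheme are illustrative, not taken from the patent):

```python
import numpy as np

def gradient_sharpen(image):
    """First-derivative (gradient) feature extraction, a sketch of step 1.

    Computes g_x and g_y with forward differences and returns the
    absolute-value approximation M(x, y) ~ |g_x| + |g_y|.
    """
    f = image.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]   # partial f / partial x
    gy[:-1, :] = f[1:, :] - f[:-1, :]   # partial f / partial y
    return np.abs(gx) + np.abs(gy)      # M(x, y), same size as f

# A bright vertical edge produces a strong response along that edge.
plate = np.zeros((4, 4))
plate[:, 2:] = 255.0
m = gradient_sharpen(plate)
```

As the text notes, the absolute-value form avoids the square root of step 1.2, which matters when every frame from the production line must be processed quickly.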
further, the structure of the limit learning machine-automatic encoder in the step 2 is as follows:
the extreme learning machine-automatic encoder is composed of a convolution layer and a pooling layer, wherein the convolution layer and the pooling layer are composed of an input layer, an output layer and a hidden layer, neurons of the input layer and the output layer are the same in number, a network layer is a forward propagation loop-free structure, the input layer to the hidden layer are used for encoding data, the hidden layer to the output layer are used for decoding the data, and the operation process of the extreme learning machine-automatic encoder is satisfied as follows:
Figure BDA0003440585480000023
where x is the input, σaσ is the activation function, Wa、WsIs a weight, ba、bsTo bias, Rv、RuRespectively an input set and an output set, and performing convolution operation;
combining an extreme learning machine into a self-encoder, wherein the extreme learning machine and the self-encoder have the same structure, and the hidden layer input weight and the bias need to be orthogonalized, namely:
WTW=I,bTb=I
extreme learning machine-autoencoder combined hidden layer output H and reconstructed samples
Figure BDA0003440585480000024
The relationship of (1) is:
Figure BDA0003440585480000025
the solving method of the output weight beta of the extreme learning machine-self-encoder is related to the number of network nodes, and when the number of input nodes N is different from the number of nodes L of the hidden layer, the calculation formula of beta is as follows:
Figure BDA0003440585480000031
when the number of nodes is the same, the calculation formula of beta is as follows: beta is H-1X。
Further, the fully connected layer in step 3 includes a weight vector, which sets the proportions of the multi-channel weighted feature residuals, and an activation function f(x) = max(0, x).
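A minimal sketch of that fully connected layer with the f(x) = max(0, x) activation; the dense weight matrix and shapes are illustrative, not specified by the patent:

```python
import numpy as np

def fully_connected_relu(features, weights, bias):
    """Weighted combination of multi-channel features followed by
    the activation f(x) = max(0, x) used in step 3."""
    z = features @ weights + bias
    return np.maximum(0.0, z)   # the negative part is clipped to zero

x = np.array([[1.0, -2.0, 3.0]])          # one sample, three channels
out = fully_connected_relu(x, np.eye(3), np.zeros(3))
```

With identity weights this just demonstrates the activation: negative entries are zeroed, which is the "binarizing" effect on pooled data that the advantages section credits with reducing computation.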
Further, the specific operation of the fuzzy support vector machine in step 4 is as follows:
Step 4.1: introduce the features, the class identifiers and the membership degree of each sample, and let the training set be:

$$\{(x_1, y_1, \mu_1(x_1)), (x_2, y_2, \mu_2(x_2)), \ldots, (x_n, y_n, \mu_n(x_n))\}$$

where each training feature is denoted x_i ∈ R^n, the class identifier is y_i ∈ {-1, 1}, ξ_i is the classification error term of the support vector machine objective function, and μ(x_i)ξ_i is the weighted error term;
Step 4.2: the objective function of the fuzzy support vector machine is determined as:

$$\min_{\omega, b, \xi}\ \frac{1}{2}\|\omega\|^2 + C\sum_{i=1}^{n}\mu(x_i)\xi_i$$
$$\text{s.t.}\quad y_i(\omega \cdot x_i + b) - 1 + \xi_i \ge 0, \qquad \xi_i \ge 0,\ i = 1, 2, \ldots, n$$

where ω is the weight vector of the separating hyperplane and C is the penalty factor; the corresponding discriminant function is:

$$f(x) = \operatorname{sgn}(\omega \cdot x + b)$$

Step 4.3: the membership function is computed from the distance of a sample to the class center:

$$\mu(x_i) = 1 - \frac{\|\bar{x} - x_i\|}{r + \varepsilon}$$

where x̄ is the class center, x_i is the sample, r is the radius of the class, and ε is a preset, very small positive constant.
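The membership function of step 4.3 can be sketched directly; this minimal numpy version takes the class center as the sample mean and the class radius as the largest distance to it (both reasonable but assumed readings of the text):

```python
import numpy as np

def membership(samples, eps=1e-6):
    """Distance-to-class-centre membership from step 4.3:
    mu(x_i) = 1 - ||x_bar - x_i|| / (r + eps)."""
    centre = samples.mean(axis=0)                    # class centre x_bar
    dists = np.linalg.norm(samples - centre, axis=1)
    r = dists.max()                                  # class radius
    return 1.0 - dists / (r + eps)

pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
mu = membership(pts)
# the sample closest to the centre receives the largest membership
```

These μ values are exactly the per-sample weights multiplying ξ_i in the objective of step 4.2, so outliers far from the class center contribute less to the penalty term and disturb the decision boundary less.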
Further, the fuzzy support vector machine selects three membership degrees to decide the classification and divides the pictures into 3 different classes, obtaining several groups of pictures; the 3 classes are: defect-free images, indeterminate images, and defective images.
Beneficial effects:
1. First-derivative sharpening effectively reduces the complexity of the picture-data processing for the convolutional neural network; extracting the picture features this way reduces interference from non-feature regions, effectively improves recognition accuracy, and shortens training time.
2. Compared with a traditional autoencoder, the extreme learning machine autoencoder needs no iterative fine-tuning by gradient descent, which greatly shortens training time and overcomes the local-optimum problem; the network parameters need no iterative fine-tuning, the least-squares solution serves as the global optimum, and the generalization ability of the algorithm is guaranteed.
3. Using a fuzzy support vector machine instead of the traditional Softmax classifier overcomes overfitting, noise interference and similar problems of traditional classifiers and improves classification accuracy.
4. In the convolutional neural network, the improved activation function f(x) = max(0, x) is introduced in the fully connected layer. It effectively activates the pooled data and binarizes it, greatly reducing the amount of computation; compared with the activation functions in conventional convolution algorithms it is more effective and faster, greatly accelerating learning and reducing the amount of programming.
Drawings
FIG. 1 is a flow chart of a machine vision based pole plate inspection method;
FIG. 2 is a flow chart of an extreme learning machine-self-encoding convolutional neural network;
FIG. 3 is a block diagram of an automatic encoder;
FIG. 4 is a diagram of a self-coding convolutional neural network structure of a multi-channel extreme learning machine.
Detailed Description
The present invention is further described with reference to the accompanying drawings, and the following examples are only for clearly illustrating the technical solutions of the present invention, and should not be taken as limiting the scope of the present invention.
Referring to fig. 1, the invention discloses a battery plate detection method built on machine vision, image processing and an extreme learning machine self-encoding convolutional neural network.
In the detection method, a high-definition camera shoots plate images at a fixed position on the production line; the captured images are numbered and subjected to first-derivative sharpening.
After the plate pictures are obtained, first-derivative sharpening is applied to the image; the gradient of the image f at coordinates (x, y) is defined as the two-dimensional vector

$$\nabla f = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$$

This vector points in the direction of the maximum rate of change of f at (x, y).
To facilitate pixel transformation, a gradient image is typically established, i.e. the magnitude of the vector ∇f, denoted M(x, y):

$$M(x, y) = \|\nabla f\| = \sqrt{g_x^2 + g_y^2}$$

where ‖∇f‖ is the norm, representing the value of the rate of change in the gradient-vector direction at (x, y); M(x, y) has the same size as the original image, so the image is not distorted.
To simplify the computation and speed up processing, an absolute-value operation is often used instead of the square-root-of-squares operation:

$$M(x, y) \approx |g_x| + |g_y|$$
referring to fig. 2-4, the extreme learning machine-self-coding convolutional neural network is a main part of the present invention, and mainly comprises an extreme learning machine-self-coder, a full connection layer and a fuzzy support vector machine:
the extreme learning machine-automatic encoder is composed of a convolution layer and a pooling layer, the number of neurons of an input layer and the output layer of the extreme learning machine-automatic encoder is the same, a network layer is a forward propagation loop-free structure, the encoding process of data is from the input layer to a hidden layer, the decoding process of data is from the hidden layer to the output layer, and the operation process meets the following requirements:
Figure BDA0003440585480000053
where x is the input, σaσ is the activation function, Wa、WsIs a weight, ba、bsTo bias, Rv、RuInput set and output set, respectively, are convolution operations.
The extreme learning machine is combined into the autoencoder; the two share the same structure, and the hidden-layer input weights and bias must be orthogonalized, i.e.

$$W^T W = I, \qquad b^T b = I$$

The relationship between the hidden-layer output H of the combined extreme learning machine autoencoder and the reconstructed samples X̂ is

$$\hat{X} = H\beta$$

The solution of the output weight β of the extreme learning machine autoencoder depends on the number of network nodes. When the number of input nodes N differs from the number of hidden nodes L, β is computed as

$$\beta = \left(\frac{I}{C} + H^T H\right)^{-1} H^T X$$

When the node numbers are equal, β is

$$\beta = H^{-1} X$$
Compared with a traditional autoencoder, the extreme learning machine autoencoder needs no iterative fine-tuning by gradient descent, which greatly shortens the training time. The data are activated with the activation function σ(x) = max{0, x} both from the input layer to the hidden layer and from the hidden layer to the output layer.
The fully connected layer comprises a weight vector and an activation function; the weight vector determines the proportions of the multi-channel weighted feature residuals. To guarantee the training speed of the deep neural network, the traditional activation functions f(x) = tanh(x) and f(x) = 1/(1 + e^{-x}) are replaced with f(x) = max(0, x).
A fuzzy support vector machine is used in place of the classification layer. When the fuzzy support vector machine classifies, the features, the class identifiers and the membership degree of each sample are introduced, and the training set is

$$\{(x_1, y_1, \mu_1(x_1)), (x_2, y_2, \mu_2(x_2)), \ldots, (x_n, y_n, \mu_n(x_n))\}$$

Each training feature is denoted x_i ∈ R^n, the class identifier is y_i ∈ {-1, 1}, ξ_i is the classification error term of the support vector machine objective function, and μ(x_i)ξ_i is the weighted error term. The objective function of the fuzzy support vector machine is

$$\min_{\omega, b, \xi}\ \frac{1}{2}\|\omega\|^2 + C\sum_{i=1}^{n}\mu(x_i)\xi_i$$
$$\text{s.t.}\quad y_i(\omega \cdot x_i + b) - 1 + \xi_i \ge 0, \qquad \xi_i \ge 0,\ i = 1, 2, \ldots, n$$

where ω is the weight vector of the separating hyperplane and C is the penalty factor; the corresponding discriminant function is:

$$f(x) = \operatorname{sgn}(\omega \cdot x + b)$$

The membership function is computed from the distance of a sample to the class center:

$$\mu(x_i) = 1 - \frac{\|\bar{x} - x_i\|}{r + \varepsilon}$$

where x̄ is the class center, x_i is the sample, r is the radius of the class, and ε is a preset, very small positive constant.
In this embodiment, the fuzzy support vector machine selects three membership degrees to decide the classification and divides the pictures into 3 different groups, obtaining several groups of pictures; the 3 classes are: defect-free images, indeterminate images, and defective images.
The above embodiments are merely illustrative of the technical concepts and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (6)

1. A battery plate defect detection method based on machine vision, characterized by comprising the following steps:
Step 1: shoot the plates on the production line, process the captured pictures, extract picture features through first-derivative sharpening to obtain feature-extraction pictures, and number the plates;
Step 2: divide the feature-extraction pictures into several groups, feed them through multiple channels into the extreme learning machine autoencoder for encoding, and output the encoded pictures;
Step 3: feed the multi-channel encoded pictures into a fully connected layer, activate them there with an activation function, gather them into an image set, and feed the image set into a fuzzy support vector machine;
Step 4: classify with the fuzzy support vector machine, decide the classification according to several membership degrees, and divide the pictures into several different classes to obtain several groups of pictures;
Step 5: upload the groups of pictures to a master picture library and retain them, forming an extreme learning machine self-encoding convolutional neural network, which is applied to check training correctness.
2. The machine-vision-based battery plate defect detection method of claim 1, wherein the first-derivative sharpening of the pictures in step 1 operates as follows:
Step 1.1: the gradient of the image f at coordinates (x, y) is defined as the two-dimensional vector

$$\nabla f = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$$

This vector has the important geometric property that it points in the direction of the maximum rate of change of f at (x, y);
Step 1.2: a gradient image is established from the magnitude of the vector ∇f, denoted M(x, y):

$$M(x, y) = \|\nabla f\| = \sqrt{g_x^2 + g_y^2}$$

where ‖∇f‖ is the norm, representing the value of the rate of change in the gradient-vector direction at (x, y); M(x, y) has the same size as the original image;
Step 1.3: the square-root-of-squares operation is replaced with an absolute-value operation:

$$M(x, y) \approx |g_x| + |g_y|$$
3. The machine-vision-based battery plate defect detection method of claim 1, wherein the structure of the extreme learning machine autoencoder in step 2 is as follows:
The extreme learning machine autoencoder is composed of convolution layers and pooling layers and has an input layer, a hidden layer and an output layer; the input and output layers have the same number of neurons, and the network is a feed-forward, loop-free structure. The input-to-hidden layers encode the data and the hidden-to-output layers decode it, so the operation of the extreme learning machine autoencoder satisfies

$$h = \sigma_a(W_a * x + b_a), \qquad \hat{x} = \sigma_s(W_s * h + b_s)$$

where x ∈ R_v is the input and x̂ ∈ R_u the output, σ_a and σ_s are activation functions, W_a and W_s are weights, b_a and b_s are biases, R_v and R_u are the input and output sets, and * denotes the convolution operation;
The extreme learning machine is combined into the autoencoder; the two share the same structure, and the hidden-layer input weights and bias must be orthogonalized, i.e.:

$$W^T W = I, \qquad b^T b = I$$

The relationship between the hidden-layer output H of the combined extreme learning machine autoencoder and the reconstructed samples X̂ is:

$$\hat{X} = H\beta$$

The solution of the output weight β of the extreme learning machine autoencoder depends on the number of network nodes. When the number of input nodes N differs from the number of hidden nodes L, β is computed as

$$\beta = \left(\frac{I}{C} + H^T H\right)^{-1} H^T X$$

When the node numbers are equal, β is computed as β = H⁻¹X.
4. The machine-vision-based battery plate defect detection method of claim 3, wherein the fully connected layer in step 3 comprises a weight vector, which sets the proportions of the multi-channel weighted feature residuals, and an activation function f(x) = max(0, x).
5. The machine-vision-based battery plate defect detection method of claim 1, wherein the fuzzy support vector machine in step 4 operates as follows:
Step 4.1: introduce the features, the class identifiers and the membership degree of each sample, and let the training set be:

$$\{(x_1, y_1, \mu_1(x_1)), (x_2, y_2, \mu_2(x_2)), \ldots, (x_n, y_n, \mu_n(x_n))\}$$

where each training feature is denoted x_i ∈ R^n, the class identifier is y_i ∈ {-1, 1}, ξ_i is the classification error term of the support vector machine objective function, and μ(x_i)ξ_i is the weighted error term;
Step 4.2: the objective function of the fuzzy support vector machine is determined as:

$$\min_{\omega, b, \xi}\ \frac{1}{2}\|\omega\|^2 + C\sum_{i=1}^{n}\mu(x_i)\xi_i$$
$$\text{s.t.}\quad y_i(\omega \cdot x_i + b) - 1 + \xi_i \ge 0, \qquad \xi_i \ge 0,\ i = 1, 2, \ldots, n$$

where ω is the weight vector of the separating hyperplane and C is the penalty factor; the corresponding discriminant function is:

$$f(x) = \operatorname{sgn}(\omega \cdot x + b)$$

Step 4.3: the membership function is computed from the distance of a sample to the class center:

$$\mu(x_i) = 1 - \frac{\|\bar{x} - x_i\|}{r + \varepsilon}$$

where x̄ is the class center, x_i is the sample, r is the radius of the class, and ε is a preset, very small positive constant.
6. The machine-vision-based battery plate defect detection method of claim 5, wherein the fuzzy support vector machine selects three membership degrees to decide the classification and divides the pictures into 3 different classes, obtaining several groups of pictures; the 3 classes are: defect-free images, indeterminate images, and defective images.
CN202111632245.2A 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision Active CN114418963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111632245.2A CN114418963B (en) 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111632245.2A CN114418963B (en) 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision

Publications (2)

Publication Number Publication Date
CN114418963A true CN114418963A (en) 2022-04-29
CN114418963B CN114418963B (en) 2023-12-05

Family

ID=81269202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111632245.2A Active CN114418963B (en) 2021-12-28 2021-12-28 Battery plate defect detection method based on machine vision

Country Status (1)

Country Link
CN (1) CN114418963B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116125803A (en) * 2022-12-28 2023-05-16 淮阴工学院 Inverter backstepping fuzzy neural network control strategy based on extreme learning machine

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598861A (en) * 2020-05-13 2020-08-28 河北工业大学 Improved Faster R-CNN model-based non-uniform texture small defect detection method
KR20200103150A (en) * 2019-02-11 2020-09-02 (주) 인텍플러스 Visual inspection system
CN112270659A (en) * 2020-08-31 2021-01-26 中国科学院合肥物质科学研究院 Rapid detection method and system for surface defects of pole piece of power battery
US20210209739A1 (en) * 2018-08-27 2021-07-08 Beijing Baidu Netcom Science And Technology Co., Ltd. Battery detection method and device
CN113592845A (en) * 2021-08-10 2021-11-02 深圳市华汉伟业科技有限公司 Defect detection method and device for battery coating and storage medium
CN113658176A (en) * 2021-09-07 2021-11-16 重庆科技学院 Ceramic tile surface defect detection method based on interactive attention and convolutional neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHI YUEN WONG ET AL.: "Technical data-driven tool condition monitoring challenges for CNC milling: a review", The International Journal of Advanced Manufacturing Technology, pages 4837-4857 *
WANG LU (王露): "Machine-vision-based lithium battery electrode plate defect detection and classification system", Wanfang Dissertations, pages 1-64 *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant