CN117372812A - Intelligent identification method for battery CT image pole piece alignment degree based on network learning - Google Patents

Intelligent identification method for battery CT image pole piece alignment degree based on network learning

Info

Publication number
CN117372812A
CN117372812A (application CN202311401396.6A)
Authority
CN
China
Prior art keywords
image
convolution
module
attention
battery
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311401396.6A
Other languages
Chinese (zh)
Inventor
夏迪梦 (Xia Dimeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weituo Precision Technology Co.,Ltd.
Original Assignee
Shenzhen Weituo Navigation Technology Partnership Enterprise LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weituo Navigation Technology Partnership Enterprise LP filed Critical Shenzhen Weituo Navigation Technology Partnership Enterprise LP
Priority to CN202311401396.6A priority Critical patent/CN117372812A/en
Publication of CN117372812A publication Critical patent/CN117372812A/en
Pending legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent recognition method for the alignment of battery CT image pole pieces based on network learning, which comprises the following steps: acquiring a CT image of a battery to be identified, the battery to be identified comprising anode pole pieces and cathode pole pieces arranged alternately in sequence; and inputting the CT image into a trained segmentation network model to obtain a segmented image in which bar patterns exist. Data such as the adjacent pole difference of the battery pole pieces can then be obtained from the segmented image. The trained segmentation network model comprises an encoder and a decoder; the encoder comprises a downsampling convolution module and a channel-spatial attention module, and the decoder comprises an up-sampling convolution module and an output module. The method combines the channel-spatial attention module with the encoder-decoder model; the attention module adds only a small amount of computation and few parameters while greatly improving the segmentation performance of the model, so that a CT image is segmented to obtain a segmented image and the pole piece alignment is obtained from the bar patterns in the segmented image.

Description

Intelligent identification method for battery CT image pole piece alignment degree based on network learning
Technical Field
The invention relates to the technical field of CT image recognition of batteries, in particular to an intelligent recognition method for battery CT image pole piece alignment based on network learning.
Background
With the continuous expansion of the new-energy battery market, attention to battery safety has also increased. In mass production, computed tomography (CT) is used to detect the cathodes and anodes of new-energy batteries automatically, accurately and continuously. After a CT image is obtained, it is processed to detect the cathodes and anodes of the battery. The CT image processing methods fall into two types:
the first type is based on traditional image processing algorithms, including segmentation according to gray threshold values, edge detection extraction and the like. Such algorithms rely heavily on image quality and manual parameter tuning in advance. When the image quality is not good enough and parameter adjustment parameters are not selected properly, the algorithm is invalid, so that the results of misjudgment and missed judgment are caused, and the precision is low.
The second type is image processing algorithms based on network learning. For example, the patent document with publication number CN116205851A adopts a deep learning model to detect the positive and negative electrodes; it requires a large amount of data for training and test sets, and the algorithm depends on the accuracy of the manually labeled images and on the robustness and generalization ability of the network, so its efficiency is low.
In short, it is difficult for existing CT-image-based cathode and anode detection to achieve both accuracy and efficiency.
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
In view of the defects of the prior art, the invention provides an intelligent identification method for battery CT image pole piece alignment based on network learning, aiming to solve the prior-art problem that accuracy and efficiency are difficult to achieve simultaneously in CT-image-based cathode and anode detection.
The technical scheme adopted for solving the technical problems is as follows:
an intelligent recognition method for battery CT image pole piece alignment based on network learning, comprising the following steps:
acquiring a CT image of a battery to be identified; wherein the battery to be identified includes: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
The intelligent identification method for battery CT image pole piece alignment based on network learning, wherein the trained segmentation network model is obtained through the following training steps:
acquiring a CT image of a battery, and dividing the CT image based on the vertexes of a cathode pole piece and the corresponding anode pole piece in the CT image to obtain a label image;
simultaneously inputting the CT image into a segmentation network model to obtain a predicted image;
and comparing the predicted image with the label image, and updating parameters of the segmentation network model to obtain a trained segmentation network model.
According to the intelligent recognition method for the battery CT image pole piece alignment based on network learning, when the CT image is a positive and negative balanced sample with balanced target and background, the loss function of the segmentation network model is:
L = L_BCE = -(1/N) · Σ_{i=1..N} [ y_i · log(σ(ŷ_i)) + (1 - y_i) · log(1 - σ(ŷ_i)) ]
When the CT image is a positive and negative unbalanced sample, the loss function of the segmentation network model is:
L = L_BCE + L_Dice, where L_Dice = 1 - ( 2 · Σ_{i=1..N} y_i · σ(ŷ_i) ) / ( Σ_{i=1..N} y_i + Σ_{i=1..N} σ(ŷ_i) )
wherein L represents the loss function, L_BCE represents the binary cross-entropy loss function, L_Dice represents the set-disparity (Dice-style) loss function, N represents the number of CT images, y_i represents the i-th label image, log represents the logarithmic function, σ represents the Sigmoid function, and ŷ_i represents the i-th predicted image.
The intelligent identification method for battery CT image pole piece alignment based on network learning, wherein there are four downsampling convolution modules, each comprising: two first convolution layers and a pooling layer; there are four channel-spatial attention modules; and there are four up-sampling convolution modules, each comprising: two first convolution layers and one deconvolution layer;
inputting the CT image into a segmentation network model to obtain a predicted image, wherein the method comprises the following steps:
inputting the CT image into a first downsampling convolution module to obtain a first convolution image;
inputting the first convolution image into a first channel space attention module to obtain a first attention image;
inputting the first attention image into a second downsampling convolution module to obtain a second convolution image;
inputting the second convolution image into a second channel space attention module to obtain a second attention image;
inputting the second attention image into a third downsampling convolution module to obtain a third convolution image;
inputting the third convolution image into a third channel space attention module to obtain a third attention image;
inputting the third attention image into a fourth downsampling convolution module to obtain a fourth convolution image;
inputting the fourth convolution image into a fourth channel space attention module to obtain a fourth attention image;
inputting the fourth attention image into a first up-sampling convolution module to obtain a fifth convolution image;
inputting the fifth convolution image into a second up-sampling convolution module to obtain a sixth convolution image;
inputting the sixth convolution image into a third up-sampling convolution module to obtain a seventh convolution image;
inputting the seventh convolution image into a fourth up-sampling convolution module to obtain an eighth convolution image;
and inputting the eighth convolution image into an output module to obtain a predicted image.
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on network learning, wherein the channel-spatial attention module comprises: a channel attention layer and a spatial attention layer;
inputting the first convolution image into a first channel spatial attention module to obtain a first attention image, including:
inputting the first convolution image into a channel attention layer to obtain a channel attention image;
determining a channel image from the first convolution image and the channel attention image;
inputting the channel image into a spatial attention layer to obtain a spatial attention image;
and obtaining a first attention image according to the channel image and the spatial attention image.
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on network learning, wherein the output module comprises: two first convolution layers and one second convolution layer; the first convolution layer is a 3×3 convolution operation and the second convolution layer is a 1×1 convolution operation.
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on the network learning further comprises the following steps:
extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image;
determining adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns;
determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns;
the cathode pole difference of the bar patterns is determined for the locations of the cathode poles in all bar patterns.
An intelligent recognition system for battery CT image pole piece alignment based on network learning, comprising:
the acquisition module is used for acquiring CT images of the battery to be identified; wherein, wait to discern the battery includes: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
the segmentation module is used for inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
The intelligent recognition system for the alignment degree of the battery CT image pole piece based on network learning comprises the following components:
the post-processing module is used for extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image; determining adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns; determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns; the cathode pole difference of the bar patterns is determined for the locations of the cathode poles in all bar patterns.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the intelligent recognition method for battery CT image pole piece alignment based on network learning as described in any one of the above.
A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the network learning based battery CT image pole piece alignment intelligent identification method as described in any of the above.
Beneficial effects: the channel-spatial attention module is combined with the U-net model; it adds only a small amount of computation and few parameters while greatly improving the segmentation performance of the U-net model, so that a CT image is segmented to obtain a segmented image and the pole piece alignment is obtained from the bar patterns in the segmented image. The segmentation result (a binarized image) is then analyzed automatically to obtain the pole piece alignment analysis result.
Drawings
Fig. 1 is a flowchart of a battery CT image pole piece alignment intelligent identification method based on network learning in an embodiment of the invention.
Fig. 2 is a CT image of a battery in an embodiment of the invention.
Fig. 3 is an image of an anode tab label in a CT image of a battery in an embodiment of the invention.
Fig. 4 is an image of a cathode tab label in a CT image of a battery in an embodiment of the invention.
Fig. 5 is a schematic representation of labeling a CT image of a battery in an embodiment of the invention.
Fig. 6 is a label image in an embodiment of the invention.
FIG. 7 is a schematic diagram of a split network model in an embodiment of the invention.
Fig. 8 is a segmented image in an embodiment of the invention.
Fig. 9 is an image formed by superimposing a CT image and a label image in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear and clear, the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1-9, the present invention provides some embodiments of a battery CT image pole piece alignment intelligent recognition method based on network learning.
As shown in fig. 1, the intelligent identification method for the battery CT image pole piece alignment degree based on the network learning according to the embodiment of the present invention includes the following steps:
step S100, acquiring a CT image of a battery to be identified; wherein, wait to discern the battery includes: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence.
Step S200, inputting the CT image into a trained segmentation network model to obtain a segmented image; wherein a plurality of bar patterns exist in the segmented image, and the vertex of an anode pole piece and the vertex of the corresponding cathode pole piece are respectively located at the two vertices on the same side of a bar pattern; the trained segmentation network model comprises an encoder and a decoder, the encoder comprising a downsampling convolution module and a channel-spatial attention module (CBAM, Convolutional Block Attention Module), and the decoder comprising an up-sampling convolution module and an output module.
Specifically, the battery to be identified is an aluminum-shell battery cell containing a plurality of anode pole pieces and a plurality of cathode pole pieces. The cross-section of each pole piece (anode or cathode) is rectangular; the anode pole pieces have the same length and the same width as the cathode pole pieces, although the pole pieces may deform under pressure into bent or curved structures. The misalignment between two adjacent anode pole pieces is small, and the misalignment between two adjacent cathode pole pieces is small, but the misalignment (adjacent range) between an adjacent anode pole piece and cathode pole piece can be larger and poses a risk during charging and discharging; batteries therefore need to be identified, and those with a large misalignment between adjacent anode and cathode pole pieces are selected out as defective. The number of anode pole pieces X and the number of cathode pole pieces Y may be equal, or may differ by one (i.e., X - Y = 1 or Y - X = 1). As shown in fig. 2, the pole pieces whose edges extend outward are denoted anode pole pieces and those whose edges do not extend are denoted cathode pole pieces; here the number of anode pole pieces is 52 and the number of cathode pole pieces is 51.
The CT image of the battery to be identified is acquired by an online battery-inspection CT device; the CT image is then input into the trained segmentation network model to obtain a segmented image, and the alignment of the battery to be identified is determined from the segmented image. The segmented image contains a plurality of bar patterns segmented out of the CT image; the bar patterns are elongated and approximately rectangular. The number of bar patterns Z = min(X, Y); as shown in fig. 6, the number of bar patterns is 51, and each bar pattern is located between two adjacent anode pole pieces, that is, each bar pattern corresponds to the extension of a cathode pole piece, so the pole piece alignment of the battery can be determined from the bar patterns.
The segmented image may be a binarized image, for example with the pixels corresponding to a bar pattern set to 1 and all other pixels in the segmented image set to 0. The width of a bar pattern may be the width of a pole piece. A short sketch of turning the raw network output into such a binarized image is given below.
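For illustration only, the following sketch thresholds the raw output of the segmentation network for one region-of-interest slice into a binarized segmented image. The Sigmoid activation follows the description; the 0.5 threshold and the tensor shape are assumptions of this sketch.

```python
import torch

# raw output (logits) of the segmentation network for one 522 x 819 region of interest
logits = torch.randn(1, 1, 522, 819)
prob = torch.sigmoid(logits)          # per-pixel probability of belonging to a bar pattern
segmented = (prob > 0.5).float()      # binarized segmented image: bar-pattern pixels 1, others 0
```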
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on network learning further comprises the following steps:
and step S300, extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image.
Specifically, a condition check is performed pixel by pixel on the binarized image obtained after segmentation, as follows. For a pixel position (i, j) in image F:
if F (i, j) < F (i, j+1) and F (i-1, j) < F (i-1, j+1), then the pixel (i, j) is a cathode pole;
if F (i, j) > F (i, j+1) and F (i-1, j) > F (i-1, j+1), then pixel (i, j) is an anode pole;
At this time the cathode pole sequence and the anode pole sequence contain repeated detections of the same point; among the points sharing the same ordinate, the middle point is selected to represent the coordinate of the cathode or anode pole. A sketch of this extraction step is given below.
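A hedged NumPy sketch of this pixel-wise extraction step follows. The function names are illustrative, and grouping the repeated detections by their shared column coordinate before taking the middle point is one possible reading of the de-duplication described above, not a detail given in the text.

```python
import numpy as np

def extract_poles(F):
    """F: 2-D binarized segmented image (bar-pattern pixels = 1). Returns cathode/anode pole points."""
    cathode, anode = [], []
    rows, cols = F.shape
    for i in range(1, rows):
        for j in range(cols - 1):
            if F[i, j] < F[i, j + 1] and F[i - 1, j] < F[i - 1, j + 1]:
                cathode.append((i, j))          # pixel (i, j) is a cathode pole (0 -> 1 edge)
            elif F[i, j] > F[i, j + 1] and F[i - 1, j] > F[i - 1, j + 1]:
                anode.append((i, j))            # pixel (i, j) is an anode pole (1 -> 0 edge)
    return dedup_middle(cathode), dedup_middle(anode)

def dedup_middle(points):
    """Among repeated detections sharing the same ordinate (here: column), keep the middle point."""
    groups = {}
    for i, j in points:
        groups.setdefault(j, []).append((i, j))
    return [sorted(g)[len(g) // 2] for g in groups.values()]

if __name__ == "__main__":
    demo = np.zeros((8, 20), dtype=np.uint8)
    demo[3:5, 4:15] = 1                         # one toy bar pattern
    print(extract_poles(demo))                  # cathode near column 3, anode near column 14
```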
and step S400, determining adjacent polar differences of the strip-shaped patterns according to the positions of the cathode poles and the anode poles of the strip-shaped patterns.
Specifically, after screening, the cathode poles and the anode poles are sorted, and the difference between the horizontal coordinates of each paired cathode pole and anode pole is computed; this is the adjacent pole difference.
and S500, determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns.
Specifically, the points in the anode pole sequence are ordered according to the abscissa, and the difference between the minimum and maximum abscissas is the anode pole difference.
Step S600, determining the cathode polar difference of all the bar patterns according to the positions of the cathode poles in the bar patterns.
Specifically, the points in the cathode pole sequence are ordered by abscissa, and the difference between the minimum and maximum abscissas is the cathode pole difference.
specifically, the alignment degree of the cells to be identified may employ at least one of adjacent polar differences, cathode polar differences, and anode polar differences. As shown in fig. 5, the adjacent rangeRefers to the distance between the vertexes of the adjacent anode pole pieces and the vertexes of the cathode pole pieces, and the strip patterns are similarThe rectangular structure is similar to a rectangular structure, the distance between the two long-side vertexes of the strip pattern is taken as the adjacent extremely poor, the long-side vertexes are only two vertexes on the long side of the strip pattern, the long side of the strip pattern is two, and the selection of which long side depends on the label image in the training of the segmentation network model, for example, when the CT image (shown in figure 2) is marked to obtain a label image, the adjacent anode pole piece and the cathode pole piece form a pole piece group, and each pole piece group forms one strip pattern. As shown in fig. 3 and 5, the top right corner of each anode pole piece (i.e., the anode pole piece faces the top of the corresponding cathode pole piece, where the anode pole piece and the cathode pole piece are pole pieces in the same pole piece group) is marked, as shown in fig. 4 and 5, the top right corner of each cathode pole piece (i.e., the cathode pole piece faces the top of the corresponding anode pole piece, where the anode pole piece and the cathode pole piece are pole pieces in the same pole piece group) is marked, and a bar pattern (such as a gray part in fig. 5) in the label image is formed based on the two top corners and the anode pole piece, so as to obtain the label image (such as shown in fig. 6), and fig. 9 is an image formed by overlapping the label image and the CT image. In the labeling process, when two labeled vertexes are positioned at the long edge of the upper side of the bar pattern in the label image, and the adjacent range is determined, the distance between the two vertexes of the upper side of the bar pattern in the segmented image is also taken as the adjacent range
As shown in FIG. 5, the anode electrode is poorMeans the maximum distance of the top points of all anode pole pieces along the length direction of the pole pieces, the cathode pole difference +.>Refers to the maximum distance of the top points of all cathode pole pieces along the length direction of the pole pieces. After labeling the vertexes of the polar plates, two rows of vertexes can be obtained, wherein the vertex at the left side is a cathode pole, and the vertex at the right side is an anode pole. The cathodes are obtained from the positions of all the cathode poles, i.e. the vertices to the left of the bar patternVery bad->The method comprises the steps of carrying out a first treatment on the surface of the The anode difference is obtained from the positions of all anode poles (i.e., the apexes on the right side of the bar pattern)>
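Building on the pole coordinates extracted above (for example by the `extract_poles` sketch), the snippet below computes the three alignment measures. Pairing each cathode pole with the anode pole of the same bar pattern by sort order is an assumption of this sketch.

```python
def pole_differences(cathode_poles, anode_poles):
    """Each pole is an (i, j) pixel coordinate; j is the abscissa along the pole-piece length."""
    cathode = sorted(cathode_poles)                    # pair the k-th cathode with the k-th anode
    anode = sorted(anode_poles)
    # adjacent pole difference: abscissa difference of each paired cathode/anode pole
    adjacent = [abs(a[1] - c[1]) for c, a in zip(cathode, anode)]
    # anode / cathode pole difference: spread of the abscissas over all bar patterns
    anode_cols = [j for _, j in anode]
    cathode_cols = [j for _, j in cathode]
    return adjacent, max(anode_cols) - min(anode_cols), max(cathode_cols) - min(cathode_cols)

# example with three bar patterns
adj, d_anode, d_cathode = pole_differences([(10, 40), (30, 42), (50, 39)],
                                           [(10, 300), (30, 305), (50, 298)])
print(adj, d_anode, d_cathode)                         # [260, 263, 259] 7 3
```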
Based on the trained segmentation network model, the CT image (shown in fig. 2) can be input into the trained segmentation network model and then the segmentation image (shown in fig. 8) is output, and the segmentation image (shown in fig. 8) is close to the labeling image (shown in fig. 6). As shown in fig. 5 and 8, the positions of the cathode and anode points can be determined from the segmented image, as shown in fig. 3 and 4. After the coordinates were obtained, the adjacent polar difference, the cathode polar difference and the anode polar difference were calculated, and specific data are shown in table 1 below.
TABLE 1 positions of anode and cathode poles, adjacent pole difference, anode pole difference, cathode pole difference
As shown in fig. 7, the trained segmentation network model adopts a U-net network structure, which specifically includes an encoder and a decoder. The encoder includes a plurality of downsampling convolution modules and a plurality of channel-spatial attention modules, where each downsampling convolution module is followed by a corresponding channel-spatial attention module. The decoder includes a plurality of up-sampling convolution modules and an output module, where the number of up-sampling convolution modules equals the number of downsampling convolution modules. The first downsampling convolution module forms a skip connection with the output module, the second downsampling convolution module forms a skip connection with the last up-sampling convolution module, and so on, until the last downsampling convolution module forms a skip connection with the second up-sampling convolution module. A sketch of this structure is given below.
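A minimal PyTorch sketch of this encoder/decoder structure follows: four down-sampling convolution modules, each followed by a channel-spatial attention module, four up-sampling convolution modules with skip connections, and an output module. The channel widths, the batch-normalization layers, the exact placement of the pooling and deconvolution layers inside the modules, and the compact attention gate are assumptions of this sketch; the description fixes only the module counts and the 3×3/1×1 kernel sizes.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    """Two 3x3 convolution layers (the 'two first convolution layers' of each module)."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class CBAM(nn.Module):
    """Compact channel + spatial attention gate (channel-spatial attention module)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP W1(W0(.))
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # channel attention: Sigmoid(MLP(avg-pool) + MLP(max-pool)), then rescale the input
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca                                      # "channel image"
        # spatial attention: Sigmoid(7x7 conv over [avg; max] taken along the channel axis)
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                                   # "attention image"

class UNetCBAM(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        c = [64, 128, 256, 512]                         # assumed channel widths
        self.enc = nn.ModuleList([double_conv(in_ch, c[0]), double_conv(c[0], c[1]),
                                  double_conv(c[1], c[2]), double_conv(c[2], c[3])])
        self.att = nn.ModuleList([CBAM(ch) for ch in c])
        self.pool = nn.MaxPool2d(2)                     # pooling layer of each down-sampling module
        # up-sampling modules: one deconvolution followed by two 3x3 convolutions
        self.deconv = nn.ModuleList([nn.ConvTranspose2d(c[3], c[3], 2, 2),
                                     nn.ConvTranspose2d(c[2], c[2], 2, 2),
                                     nn.ConvTranspose2d(c[1], c[1], 2, 2),
                                     nn.ConvTranspose2d(c[0], c[0], 2, 2)])
        self.dec = nn.ModuleList([double_conv(c[3] * 2, c[2]), double_conv(c[2] * 2, c[1]),
                                  double_conv(c[1] * 2, c[0]), double_conv(c[0] * 2, c[0])])
        # output module: two 3x3 convolutions and one 1x1 convolution
        self.out = nn.Sequential(double_conv(c[0], c[0]), nn.Conv2d(c[0], out_ch, 1))

    def forward(self, x):
        skips = []
        for enc, att in zip(self.enc, self.att):
            x = att(enc(x))                             # down-sampling convolutions + attention
            skips.append(x)                             # kept for the skip connection
            x = self.pool(x)
        for deconv, dec, skip in zip(self.deconv, self.dec, reversed(skips)):
            x = dec(torch.cat([deconv(x), skip], dim=1))  # skip connection by concatenation
        return self.out(x)                              # logits; the Sigmoid is applied in the loss

if __name__ == "__main__":
    print(UNetCBAM()(torch.randn(1, 1, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])
```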
The channel-spatial attention module is combined with the U-net model; it adds only a small amount of computation and few parameters while greatly improving the segmentation performance of the U-net model, so that a CT image is segmented to obtain a segmented image and the alignment of the pole pieces is obtained from the bar patterns in the segmented image.
The intelligent recognition method for battery CT image pole piece alignment based on network learning can be applied to a CT production line. A scanning mode and a loading/unloading mode are set for the CT machine, one corner of each battery is scanned, and a region of interest, i.e. a specific layer, is selected. After the region is selected, the image is rotated to the horizontal direction according to its orientation and a specific area is cropped; in this example the region of interest is 522×819 pixels, as shown in fig. 2. After the region of interest is selected, the pole pieces of 10 sample images are segmented manually, as shown in fig. 6.
The purpose of training the segmentation network model is to make it segment the input CT image and output a segmented image that is close to the label image. The trained segmentation network model is obtained through the following training steps:
and step A100, acquiring a CT image of the battery, and manually dividing the CT image based on the vertexes of the cathode pole piece and the corresponding anode pole piece in the CT image to obtain a label image.
And step A200, inputting the CT image into a segmentation network model to obtain a predicted image.
And step A300, comparing the predicted image with the label image, and updating parameters of the segmentation network model to obtain a trained segmentation network model.
Specifically, after a CT image of a battery is acquired, for each pole piece group, determining the vertex of the anode pole piece facing the side where the cathode pole piece is located in the pole piece group and the vertex of the cathode pole piece facing the side where the anode pole piece is located, and dividing the CT image according to the two vertices to obtain a label image. The label image is provided with a plurality of strip patterns, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns.
As shown in fig. 6, the label image may be a binarized image, for example with the pixels corresponding to a bar pattern set to 1 and all other pixels set to 0; the width of a bar pattern may be the width of a pole piece. The segmentation network model is trained by comparing the output image produced by the model with the label image, and CT images are repeatedly input into the model to obtain output images until the training condition is reached, giving the trained segmentation network model. The label images can be obtained by manual segmentation.
Step a100 includes:
and step A110, determining a plurality of pole piece groups, marking the vertexes of the anode pole pieces in the pole piece groups, and marking the vertexes of the cathode pole pieces in the pole piece groups.
And step A120, determining the pole pieces extending from the inner edges of the pole piece groups, and determining the strip-shaped patterns according to the pole pieces extending from the inner edges of the pole piece groups and the positions of the two vertexes so as to obtain the label image.
Specifically, as shown in figs. 3 to 6, the label image contains a plurality of bar patterns of approximately quadrilateral shape. To determine a bar pattern, the two vertices of the two pole pieces in a pole piece group are determined first; these two vertices lie at the two ends of the extending edge. Because the pole pieces may deform under pressure, the extending edge is not perfectly straight and may be curved, so the bar pattern must be determined from both the edge-extending pole piece and the positions of the two vertices. The vertices of the two pole pieces in the pole piece group, namely the vertex of the anode pole piece facing the corresponding cathode pole piece and the vertex of the cathode pole piece facing the anode pole piece, are located at the two vertices on the same side of the bar pattern.
When the CT image is a positive and negative balanced sample, the loss function of the segmentation network model is:
L = L_BCE = -(1/N) · Σ_{i=1..N} [ y_i · log(σ(ŷ_i)) + (1 - y_i) · log(1 - σ(ŷ_i)) ]
When the CT image is a positive and negative unbalanced sample, the loss function of the segmentation network model is:
L = L_BCE + L_Dice, where L_Dice = 1 - ( 2 · Σ_{i=1..N} y_i · σ(ŷ_i) ) / ( Σ_{i=1..N} y_i + Σ_{i=1..N} σ(ŷ_i) )
wherein L represents the loss function, L_BCE represents the binary cross-entropy loss function, L_Dice represents the set-disparity (Dice-style) loss function, N represents the number of CT images, y_i represents the i-th label image, log represents the logarithmic function, σ represents the Sigmoid function, and ŷ_i represents the i-th predicted image.
An imbalance between positive and negative samples causes two problems. First, most of the samples seen during training are simple, easily separated negative samples (samples belonging to the background), so training cannot fully learn the information of the class to be segmented. Second, when the simple negative samples are too numerous they can drown out the effect of the other samples: each of them still produces a loss of some magnitude, and because there are so many of them they dominate the total loss and hence the gradient update direction, masking the important information. Therefore, different loss functions are used to train the segmentation network model, as sketched below.
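A hedged PyTorch sketch of the two cases follows: plain binary cross-entropy when the samples are balanced, and binary cross-entropy combined with a Dice-style set-disparity term when they are not. The equal weighting of the two terms and the smoothing constant are assumptions of this sketch.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()                      # applies the Sigmoid internally

def dice_loss(logits, target, eps=1e-6):
    """1 - Dice coefficient, computed on Sigmoid probabilities."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def segmentation_loss(logits, target, balanced=True):
    """BCE for balanced positive/negative samples, BCE + Dice otherwise."""
    loss = bce(logits, target)
    if not balanced:
        loss = loss + dice_loss(logits, target)
    return loss

if __name__ == "__main__":
    pred = torch.randn(1, 1, 64, 64)                      # raw network output (logits)
    label = (torch.rand(1, 1, 64, 64) > 0.9).float()      # sparse foreground -> unbalanced
    print(segmentation_loss(pred, label, balanced=False))
```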
During training, some of the network parameters are chosen as follows: number of training epochs over the data set epoch = 100; batch size = 1; the network optimization algorithm is RMSProp with learning rate = 1e-5, weight decay weight_decay = 1e-8, and momentum = 0.999.
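For illustration, a minimal training-loop sketch using these hyper-parameters is shown below. `UNetCBAM` and `segmentation_loss` refer to the sketches above, and the random stand-in data merely fixes the tensor shapes; none of this is taken from the patent beyond the stated optimizer settings.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# toy stand-in data: 10 CT slices with binarized label images
images = torch.randn(10, 1, 256, 256)
labels = (torch.rand(10, 1, 256, 256) > 0.95).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=1, shuffle=True)  # batch size = 1

model = UNetCBAM()                                   # architecture sketch above
optimizer = torch.optim.RMSprop(model.parameters(),  # RMSProp as stated above
                                lr=1e-5, weight_decay=1e-8, momentum=0.999)

model.train()
for epoch in range(100):                             # epoch = 100
    for ct, label in loader:
        optimizer.zero_grad()
        loss = segmentation_loss(model(ct), label, balanced=False)  # loss sketch above
        loss.backward()
        optimizer.step()
```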
There are four downsampling modules, and each downsampling convolution module comprises: two first convolution layers and a pooling layer; there are four channel-spatial attention modules; and each up-sampling convolution module comprises: two first convolution layers and one deconvolution layer.
Step A200 specifically includes:
step A201, inputting the CT image into a first downsampling convolution module to obtain a first convolution image.
Step A202, inputting the first convolution image into a first channel space attention module to obtain a first attention image.
And step A203, inputting the first attention image into a second downsampling convolution module to obtain a second convolution image.
And step A204, inputting the second convolution image into a second channel space attention module to obtain a second attention image.
And step A205, inputting the second attention image into a third downsampling convolution module to obtain a third convolution image.
And step A206, inputting the third convolution image into a third channel space attention module to obtain a third attention image.
And step A207, inputting the third attention image into a fourth downsampling convolution module to obtain a fourth convolution image.
And step A208, inputting the fourth convolution image into a fourth channel space attention module to obtain a fourth attention image.
Step a209, inputting the fourth attention image into a first up-sampling convolution module to obtain a fifth convolution image.
And step A210, inputting the fifth convolution image into a second up-sampling convolution module to obtain a sixth convolution image.
And step A211, inputting the sixth convolution image into a third up-sampling convolution module to obtain a seventh convolution image.
And step A212, inputting the seventh convolution image into a fourth up-sampling convolution module to obtain an eighth convolution image.
And step A213, inputting the eighth convolution image into an output module to obtain a predicted image.
Specifically, the CT image is input into a downsampling convolution module to obtain a convolution image; the convolution image is processed by a channel-spatial attention module to obtain an attention image, which is then fed into the next downsampling convolution module, and so on until the last channel-spatial attention module produces its attention image. The attention image output by the encoder is input into an up-sampling convolution module to obtain a convolution image, which is fed into the next up-sampling convolution module, and so on until the last up-sampling convolution module; its output is then passed to the output module to obtain the predicted image.
The channel spatial attention module includes: a channel attention layer and a spatial attention layer.
Step A202 specifically includes:
step a2021, inputting the first convolution image into a channel attention layer, so as to obtain a channel attention image.
Step a2022, determining a channel image from the first convolution image and the channel attention image.
Step A2023, inputting the channel image into a spatial attention layer to obtain a spatial attention image.
Step a2024, obtaining a first attention image according to the channel image and the spatial attention image.
Specifically, when a convolution image is input into the channel-spatial attention module to produce an attention image, the convolution image is first input into the channel attention layer to obtain a channel attention image; the channel image is obtained from the channel attention image and the convolution image (the two are multiplied to give the channel image); the channel image is input into the spatial attention layer to obtain a spatial attention image; finally, a spatial image is obtained from the channel image and the spatial attention image (the two are multiplied to give the spatial image), and the spatial image is taken as the attention image output by the channel-spatial attention module.
The channel attention is computed as Mc(F) = σ( MLP(AvgPool(F)) + MLP(MaxPool(F)) ) = σ( W1(W0(F_avg^c)) + W1(W0(F_max^c)) ), wherein Mc represents the channel attention layer, σ represents the Sigmoid activation function, F represents the convolution image, AvgPool represents the average pooling layer, MLP represents the multi-layer perceptron, MaxPool represents the maximum pooling layer, W0 and W1 represent the weights of the multi-layer perceptron, F_avg^c represents the average feature map obtained by the channel average pooling layer, and F_max^c represents the maximum feature map obtained by the channel maximum pooling layer.
The spatial attention is computed as Ms(F') = σ( f^{7×7}( [AvgPool(F'); MaxPool(F')] ) ) = σ( f^{7×7}( [F_avg^s; F_max^s] ) ), wherein Ms represents the spatial attention layer, σ represents the Sigmoid activation function, F' represents the channel image, AvgPool represents the average pooling layer, MaxPool represents the maximum pooling layer, f^{7×7} represents a 7×7 convolution operation, F_avg^s represents the average feature map obtained by the spatial average pooling layer, and F_max^s represents the maximum feature map obtained by the spatial maximum pooling layer.
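As a worked example of these two formulas, the snippet below evaluates them step by step with torch.nn.functional, so that each symbol (F_avg^c, F_max^c, W0, W1, f^{7×7}, F_avg^s, F_max^s) corresponds to one line. The channel-reduction ratio of 16, the random weights, and the 7×7 spatial kernel size are conventional CBAM choices assumed here.

```python
import torch
import torch.nn.functional as nnf

feat = torch.randn(1, 64, 32, 32)                      # convolution image F (batch, channels, H, W)
W0 = torch.randn(64 // 16, 64, 1, 1)                   # shared MLP weights, realised as 1x1 convolutions
W1 = torch.randn(64, 64 // 16, 1, 1)

def mlp(x):                                            # W1(W0(x)) with a ReLU in between
    return nnf.conv2d(nnf.relu(nnf.conv2d(x, W0)), W1)

# channel attention: Mc(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F)))
F_avg_c = feat.mean(dim=(2, 3), keepdim=True)          # channel average pooling
F_max_c = feat.amax(dim=(2, 3), keepdim=True)          # channel max pooling
Mc = torch.sigmoid(mlp(F_avg_c) + mlp(F_max_c))        # shape (1, 64, 1, 1)
channel_image = feat * Mc                              # the "channel image"

# spatial attention: Ms(F') = sigmoid(f7x7([AvgPool(F'); MaxPool(F')]))
F_avg_s = channel_image.mean(dim=1, keepdim=True)      # spatial average pooling
F_max_s = channel_image.amax(dim=1, keepdim=True)      # spatial max pooling
f7 = torch.randn(1, 2, 7, 7)                           # the 7x7 convolution kernel f
Ms = torch.sigmoid(nnf.conv2d(torch.cat([F_avg_s, F_max_s], dim=1), f7, padding=3))
attention_image = channel_image * Ms                   # output of the channel-spatial attention module

print(Mc.shape, Ms.shape, attention_image.shape)
```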
Step a204 specifically includes:
and step A2041, inputting the second convolution image into a channel attention layer to obtain a channel attention image.
Step a2042, determining a channel image from the second convolution image and the channel attention image.
And step A2043, inputting the channel image into a spatial attention layer to obtain a spatial attention image.
And step A2044, obtaining a second attention image according to the channel image and the spatial attention image.
Specifically, the processing of step A204, step A206, and step A208 is the same as that of step A202.
The output module includes: two first convolution layers and one second convolution layer; the first convolution layer is a 3×3 convolution operation and the second convolution layer is a 1×1 convolution operation.
The invention further provides an embodiment of a battery CT image pole piece alignment intelligent recognition system based on network learning, which is based on the battery CT image pole piece alignment intelligent recognition method based on network learning.
The invention relates to an intelligent recognition system for battery CT image pole piece alignment degree based on network learning, which comprises the following components:
the acquisition module is used for acquiring CT images of the battery to be identified; wherein, wait to discern the battery includes: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
the segmentation module is used for inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
The invention relates to an intelligent recognition system for battery CT image pole piece alignment based on network learning, which further comprises:
the post-processing module is used for extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image; determining adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns; determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns; the cathode pole difference of the bar patterns is determined for the locations of the cathode poles in all bar patterns.
Based on the intelligent recognition method of the battery CT image pole piece alignment degree based on the network learning in any embodiment, the invention also provides an embodiment of computer equipment:
the computer equipment comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the intelligent recognition method for the battery CT image pole piece alignment degree based on the network learning according to any one of the embodiments when executing the computer program.
The invention further provides an embodiment of a computer readable storage medium based on the intelligent recognition method of the battery CT image pole piece alignment degree based on the network learning.
The computer readable storage medium of the present invention stores a computer program thereon, which when executed by a processor, implements the steps of the intelligent recognition method for battery CT image pole piece alignment based on network learning according to any one of the embodiments.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (10)

1. The intelligent recognition method for the alignment degree of the battery CT image pole piece based on the network learning is characterized by comprising the following steps:
acquiring a CT image of a battery to be identified; wherein the battery to be identified includes: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
2. The intelligent recognition method for the alignment degree of the battery CT image pole pieces based on network learning according to claim 1, wherein the trained segmentation network model is obtained by training the following steps:
acquiring a CT image of a battery, and manually dividing the CT image based on the vertexes of a cathode pole piece and the corresponding anode pole piece in the CT image to obtain a label image;
simultaneously inputting the CT image into a segmentation network model to obtain a predicted image;
and comparing the predicted image with the label image, and updating parameters of the segmentation network model to obtain a trained segmentation network model.
3. The intelligent recognition method for the alignment degree of the battery CT image pole piece based on the network learning according to claim 2, wherein when the CT image is a positive and negative balanced sample of target and background balance, the loss function of the segmentation network model is:
L = L_BCE = -(1/N) · Σ_{i=1..N} [ y_i · log(σ(ŷ_i)) + (1 - y_i) · log(1 - σ(ŷ_i)) ]
when the CT image is a positive and negative unbalanced sample, the loss function of the segmentation network model is:
L = L_BCE + L_Dice, where L_Dice = 1 - ( 2 · Σ_{i=1..N} y_i · σ(ŷ_i) ) / ( Σ_{i=1..N} y_i + Σ_{i=1..N} σ(ŷ_i) )
wherein L represents the loss function, L_BCE represents the binary cross-entropy loss function, L_Dice represents the set-disparity (Dice-style) loss function, N represents the number of CT images, y_i represents the i-th label image, log represents the logarithmic function, σ represents the Sigmoid function, and ŷ_i represents the i-th predicted image.
4. The intelligent recognition method for the alignment degree of battery CT image pole pieces based on network learning according to claim 2, wherein there are four downsampling convolution modules, each comprising: two first convolution layers and a pooling layer; there are four channel-spatial attention modules; and there are four up-sampling convolution modules, each comprising: two first convolution layers and one deconvolution layer;
inputting the CT image into a segmentation network model to obtain a predicted image, wherein the method comprises the following steps:
inputting the CT image into a first downsampling convolution module to obtain a first convolution image;
inputting the first convolution image into a first channel space attention module to obtain a first attention image;
inputting the first attention image into a second downsampling convolution module to obtain a second convolution image;
inputting the second convolution image into a second channel space attention module to obtain a second attention image;
inputting the second attention image into a third downsampling convolution module to obtain a third convolution image;
inputting the third convolution image into a third channel space attention module to obtain a third attention image;
inputting the third attention image into a fourth downsampling convolution module to obtain a fourth convolution image;
inputting the fourth convolution image into a fourth channel space attention module to obtain a fourth attention image;
inputting the fourth attention image into a first up-sampling convolution module to obtain a fifth convolution image;
inputting the fifth convolution image into a second up-sampling convolution module to obtain a sixth convolution image;
inputting the sixth convolution image into a third up-sampling convolution module to obtain a seventh convolution image;
inputting the seventh convolution image into a fourth up-sampling convolution module to obtain an eighth convolution image;
and inputting the eighth convolution image into an output module to obtain a predicted image.
5. The intelligent recognition method for the alignment degree of battery CT image pole pieces based on network learning according to claim 4, wherein the channel space attention module comprises: a channel attention layer and a spatial attention layer;
inputting the first convolution image into a first channel spatial attention module to obtain a first attention image, including:
inputting the first convolution image into a channel attention layer to obtain a channel attention image;
determining a channel image from the first convolution image and the channel attention image;
inputting the channel image into a spatial attention layer to obtain a spatial attention image;
and obtaining a first attention image according to the channel image and the spatial attention image.
6. The intelligent recognition method for the alignment degree of the battery CT image pole pieces based on the network learning according to claim 4, wherein the output module comprises: two first convolution layers and one second convolution layer; the first convolution layer is a 3×3 convolution operation and the second convolution layer is a 1×1 convolution operation.
7. The intelligent recognition method for the alignment degree of the battery CT image pole piece based on the network learning according to any one of claims 1 to 6, wherein the intelligent recognition method for the alignment degree of the battery CT image pole piece based on the network learning further comprises:
extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image;
determining adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns;
determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns;
the cathode pole difference of the bar patterns is determined for the locations of the cathode poles in all bar patterns.
8. An intelligent recognition system for battery CT image pole piece alignment based on network learning is characterized by comprising:
the acquisition module is used for acquiring CT images of the battery to be identified; wherein, wait to discern the battery includes: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
the segmentation module is used for inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
9. The intelligent recognition system for the alignment degree of battery CT image pole pieces based on network learning according to claim 8, comprising:
the post-processing module is used for extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image; determining adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns; determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns; the cathode pole difference of the bar patterns is determined for the locations of the cathode poles in all bar patterns.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the intelligent recognition method for battery CT image pole piece alignment based on network learning as claimed in any one of claims 1 to 7.
CN202311401396.6A 2023-10-26 2023-10-26 Intelligent identification method for battery CT image pole piece alignment degree based on network learning Pending CN117372812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311401396.6A CN117372812A (en) 2023-10-26 2023-10-26 Intelligent identification method for battery CT image pole piece alignment degree based on network learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311401396.6A CN117372812A (en) 2023-10-26 2023-10-26 Intelligent identification method for battery CT image pole piece alignment degree based on network learning

Publications (1)

Publication Number Publication Date
CN117372812A true CN117372812A (en) 2024-01-09

Family

ID=89396238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311401396.6A Pending CN117372812A (en) 2023-10-26 2023-10-26 Intelligent identification method for battery CT image pole piece alignment degree based on network learning

Country Status (1)

Country Link
CN (1) CN117372812A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974632A (en) * 2024-03-28 2024-05-03 大连理工大学 Lithium battery CT cathode-anode alignment detection method based on segmentation large model
CN117974632B (en) * 2024-03-28 2024-06-07 大连理工大学 Lithium battery CT cathode-anode alignment detection method based on segmentation large model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240401

Address after: 518000, 13A, Building 143-146, Dongquan New Village, Hongshan Community, Minzhi Street, Longhua District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Weituo Precision Technology Co.,Ltd.

Country or region after: China

Address before: 413-C095, Nanhai Yikumeng Workshop Building, No.1 Gongye San Road, Shuiwan Community, Nanshan District, Shenzhen City, Guangdong Province, 518000

Applicant before: Shenzhen Weituo Navigation Technology Partnership Enterprise (Limited Partnership)

Country or region before: China