Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an intelligent identification method for the alignment degree of battery CT image pole pieces based on network learning, so as to solve the problem in the prior art that accuracy and efficiency are difficult to reconcile in cathode and anode detection based on CT images.
The technical scheme adopted for solving the technical problems is as follows:
An intelligent recognition method for battery CT image pole piece alignment based on network learning, comprising the following steps:
acquiring a CT image of a battery to be identified; wherein the battery to be identified comprises: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
The battery CT image pole piece alignment intelligent identification method based on network learning is characterized in that the trained segmentation network model is obtained by training the following steps:
acquiring a CT image of a battery, and dividing the CT image based on the vertexes of a cathode pole piece and the corresponding anode pole piece in the CT image to obtain a label image;
inputting the CT image into a segmentation network model to obtain a predicted image;
and comparing the predicted image with the label image, and updating parameters of the segmentation network model to obtain a trained segmentation network model.
According to the intelligent recognition method for the battery CT image pole piece alignment based on network learning, when the CT image is a positive and negative balanced sample with balanced target and background, the loss function of the segmentation network model is as follows:

$$L = L_{bce} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i\log\big(\sigma(p_i)\big) + (1-y_i)\log\big(1-\sigma(p_i)\big)\Big]$$

when the CT image is a positive and negative unbalanced sample, the loss function of the segmentation network model is as follows:

$$L = L_{bce} + L_{dice}$$

$$L_{bce} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i\log\big(\sigma(p_i)\big) + (1-y_i)\log\big(1-\sigma(p_i)\big)\Big]$$

$$L_{dice} = 1 - D$$

$$D = \frac{2\sum_{i=1}^{N} y_i\,\sigma(p_i)}{\sum_{i=1}^{N} y_i + \sum_{i=1}^{N}\sigma(p_i)}$$

wherein $L$ represents the loss function, $L_{bce}$ represents the binary cross-entropy loss function, $L_{dice}$ represents the set-difference (Dice) loss function, $N$ represents the number of CT images, $y_i$ represents the $i$-th label image, $\log$ represents the logarithmic function, $\sigma$ represents the Sigmoid function, and $p_i$ represents the $i$-th predicted image.
According to the battery CT image pole piece alignment intelligent identification method based on network learning, there are four downsampling convolution modules, and each downsampling convolution module comprises: two first convolution layers and a pooling layer; there are four channel space attention modules; there are four up-sampling convolution modules, and each up-sampling convolution module comprises: two first convolution layers and one deconvolution layer;
inputting the CT image into a segmentation network model to obtain a predicted image, wherein the method comprises the following steps:
inputting the CT image into a first downsampling convolution module to obtain a first convolution image;
inputting the first convolution image into a first channel space attention module to obtain a first attention image;
inputting the first attention image into a second downsampling convolution module to obtain a second convolution image;
inputting the second convolution image into a second channel space attention module to obtain a second attention image;
inputting the second attention image into a third downsampling convolution module to obtain a third convolution image;
inputting the third convolution image into a third channel space attention module to obtain a third attention image;
inputting the third attention image into a fourth downsampling convolution module to obtain a fourth convolution image;
inputting the fourth convolution image into a fourth channel space attention module to obtain a fourth attention image;
inputting the fourth attention image into a first up-sampling convolution module to obtain a fifth convolution image;
inputting the fifth convolution image into a second up-sampling convolution module to obtain a sixth convolution image;
inputting the sixth convolution image into a third up-sampling convolution module to obtain a seventh convolution image;
inputting the seventh convolution image into a fourth up-sampling convolution module to obtain an eighth convolution image;
and inputting the eighth convolution image into an output module to obtain a predicted image.
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on network learning, wherein the channel space attention module comprises: a channel attention layer and a spatial attention layer;
inputting the first convolution image into a first channel spatial attention module to obtain a first attention image, including:
inputting the first convolution image into a channel attention layer to obtain a channel attention image;
determining a channel image from the first convolution image and the channel attention image;
inputting the channel image into a spatial attention layer to obtain a spatial attention image;
and obtaining a first attention image according to the channel image and the spatial attention image.
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on network learning, wherein the output module comprises: two first convolution layers and one second convolution layer; the first convolution layer is a 3 × 3 convolution operation and the second convolution layer is a 1 × 1 convolution operation.
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on the network learning comprises the following steps:
extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image;
determining adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns;
determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns;
determining the cathode pole difference of all the strip patterns according to the positions of the cathode poles in the strip patterns.
An intelligent recognition system for battery CT image pole piece alignment based on network learning, comprising:
the acquisition module is used for acquiring a CT image of the battery to be identified; wherein the battery to be identified comprises: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
the segmentation module is used for inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
The intelligent recognition system for the alignment degree of the battery CT image pole piece based on network learning comprises the following components:
the post-processing module is used for extracting the positions of the cathode poles and the positions of the anode poles of the strip patterns in the segmented image; determining the adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns; determining the anode pole difference of all the strip patterns according to the positions of the anode poles in the strip patterns; and determining the cathode pole difference of all the strip patterns according to the positions of the cathode poles in the strip patterns.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the intelligent recognition method for battery CT image pole piece alignment based on network learning as described in any one of the above.
A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the network learning based battery CT image pole piece alignment intelligent identification method as described in any of the above.
The beneficial effects are that: the channel space attention module is combined into the U-net model; the channel space attention module adds only a small amount of computation and parameters while greatly improving the segmentation performance of the U-net model, so that the CT image is segmented to obtain a segmented image and the pole piece alignment degree is obtained from the strip patterns in the segmented image. The segmentation result (a binarized image) is analyzed automatically to obtain the pole piece alignment analysis result.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear and clear, the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1-9, the present invention provides some embodiments of a battery CT image pole piece alignment intelligent recognition method based on network learning.
As shown in fig. 1, the intelligent identification method for the battery CT image pole piece alignment degree based on the network learning according to the embodiment of the present invention includes the following steps:
Step S100, acquiring a CT image of a battery to be identified; wherein the battery to be identified comprises: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence.
Step S200, inputting the CT image into a trained segmentation network model to obtain a segmented image; wherein a plurality of strip patterns exist in the segmented image, and the vertex of each anode pole piece and the vertex of the corresponding cathode pole piece are respectively located at the two vertexes on the same side of a strip pattern; the trained segmentation network model comprises: an encoder and a decoder, the encoder comprising: a downsampling convolution module and a channel space attention module (CBAM, Convolutional Block Attention Module), the decoder comprising: an up-sampling convolution module and an output module.
Specifically, the battery to be identified is an aluminum-shell battery cell in which there are a plurality of anode pole pieces and a plurality of cathode pole pieces. Each pole piece (anode or cathode) is approximately rectangular in cross-section (only approximately, because a pole piece may deform under pressure into a bent or curved structure); the anode pole pieces have the same length and the same width as the cathode pole pieces. The dislocation distance between two adjacent anode pole pieces is small, and the dislocation distance between two adjacent cathode pole pieces is small; however, when the dislocation distance (adjacent polar difference) between an adjacent anode pole piece and cathode pole piece is large, risks exist in the charging and discharging processes. Batteries therefore need to be inspected, and batteries with a large dislocation distance between adjacent anode and cathode pole pieces are rejected as defective. The number of anode pole pieces X and the number of cathode pole pieces Y may be equal, or may differ by one (i.e., X - Y = 1 or Y - X = 1). As shown in fig. 2, the pole pieces whose edges extend further are denoted as anode pole pieces and the pole pieces whose edges do not extend are denoted as cathode pole pieces; here the number of anode pole pieces is 52 and the number of cathode pole pieces is 51.
The CT image of the battery to be identified is acquired by an online battery-detection CT device; the CT image is then input into the trained segmentation network model to obtain a segmented image, and the alignment degree of the battery to be identified is determined from the segmented image. The segmented image is obtained by segmenting a plurality of strip patterns out of the CT image; the strip patterns are elongated and approximately rectangular. The number of strip patterns Z = min(X, Y); as shown in fig. 6, the number of strip patterns is 51. Each strip pattern is located between two adjacent anode pole pieces, that is, each strip pattern corresponds to the extension of a cathode pole piece and coincides with that cathode pole piece, so the alignment degree of the battery pole pieces can be determined from the strip patterns.
The segmented image may be a binarized image; for example, the pixels corresponding to the strip patterns are 1 and the other pixels in the segmented image are 0. The width of a strip pattern may be the width of a pole piece.
The intelligent recognition method for the alignment degree of the battery CT image pole piece based on network learning further comprises the following steps:
and step S300, extracting the positions of the cathode poles and the positions of the anode poles of the bar patterns in the segmented image.
Specifically, a condition judgment is performed pixel by pixel on the binarized image obtained after segmentation. The conditions, for a pixel position (i, j) of image F, are as follows:
if F (i, j) < F (i, j+1) and F (i-1, j) < F (i-1, j+1), then the pixel (i, j) is a cathode pole;
if F (i, j) > F (i, j+1) and F (i-1, j) > F (i-1, j+1), then pixel (i, j) is an anode pole;
at this time, the cathode pole sequence and the anode pole sequence contain repeated points; among the points sharing the same ordinate, the middle point is selected to represent the coordinates of the cathode or anode pole;
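Under the assumption that the segmented image is a NumPy array with strip-pattern pixels equal to 1 and background pixels equal to 0, the pixel-wise conditions above can be sketched as follows (the function name and array conventions are illustrative, not part of the invention):

```python
import numpy as np

def find_poles(F):
    """Scan the binarized segmented image F (strip-pattern pixels = 1,
    background = 0) pixel by pixel and collect cathode / anode pole
    candidates using the transition conditions from the text."""
    cathodes, anodes = [], []
    rows, cols = F.shape
    for i in range(1, rows):            # row i-1 must exist
        for j in range(cols - 1):       # column j+1 must exist
            if F[i, j] < F[i, j + 1] and F[i - 1, j] < F[i - 1, j + 1]:
                cathodes.append((i, j))  # 0 -> 1 transition: cathode pole
            elif F[i, j] > F[i, j + 1] and F[i - 1, j] > F[i - 1, j + 1]:
                anodes.append((i, j))    # 1 -> 0 transition: anode pole
    return cathodes, anodes
```

Among the collected points that share the same ordinate, the middle one would then be kept as the representative pole coordinate, as described above.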
and step S400, determining adjacent polar differences of the strip-shaped patterns according to the positions of the cathode poles and the anode poles of the strip-shaped patterns.
Specifically, after screening, the cathode poles and the anode poles are sorted, and the abscissa difference of each paired cathode pole and anode pole is calculated; this is the adjacent polar difference;
and S500, determining anode pole differences of all strip patterns according to the positions of anode poles in the strip patterns.
Specifically, the points in the anode pole sequence are ordered according to the abscissa, and the difference between the minimum and maximum abscissas is the anode pole difference.
Step S600, determining the cathode polar difference of all the bar patterns according to the positions of the cathode poles in the bar patterns.
Specifically, ordering the points in the cathode pole sequence according to the abscissa, wherein the difference value between the minimum and maximum abscissas is the cathode pole difference;
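Assuming the pole points have already been extracted and paired per strip pattern, the three polar differences described in steps S400 to S600 can be sketched as (the function name is illustrative):

```python
import numpy as np

def alignment_metrics(cathodes, anodes):
    """cathodes / anodes: lists of (row, col) pole coordinates, one per
    strip pattern, paired by index.  Returns the per-pair adjacent polar
    differences and the overall anode / cathode polar differences."""
    c_cols = np.array([j for _, j in cathodes])
    a_cols = np.array([j for _, j in anodes])
    adjacent = np.abs(a_cols - c_cols)               # abscissa gap of each pair
    anode_diff = int(a_cols.max() - a_cols.min())    # spread of anode poles
    cathode_diff = int(c_cols.max() - c_cols.min())  # spread of cathode poles
    return adjacent, anode_diff, cathode_diff
```

Any of the three returned quantities may then be compared against a tolerance to decide whether the battery is a defective product.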
specifically, the alignment degree of the cells to be identified may employ at least one of adjacent polar differences, cathode polar differences, and anode polar differences. As shown in fig. 5, the adjacent rangeRefers to the distance between the vertexes of the adjacent anode pole pieces and the vertexes of the cathode pole pieces, and the strip patterns are similarThe rectangular structure is similar to a rectangular structure, the distance between the two long-side vertexes of the strip pattern is taken as the adjacent extremely poor, the long-side vertexes are only two vertexes on the long side of the strip pattern, the long side of the strip pattern is two, and the selection of which long side depends on the label image in the training of the segmentation network model, for example, when the CT image (shown in figure 2) is marked to obtain a label image, the adjacent anode pole piece and the cathode pole piece form a pole piece group, and each pole piece group forms one strip pattern. As shown in fig. 3 and 5, the top right corner of each anode pole piece (i.e., the anode pole piece faces the top of the corresponding cathode pole piece, where the anode pole piece and the cathode pole piece are pole pieces in the same pole piece group) is marked, as shown in fig. 4 and 5, the top right corner of each cathode pole piece (i.e., the cathode pole piece faces the top of the corresponding anode pole piece, where the anode pole piece and the cathode pole piece are pole pieces in the same pole piece group) is marked, and a bar pattern (such as a gray part in fig. 5) in the label image is formed based on the two top corners and the anode pole piece, so as to obtain the label image (such as shown in fig. 6), and fig. 9 is an image formed by overlapping the label image and the CT image. 
In the labeling process, when the two labeled vertexes are located on the upper long side of a strip pattern in the label image, the distance between the two upper-side vertexes of the strip pattern in the segmented image is likewise taken as the adjacent polar difference when the adjacent polar difference is determined.
As shown in fig. 5, the anode polar difference refers to the maximum distance between the vertexes of all anode pole pieces along the length direction of the pole pieces, and the cathode polar difference refers to the maximum distance between the vertexes of all cathode pole pieces along the length direction of the pole pieces. After the vertexes of the pole pieces are labeled, two columns of vertexes are obtained: the vertexes on the left are cathode poles and the vertexes on the right are anode poles. The cathode polar difference is obtained from the positions of all cathode poles, i.e., the vertexes on the left side of the strip patterns; the anode polar difference is obtained from the positions of all anode poles, i.e., the vertexes on the right side of the strip patterns.
Based on the trained segmentation network model, the CT image (shown in fig. 2) can be input into the trained segmentation network model, which then outputs the segmented image (shown in fig. 8); the segmented image is close to the label image (shown in fig. 6). As shown in fig. 5 and fig. 8, the positions of the cathode and anode poles can be determined from the segmented image, as shown in fig. 3 and fig. 4. After the coordinates are obtained, the adjacent polar difference, the cathode polar difference and the anode polar difference are calculated; specific data are shown in table 1 below.
TABLE 1 positions of anode and cathode poles, adjacent pole difference, anode pole difference, cathode pole difference
As shown in fig. 7, the trained segmentation network model adopts a U-net network structure, which specifically includes an encoder and a decoder. The encoder comprises a plurality of downsampling convolution modules and a plurality of channel space attention modules, each downsampling convolution module being followed by a corresponding channel space attention module. The decoder comprises a plurality of up-sampling convolution modules and an output module, the number of up-sampling convolution modules being the same as the number of downsampling convolution modules. The first downsampling convolution module forms a skip connection with the output module, the second downsampling convolution module forms a skip connection with the last up-sampling convolution module, and so on; the last downsampling convolution module forms a skip connection with the second up-sampling convolution module.
The channel space attention module is combined into the U-net model; it adds only a small amount of computation and parameters while greatly improving the segmentation performance of the U-net model, so that the CT image is segmented to obtain a segmented image and the pole piece alignment degree is obtained from the strip patterns in the segmented image.
The intelligent recognition method of battery CT image pole piece alignment based on network learning can be applied to CT assembly-line operation: a scanning mode and a feeding and discharging mode are set for the CT machine, a corner of the same battery is scanned, and a region of interest, i.e., a specific layer, is selected. After the region is selected, the image is rotated to the horizontal direction according to the image orientation and a specific area is selected; in the present example the size of the region of interest is 522 × 819 pixels, as shown in fig. 2. After the region of interest is selected, the pole pieces of 10 sample images are manually segmented, as shown in fig. 6.
The purpose of training the segmentation network model is to make it segment the input CT image and output a segmented image that is close to the label image. The trained segmentation network model is obtained by training through the following steps:
and step A100, acquiring a CT image of the battery, and manually dividing the CT image based on the vertexes of the cathode pole piece and the corresponding anode pole piece in the CT image to obtain a label image.
And step A200, inputting the CT image into a segmentation network model to obtain a predicted image.
And step A300, comparing the predicted image with the label image, and updating parameters of the segmentation network model to obtain a trained segmentation network model.
Specifically, after a CT image of a battery is acquired, for each pole piece group, determining the vertex of the anode pole piece facing the side where the cathode pole piece is located in the pole piece group and the vertex of the cathode pole piece facing the side where the anode pole piece is located, and dividing the CT image according to the two vertices to obtain a label image. The label image is provided with a plurality of strip patterns, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns.
As shown in fig. 6, the label image may be a binarized image; for example, the pixels corresponding to the strip patterns are 1 and the other pixels in the label image are 0. The width of a strip pattern may be the width of a pole piece. The segmentation network model is trained according to the output image of the segmentation network model and the label image, and the CT image is continuously input into the segmentation network model to obtain an output image until the training condition is reached, so as to obtain the trained segmentation network model. The label image can be obtained by manual segmentation.
Step a100 includes:
and step A110, determining a plurality of pole piece groups, marking the vertexes of the anode pole pieces in the pole piece groups, and marking the vertexes of the cathode pole pieces in the pole piece groups.
And step A120, determining the pole pieces extending from the inner edges of the pole piece groups, and determining the strip-shaped patterns according to the pole pieces extending from the inner edges of the pole piece groups and the positions of the two vertexes so as to obtain the label image.
Specifically, as shown in figs. 3 to 6, there are a plurality of strip patterns in the label image, each approximately quadrilateral. To determine a strip pattern, the two vertexes of the two pole pieces in a pole piece group are first determined; these two vertexes are located at the two ends of the extending edge. Because the pole pieces may deform under pressure, the extending edge is not completely straight and may be curved, so the strip pattern must be determined according to the edge-extending pole piece and the positions of the two vertexes. The vertexes of the two pole pieces in the pole piece group, namely the vertex of the anode pole piece facing the corresponding cathode pole piece and the vertex of the cathode pole piece facing the anode pole piece, are respectively located at the two vertexes on the same side of the strip pattern.
When the CT image is a positive and negative balanced sample, the loss function of the segmentation network model is as follows:

$$L = L_{bce} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i\log\big(\sigma(p_i)\big) + (1-y_i)\log\big(1-\sigma(p_i)\big)\Big]$$

when the CT image is a positive and negative unbalanced sample, the loss function of the segmentation network model is as follows:

$$L = L_{bce} + L_{dice}$$

$$L_{bce} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i\log\big(\sigma(p_i)\big) + (1-y_i)\log\big(1-\sigma(p_i)\big)\Big]$$

$$L_{dice} = 1 - D$$

$$D = \frac{2\sum_{i=1}^{N} y_i\,\sigma(p_i)}{\sum_{i=1}^{N} y_i + \sum_{i=1}^{N}\sigma(p_i)}$$

wherein $L$ represents the loss function, $L_{bce}$ represents the binary cross-entropy loss function, $L_{dice}$ represents the set-difference (Dice) loss function, $N$ represents the number of CT images, $y_i$ represents the $i$-th label image, $\log$ represents the logarithmic function, $\sigma$ represents the Sigmoid function, and $p_i$ represents the $i$-th predicted image.
An imbalance of positive and negative samples causes two problems. First, most of the training samples are simple, easily separated negative samples (samples belonging to the background), so the network cannot fully learn the information of the samples to be classified. Second, when there are too many easy negative samples, they drown out the effect of the other samples: each easy negative sample still produces a loss of some magnitude, and because there are so many of them they contribute most of the total loss, thereby dominating the gradient update direction and masking important information. For these reasons, different loss functions are used to train the segmentation network model.
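As a sketch in NumPy (function names are illustrative), the two loss configurations can be written as follows, taking the set-difference loss to be the Dice loss:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(logits, labels):
    """Binary cross-entropy, averaged over all predictions."""
    p = sigmoid(logits)
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))

def dice_loss(logits, labels, eps=1e-7):
    """Dice (set-overlap) loss, assumed here as the set-difference loss."""
    p = sigmoid(logits)
    overlap = np.sum(labels * p)
    return float(1.0 - 2.0 * overlap / (np.sum(labels) + np.sum(p) + eps))

def seg_loss(logits, labels, balanced=True):
    """BCE alone for balanced samples; BCE + Dice for unbalanced samples."""
    loss = bce_loss(logits, labels)
    if not balanced:
        loss += dice_loss(logits, labels)
    return loss
```

The added Dice term measures the overlap between the predicted strip patterns and the label, which is insensitive to the large number of background pixels.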
During training, some of the network parameters are selected as follows: number of training epochs epoch = 100; batch size batch_size = 1; the network optimization algorithm is RMSProp with learning rate lr = 1e-5, weight decay weight_decay = 1e-8, and momentum momentum = 0.999.
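For a PyTorch implementation (the framework choice is an assumption; the text does not name one), the hyperparameters above can be configured as:

```python
import torch
from torch import nn, optim

# Hyperparameters taken from the text; "model" is assumed to be the
# segmentation network.
EPOCHS = 100
BATCH_SIZE = 1

def make_optimizer(model: nn.Module) -> optim.RMSprop:
    return optim.RMSprop(
        model.parameters(),
        lr=1e-5,            # learning rate
        weight_decay=1e-8,  # weight decay
        momentum=0.999,     # momentum
    )
```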
There are four downsampling convolution modules, and each downsampling convolution module comprises: two first convolution layers and a pooling layer; there are four channel space attention modules; each up-sampling convolution module comprises: two first convolution layers and one deconvolution layer.
Step A200 specifically includes:
step A201, inputting the CT image into a first downsampling convolution module to obtain a first convolution image.
Step A202, inputting the first convolution image into a first channel space attention module to obtain a first attention image.
And step A203, inputting the first attention image into a second downsampling convolution module to obtain a second convolution image.
And step A204, inputting the second convolution image into a second channel space attention module to obtain a second attention image.
And step A205, inputting the second attention image into a third downsampling convolution module to obtain a third convolution image.
And step A206, inputting the third convolution image into a third channel space attention module to obtain a third attention image.
And step A207, inputting the third attention image into a fourth downsampling convolution module to obtain a fourth convolution image.
And step A208, inputting the fourth convolution image into a fourth channel space attention module to obtain a fourth attention image.
Step a209, inputting the fourth attention image into a first up-sampling convolution module to obtain a fifth convolution image.
And step A210, inputting the fifth convolution image into a second up-sampling convolution module to obtain a sixth convolution image.
And step A211, inputting the sixth convolution image into a third up-sampling convolution module to obtain a seventh convolution image.
And step A212, inputting the seventh convolution image into a fourth up-sampling convolution module to obtain an eighth convolution image.
And step A213, inputting the eighth convolution image into an output module to obtain a predicted image.
Specifically, the CT image is input into a downsampling convolution module to obtain a convolution image, the convolution image is processed through a channel space attention module to obtain an attention image, and the next downsampling convolution module is continuously input until the attention image is obtained through the last channel space attention module. The attention image output by the encoder is input into an up-sampling convolution module to obtain a convolution image, and is input into a next up-sampling convolution module continuously until the convolution image is obtained through a last up-sampling convolution module, and then is input into an output module to obtain a prediction image.
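This encoder-decoder flow can be sketched in PyTorch as follows. The channel counts, the placement of pooling relative to the attention blocks, and the use of `nn.Identity()` as a stand-in for the channel space attention module are illustrative assumptions; the exact wiring follows fig. 7.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Sequential):
    """Two 3x3 convolutions (the "first convolution layers" of the text)."""
    def __init__(self, cin, cout):
        super().__init__(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class UpModule(nn.Module):
    """Two 3x3 convolutions followed by a 2x2 deconvolution."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = DoubleConv(cin, cout)
        self.deconv = nn.ConvTranspose2d(cout, cout, 2, stride=2)
    def forward(self, x):
        return self.deconv(self.conv(x))

class SegNet(nn.Module):
    """Four downsampling conv modules, each followed by an attention block,
    four upsampling conv modules with the described skip connections, and
    an output module of two 3x3 convs plus one 1x1 conv."""
    def __init__(self, attention=lambda c: nn.Identity(), base=8):
        super().__init__()
        c1, c2, c3, c4 = base, base * 2, base * 4, base * 8
        self.d1, self.a1 = DoubleConv(1, c1), attention(c1)
        self.d2, self.a2 = DoubleConv(c1, c2), attention(c2)
        self.d3, self.a3 = DoubleConv(c2, c3), attention(c3)
        self.d4, self.a4 = DoubleConv(c3, c4), attention(c4)
        self.pool = nn.MaxPool2d(2)
        self.u1 = UpModule(c4, c4)
        self.u2 = UpModule(c4 + c4, c3)   # skip: 4th down -> 2nd up
        self.u3 = UpModule(c3 + c3, c2)   # skip: 3rd down -> 3rd up
        self.u4 = UpModule(c2 + c2, c1)   # skip: 2nd down -> 4th up
        self.out = nn.Sequential(DoubleConv(c1 + c1, c1), nn.Conv2d(c1, 1, 1))

    def forward(self, x):
        s1 = self.a1(self.d1(x))              # first attention image
        s2 = self.a2(self.d2(self.pool(s1)))  # second attention image
        s3 = self.a3(self.d3(self.pool(s2)))  # third attention image
        s4 = self.a4(self.d4(self.pool(s3)))  # fourth attention image
        y = self.u1(self.pool(s4))            # fifth convolution image
        y = self.u2(torch.cat([s4, y], 1))    # sixth convolution image
        y = self.u3(torch.cat([s3, y], 1))    # seventh convolution image
        y = self.u4(torch.cat([s2, y], 1))    # eighth convolution image
        return self.out(torch.cat([s1, y], 1))  # predicted image
```

The concatenations realize the skip connections: the first downsampling module feeds the output module, and the last downsampling module feeds the second upsampling module, as described earlier.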
The channel spatial attention module includes: a channel attention layer and a spatial attention layer.
Step A202 specifically includes:
step a2021, inputting the first convolution image into a channel attention layer, so as to obtain a channel attention image.
Step a2022, determining a channel image from the first convolution image and the channel attention image.
Step A2023, inputting the channel image into a spatial attention layer to obtain a spatial attention image.
Step a2024, obtaining a first attention image according to the channel image and the spatial attention image.
Specifically, when the convolution image is input into the channel space attention module to output an attention image, the convolution image is input into the channel attention layer to obtain the channel attention image; obtaining a channel image according to the channel attention image and the convolution image (the channel attention image and the convolution image are multiplied to obtain the channel image); inputting the channel image into a spatial attention layer to obtain a spatial attention image; and finally, obtaining a space image according to the channel image and the space attention image (the space image is obtained by multiplying the channel image and the space attention image), and taking the space image as the attention image output by the channel space attention module.
$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big)$$

$$M_c(F) = \sigma\big(W_1(W_0(F_{avg}^{c})) + W_1(W_0(F_{max}^{c}))\big)$$

wherein $M_c$ represents the channel attention layer, $\sigma$ represents the activation function Sigmoid, $F$ represents the convolution image, $\mathrm{AvgPool}$ represents the average pooling layer, $\mathrm{MLP}$ represents the multi-layer perceptron, $\mathrm{MaxPool}$ represents the maximum pooling layer, $W_0$ and $W_1$ represent weights, $F_{avg}^{c}$ represents the average feature map obtained by the channel average pooling layer, and $F_{max}^{c}$ represents the maximum feature map obtained by the channel maximum pooling layer.
$$M_s(F') = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F');\mathrm{MaxPool}(F')])\big)$$

$$M_s(F') = \sigma\big(f^{7\times 7}([F_{avg}^{s};F_{max}^{s}])\big)$$

wherein $M_s$ represents the spatial attention layer, $\sigma$ represents the activation function Sigmoid, $F'$ represents the channel image, $\mathrm{AvgPool}$ represents the average pooling layer, $\mathrm{MaxPool}$ represents the maximum pooling layer, $f^{7\times 7}$ represents a $7\times 7$ convolution operation, $F_{avg}^{s}$ represents the average feature map obtained by the spatial average pooling layer, and $F_{max}^{s}$ represents the maximum feature map obtained by the spatial maximum pooling layer.
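A PyTorch sketch of the channel space attention module matching the two attention layers above (the channel reduction ratio and the 7 × 7 kernel are conventional CBAM choices, assumed here):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """M_c(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))), with a shared
    MLP whose two linear layers play the roles of W0 and W1."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # W0
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))  # W1
    def forward(self, f):
        b, c, _, _ = f.shape
        avg = self.mlp(f.mean(dim=(2, 3)))   # channel average pooling branch
        mx = self.mlp(f.amax(dim=(2, 3)))    # channel max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """M_s(F') = sigmoid(f7x7([AvgPool(F'); MaxPool(F')])), with pooling
    taken over the channel dimension."""
    def __init__(self, kernel=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2)
    def forward(self, f):
        avg = f.mean(dim=1, keepdim=True)    # spatial average pooling
        mx = f.amax(dim=1, keepdim=True)     # spatial max pooling
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel image = F * M_c(F); output = channel image * M_s."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
    def forward(self, f):
        f = f * self.ca(f)     # channel image
        return f * self.sa(f)  # output attention image
```

Such a block is applied after each downsampling convolution module of the encoder.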
Step A204 specifically includes:
Step A2041, inputting the second convolution image into the channel attention layer to obtain a channel attention image.
Step A2042, determining a channel image from the second convolution image and the channel attention image.
Step A2043, inputting the channel image into the spatial attention layer to obtain a spatial attention image.
Step A2044, obtaining a second attention image according to the channel image and the spatial attention image.
Specifically, the processing procedures of step A204, step A206, and step A208 are analogous to that of step A202.
The output module includes: two first convolution layers and one second convolution layer; the first convolution layer is a 3 × 3 convolution operation, and the second convolution layer is a 1 × 1 convolution operation.
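As a sketch, the output module's stack of two 3 × 3 convolutions followed by a 1 × 1 convolution can be written as follows (NumPy, stride 1; the ReLU activations and zero "same" padding are assumptions not stated above):

```python
import numpy as np

def conv2d(x, weight, pad):
    """Naive stride-1 2-D convolution; x: (Cin, H, W), weight: (Cout, Cin, k, k)."""
    cout, cin, k, _ = weight.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    Ho = x.shape[1] + 2 * pad - k + 1
    Wo = x.shape[2] + 2 * pad - k + 1
    out = np.empty((cout, Ho, Wo))
    for o in range(cout):
        for i in range(Ho):
            for j in range(Wo):
                out[o, i, j] = np.sum(weight[o] * xp[:, i:i + k, j:j + k])
    return out

def output_module(x, w3a, w3b, w1):
    """Two first (3x3) convolution layers, then one second (1x1) convolution layer."""
    x = np.maximum(conv2d(x, w3a, pad=1), 0.0)  # 3x3, same padding, ReLU assumed
    x = np.maximum(conv2d(x, w3b, pad=1), 0.0)  # 3x3, same padding, ReLU assumed
    return conv2d(x, w1, pad=0)                 # 1x1 projection to output channels
```

With padding 1 the two 3 × 3 layers preserve the spatial size, so the 1 × 1 layer only changes the channel count, e.g. projecting decoder features down to the segmentation classes.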
Based on the above network-learning-based intelligent recognition method for battery CT image pole piece alignment, the invention further provides an embodiment of a network-learning-based intelligent recognition system for battery CT image pole piece alignment.
The invention relates to an intelligent recognition system for battery CT image pole piece alignment degree based on network learning, which comprises the following components:
the acquisition module is used for acquiring a CT image of the battery to be identified; wherein the battery to be identified includes: a plurality of anode pole pieces and a plurality of cathode pole pieces which are alternately arranged in sequence;
the segmentation module is used for inputting the CT image into a trained segmentation network model to obtain a segmentation image; wherein, a plurality of strip patterns exist in the segmented image, and the vertexes of the anode pole piece and the vertexes of the corresponding cathode pole piece are respectively positioned at the vertexes of two long sides of the strip patterns; the trained segmentation network model comprises: an encoder and decoder, the encoder comprising: a downsampling convolution module and a channel spatial attention module, the decoder comprising: an up-sampling convolution module and an output module.
The invention relates to an intelligent recognition system for battery CT image pole piece alignment based on network learning, which further comprises:
the post-processing module is used for extracting the positions of the cathode poles and the positions of the anode poles of the strip patterns in the segmented image; determining the adjacent polar differences of the strip patterns according to the positions of the cathode poles and the anode poles of the strip patterns; determining the anode pole difference of the strip patterns according to the positions of the anode poles in all strip patterns; and determining the cathode pole difference of the strip patterns according to the positions of the cathode poles in all strip patterns.
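A minimal sketch of the post-processing computation follows. It assumes each extracted pole position is reduced to a single coordinate along the alignment axis, and that the single-polarity "difference" means the spread (max − min) across all strips; both are assumptions about metric definitions the text does not spell out.

```python
def pole_differences(anode_pos, cathode_pos):
    """Per-strip adjacent polar differences plus anode/cathode spreads.

    anode_pos, cathode_pos: coordinates of the anode pole and cathode pole of
    each strip pattern along the alignment axis (hypothetical 1-D reduction).
    """
    # Adjacent polar difference: per strip, distance between its two poles.
    adjacent = [abs(a - c) for a, c in zip(anode_pos, cathode_pos)]
    # Anode / cathode pole differences: spread across all strips per polarity.
    anode_diff = max(anode_pos) - min(anode_pos)
    cathode_diff = max(cathode_pos) - min(cathode_pos)
    return adjacent, anode_diff, cathode_diff
```

For example, `pole_differences([10, 12, 11], [8, 9, 8])` returns adjacent differences `[2, 3, 3]`, an anode pole difference of `2`, and a cathode pole difference of `1`.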
Based on the network-learning-based intelligent recognition method for battery CT image pole piece alignment in any of the above embodiments, the invention further provides an embodiment of a computer device:
The computer device comprises a memory and a processor; the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the network-learning-based intelligent recognition method for battery CT image pole piece alignment according to any one of the above embodiments.
Based on the network-learning-based intelligent recognition method for battery CT image pole piece alignment, the invention further provides an embodiment of a computer-readable storage medium.
The computer-readable storage medium of the present invention stores a computer program thereon which, when executed by a processor, implements the steps of the network-learning-based intelligent recognition method for battery CT image pole piece alignment according to any one of the above embodiments.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.