CN117710376A - Tab defect detection method and device and electronic equipment - Google Patents
- Publication number
- CN117710376A CN117710376A CN202410166156.0A CN202410166156A CN117710376A CN 117710376 A CN117710376 A CN 117710376A CN 202410166156 A CN202410166156 A CN 202410166156A CN 117710376 A CN117710376 A CN 117710376A
- Authority
- CN
- China
- Prior art keywords
- image
- tab
- classification label
- classification
- sample set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02E—REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
- Y02E60/00—Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
- Y02E60/10—Energy storage using batteries
Abstract
The application discloses a tab defect detection method, a device, and electronic equipment, wherein the method comprises the following steps: acquiring a tab image to be detected; inputting the tab image to be detected into an image classification model to obtain a detection result of the tab image to be detected, wherein the image classification model is obtained by training on a sample set, the sample set comprises tab images marked with a plurality of classification labels, and the plurality of classification labels comprise normal, uneven lighting, jitter, too dark, overexposure, blur, oversized tail, prism smudge, and left out-of-view; the detection result is at least one of the plurality of classification labels. The method improves the accuracy of detecting the tab image to be detected and identifies tab imaging problems in advance, thereby assisting lithium battery production-line engineers in adjusting the imaging equipment in time.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a tab defect, and an electronic device.
Background
Because a tab is thin and highly reflective, tab imaging systems are very complex and imaging problems arise easily, which greatly degrades the performance of subsequent tab defect detection algorithms. A way is therefore needed to identify tab imaging problems in advance, so as to assist lithium battery production-line engineers in adjusting the imaging equipment in time.
Disclosure of Invention
The application provides a tab defect detection method, a tab defect detection device, and electronic equipment, which can identify tab imaging problems in advance so as to assist lithium battery production-line engineers in adjusting the imaging equipment in time.
In a first aspect, the present application provides a tab defect detection method, including:
acquiring an image of a tab to be detected;
inputting the tab image to be detected into an image classification model to obtain a detection result of the tab image to be detected, wherein the image classification model is obtained by training a sample set, and the sample set comprises tab images marked as a plurality of classification labels;
the plurality of classification labels comprise normal, uneven lighting, jitter, too dark, overexposure, blur, oversized tail, prism smudge, and left out-of-view;
the detection result is at least one of the plurality of classification labels.
In this embodiment, the tab image to be detected is input to the pre-trained image classification model, and the model detects the tab image to obtain a detection result. This improves the accuracy of detecting the tab image to be detected and identifies tab imaging problems in advance, thereby assisting lithium battery production-line engineers in adjusting the imaging equipment in time.
In an embodiment of the present application, before inputting the tab image to be detected into the image classification model to obtain the detection result of the tab image to be detected, the method further includes:
acquiring a plurality of first tab images, wherein the plurality of first tab images are marked as a plurality of classification labels;
performing, on a first tab image under a target classification label, image processing matched with the target classification label to obtain a second tab image belonging to the target classification label, wherein the target classification label is any one of the plurality of classification labels;
adding the second tab image to a sample set, and labeling the second tab image with the target classification label;
and training an initial network model with the sample set to obtain the image classification model.
In this embodiment, based on a small number of first tab images, a large number of second tab images belonging to the same classification can be obtained, and then a sample set formed by the second tab images is used to train the initial network model, so that the classification accuracy of the image classification model can be improved.
In an embodiment of the present application, performing image processing matched with the target classification label on the first tab image under the target classification label to obtain the second tab image belonging to the target classification label includes:
downsampling a third tab image under the target classification label to obtain the second tab image belonging to the target classification label, wherein the third tab image is any first tab image under the target classification label.
In this embodiment, downsampling the third tab image yields a new tab image (i.e., the second tab image) that carries the target classification label, so the number of samples in the sample set can be expanded in this way.
In an embodiment of the present application, downsampling the third tab image under the target classification label to obtain the second tab image belonging to the target classification label includes:
downsampling the third tab image under the target classification label to obtain a sampled image;
randomly cropping the sampled image to obtain a sub-image;
padding the sub-image to obtain a first image, wherein the first image has the same size as the sampled image;
and performing processing matched with the target classification label on the first image to obtain the second tab image belonging to the target classification label.
In this embodiment, the sampled image may be randomly cropped multiple times to obtain multiple sub-images and thus multiple second tab images, expanding the sample images in the sample set and improving the classification accuracy of the image classification model.
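The patent gives no implementation of this downsample-crop-pad pipeline; the following is a minimal numpy sketch under the assumptions of grayscale images, nearest-neighbor downsampling, and zero padding (the function names and the factor/crop sizes are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbor downsampling: keep every `factor`-th pixel."""
    return img[::factor, ::factor]

def random_crop(img: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Randomly crop a crop_h x crop_w sub-image from the sampled image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w]

def pad_to(img: np.ndarray, target_h: int, target_w: int, fill: int = 0) -> np.ndarray:
    """Pad the sub-image back to the sampled image's size (first image)."""
    out = np.full((target_h, target_w), fill, dtype=img.dtype)
    out[:img.shape[0], :img.shape[1]] = img
    return out

def expand_sample(third_tab_image: np.ndarray) -> np.ndarray:
    """Downsample -> random crop -> pad, yielding one candidate first image."""
    sampled = downsample(third_tab_image, factor=2)
    sub = random_crop(sampled, sampled.shape[0] // 2, sampled.shape[1] // 2)
    return pad_to(sub, sampled.shape[0], sampled.shape[1])
```

Calling `expand_sample` repeatedly on the same third tab image produces different crops, which is how the sample set grows from few originals.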
In an embodiment of the present application, performing processing matched with the target classification label on the first image to obtain the second tab image belonging to the target classification label includes:
if the target classification label is a first classification label, adjusting the local brightness of the first image to obtain a second tab image belonging to the first classification label, wherein the first classification label indicates uneven lighting.
In this embodiment, when the first classification label is uneven lighting, the local brightness of the first image is adjusted to obtain the second tab image belonging to the first classification label.
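The patent does not say how the local brightness is adjusted; a minimal sketch, assuming a circular region whose gain is raised to mimic uneven lighting (the mask shape, `gain`, and `radius` are illustrative assumptions):

```python
import numpy as np

def add_uneven_lighting(first_image: np.ndarray, gain: float = 1.6,
                        center=None, radius: float = 0.5) -> np.ndarray:
    """Brighten a local circular region to synthesize an 'uneven lighting' sample."""
    h, w = first_image.shape
    cy, cx = center if center is not None else (h // 2, w // 2)
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    mask = dist < radius * min(h, w)   # region whose brightness is adjusted
    out = first_image.astype(np.float32)
    out[mask] *= gain                  # raise local brightness only inside the mask
    return np.clip(out, 0, 255).astype(np.uint8)
```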
In an embodiment of the present application, performing processing matched with the target classification label on the first image to obtain the second tab image includes:
if the target classification label is a second classification label, performing Gaussian smoothing on the first image to obtain a second tab image belonging to the second classification label, wherein the second classification label indicates too dark, overexposure, or blur.
In this embodiment, the tab images marked as too dark, overexposed, or blurred in the sample set may be expanded.
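The Gaussian smoothing step can be sketched without any image library as a separable convolution; a minimal numpy version, assuming grayscale input and a kernel radius of three sigma (both assumptions, the patent specifies neither):

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Separable Gaussian blur: convolve every row, then every column."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(img.astype(np.float64), radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```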
In an embodiment of the present application, performing processing matched with the target classification label on the first image to obtain the second tab image includes:
if the target classification label is a third classification label, adding Gaussian noise to the first image to obtain a second tab image belonging to the third classification label, wherein the third classification label indicates prism smudge.
In this embodiment, the tab images marked as prism smudge in the sample set may be expanded.
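Adding Gaussian noise is a one-liner; a sketch assuming zero-mean noise and an illustrative standard deviation (the patent does not state the noise parameters):

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to synthesize a 'prism smudge' sample."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```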
In an embodiment of the present application, the initial network model includes a first optimizer and a second optimizer, wherein the first optimizer adopts an adaptive moment estimation (Adam) algorithm and the second optimizer adopts a stochastic gradient descent (SGD) algorithm;
in the process of training the initial network model with the sample set, if the number of training iterations is less than or equal to a first threshold, the initial network model uses the first optimizer;
and if the number of training iterations is greater than the first threshold, the initial network model uses the second optimizer.
In this embodiment, the first optimizer is used in the early stage of training for fast convergence; in the later stage, when the data are more stable and the gradient information is more reliable, the second optimizer is used for fine-tuning, so that the objective function can escape oscillations near the minimum and gradually converge to it.
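The Adam-then-SGD switch can be illustrated on a toy objective; this is a self-contained sketch, not the patent's training loop, and all hyperparameters (`switch_epoch`, learning rates) are illustrative:

```python
import numpy as np

def train(w0, grad_fn, epochs=200, switch_epoch=100, lr_adam=0.1, lr_sgd=0.01):
    """Use Adam while iteration <= switch_epoch (first threshold), then plain SGD."""
    w = np.asarray(w0, dtype=np.float64).copy()
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    for t in range(1, epochs + 1):
        g = grad_fn(w)
        if t <= switch_epoch:
            # early phase: adaptive moment estimation for fast convergence
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g ** 2
            m_hat = m / (1 - beta1 ** t)
            v_hat = v / (1 - beta2 ** t)
            w -= lr_adam * m_hat / (np.sqrt(v_hat) + eps)
        else:
            # late phase: stochastic gradient descent for fine-tuning
            w -= lr_sgd * g
    return w

# toy objective f(w) = ||w - 3||^2 with gradient 2 * (w - 3)
w_star = train(np.array([0.0]), lambda w: 2.0 * (w - 3.0))
```

Adam drives `w` near the minimum quickly; the SGD phase then damps the remaining oscillation, mirroring the fast-convergence/fine-tuning rationale above.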
In an embodiment of the present application, training the initial network model with the sample set to obtain the image classification model includes:
in each round of training of the initial network model, randomly selecting a preset number of second tab images from the sample set to train the initial network model and obtain the image classification model, wherein the preset number is the product of the number of second tab images in the sample set and a preset value.
In this embodiment, randomly selecting second tab images from the sample set in each round of training introduces more randomness, helping the model explore different sample combinations during training and improving its generalization ability. In addition, random selection reduces the model's sensitivity to a specific batch or training-sample order and helps avoid overfitting.
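The per-round selection ("preset number = number of second tab images × preset value") can be sketched as sampling without replacement; the `ratio` value is an illustrative stand-in for the patent's unspecified preset value:

```python
import numpy as np

def epoch_subset(sample_indices: np.ndarray, ratio: float = 0.8, seed=None) -> np.ndarray:
    """Randomly pick ratio * N sample indices (without replacement) for one round."""
    rng = np.random.default_rng(seed)
    k = int(len(sample_indices) * ratio)   # preset number = N * preset value
    return rng.choice(sample_indices, size=k, replace=False)
```

A fresh subset each epoch (different `seed` or a shared generator) is what introduces the extra randomness described above.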
In an embodiment of the present application, the tab head and the tab tail are placed in the same direction in every tab image in the sample set.
In this embodiment, the placement direction of the tab head and tab tail is the same in every tab image, for example, with the tab head on the left side of the image and the tab tail on the right side. When the model is trained on such a sample set, the tab orientation is consistent across images, so the operation of adjusting the tab orientation can be omitted and model training efficiency improved.
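If incoming images do not already share the head-left convention, normalizing them is a horizontal flip; a sketch assuming the head side is known per image (how it is known is outside the patent's description):

```python
import numpy as np

def normalize_orientation(img: np.ndarray, head_on_left: bool) -> np.ndarray:
    """Flip horizontally so the tab head always ends up on the left side."""
    return img if head_on_left else np.fliplr(img)
```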
In a second aspect, an embodiment of the present application provides a tab defect detection device, including:
the first acquisition module is used for acquiring an image of the tab to be detected;
the second acquisition module is used for inputting the tab image to be detected into an image classification model to obtain a detection result of the tab image to be detected, wherein the image classification model is obtained by training a sample set, and the sample set comprises tab images marked as a plurality of classification labels;
the plurality of classification labels comprise normal, uneven lighting, jitter, too dark, overexposure, blur, oversized tail, prism smudge, and left out-of-view;
the detection result is at least one of the plurality of classification labels.
In this embodiment, the tab image to be detected is input to the pre-trained image classification model, and the model detects the tab image to obtain a detection result. This improves the accuracy of detecting the tab image to be detected and identifies tab imaging problems in advance, thereby assisting lithium battery production-line engineers in adjusting the imaging equipment in time.
In an embodiment of the present application, the apparatus further includes:
the third acquisition module is used for acquiring a plurality of first tab images, and the plurality of first tab images are marked as a plurality of classification labels;
the processing module is used for performing, on a first tab image under a target classification label, image processing matched with the target classification label to obtain a second tab image belonging to the target classification label, wherein the target classification label is any one of the plurality of classification labels;
the adding module is used for adding the second tab image to a sample set and labeling the second tab image with the target classification label;
and the training module is used for training the initial network model by adopting the sample set to obtain an image classification model.
In this embodiment, based on a small number of first tab images, a large number of second tab images belonging to the same classification can be obtained, and then a sample set formed by the second tab images is used to train the initial network model, so that the classification accuracy of the image classification model can be improved.
In an embodiment of the present application, the processing module includes:
and the processing sub-module is used for downsampling a third tab image under the target classification label to obtain a second tab image belonging to the target classification label, wherein the third tab image is any first tab image under the target classification label.
In this embodiment, downsampling the third tab image yields a new tab image (i.e., the second tab image) that carries the target classification label, so the number of samples in the sample set can be expanded in this way.
In an embodiment of the present application, the processing sub-module includes:
the sampling unit is used for downsampling the third tab image under the target classification label to obtain a sampled image;
the cropping unit is used for randomly cropping the sampled image to obtain a sub-image;
the padding unit is used for padding the sub-image to obtain a first image, the first image having the same size as the sampled image;
and the processing unit is used for performing processing matched with the target classification label on the first image to obtain a second tab image belonging to the target classification label.
In this embodiment, the sampled image may be randomly cropped multiple times to obtain multiple sub-images and thus multiple second tab images, expanding the sample images in the sample set and improving the classification accuracy of the image classification model.
In an embodiment of the present application, the processing unit is specifically configured to:
and if the target classification label is a first classification label, adjusting the local brightness of the first image to obtain a second tab image belonging to the first classification label, wherein the first classification label indicates uneven lighting.
In this embodiment, when the first classification label is uneven lighting, the local brightness of the first image is adjusted to obtain the second tab image belonging to the first classification label.
In an embodiment of the present application, the processing unit is specifically configured to:
and if the target classification label is a second classification label, performing Gaussian smoothing on the first image to obtain a second tab image belonging to the second classification label, wherein the second classification label indicates too dark, overexposure, or blur.
In this embodiment, the tab images marked as too dark, overexposed, or blurred in the sample set may be expanded.
In an embodiment of the present application, the processing unit is specifically configured to:
and if the target classification label is a third classification label, adding Gaussian noise to the first image to obtain a second tab image belonging to the third classification label, wherein the third classification label indicates prism smudge.
In this embodiment, the tab images marked as prism smudge in the sample set may be expanded.
In an embodiment of the present application, the initial network model includes a first optimizer and a second optimizer, wherein the first optimizer adopts an adaptive moment estimation (Adam) algorithm and the second optimizer adopts a stochastic gradient descent (SGD) algorithm;
in the process of training the initial network model with the sample set, if the number of training iterations is less than or equal to a first threshold, the initial network model uses the first optimizer;
and if the number of training iterations is greater than the first threshold, the initial network model uses the second optimizer.
In this embodiment, the first optimizer is used in the early stage of training for fast convergence; in the later stage, when the data are more stable and the gradient information is more reliable, the second optimizer is used for fine-tuning, so that the objective function can escape oscillations near the minimum and gradually converge to it.
In an embodiment of the present application, the training module is configured to randomly select a preset number of second tab images from the sample set in each round of training of the initial network model to obtain the image classification model, wherein the preset number is the product of the number of second tab images in the sample set and a preset value.
In this embodiment, randomly selecting second tab images from the sample set in each round of training introduces more randomness, helping the model explore different sample combinations during training and improving its generalization ability. In addition, random selection reduces the model's sensitivity to a specific batch or training-sample order and helps avoid overfitting.
In an embodiment of the present application, the tab head and the tab tail are placed in the same direction in every tab image in the sample set.
In this embodiment, the placement direction of the tab head and tab tail is the same in every tab image, for example, with the tab head on the left side of the image and the tab tail on the right side. When the model is trained on such a sample set, the tab orientation is consistent across images, so the operation of adjusting the tab orientation can be omitted and model training efficiency improved.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or an instruction stored in the memory and capable of running on the processor, where the program or the instruction is executed by the processor to implement the steps of the tab defect detection method according to the first aspect.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer and implementable according to the content of the specification, and to make the above and other objects, features, and advantages of the present application more comprehensible, the detailed description of the present application is given below.
Drawings
Features, advantages, and technical effects of exemplary embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a tab defect detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a tab defect detection method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a tab defect detecting device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, based on the embodiments herein, which would be apparent to one of ordinary skill in the art without making any inventive effort, are intended to be within the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order.
Fig. 1 is a flow chart of a tab defect detection method according to an embodiment of the present application, as shown in fig. 1, the method includes steps 101 to 102, where:
Step 101: acquiring an image of the tab to be detected. The tab image to be detected may be an image obtained by photographing the tab with an imaging device.
Step 102: inputting the tab image to be detected into an image classification model to obtain a detection result of the tab image to be detected, wherein the image classification model is obtained by training on a sample set, the sample set comprises tab images marked with a plurality of classification labels, the plurality of classification labels comprise normal, uneven lighting, jitter, too dark, overexposure, blur, oversized tail, prism smudge, and left out-of-view, and the detection result is at least one of the plurality of classification labels.
When the tab is photographed, the tab head is on the left side of the tab image and the tab tail is on the right side. Left out-of-view means that, when the tab is photographed, the left side of the tab is not fully displayed in the image, i.e., it extends beyond the left edge of the image. Oversized tail means that the spacing between the upper tab and the lower tab at the head of the tab is too large. Prism smudge means that dirt on a prism of the imaging device photographing the tab causes defects in the captured image.
In the above, the image classification model is obtained by training an initial network model in advance on a sample set, wherein the sample set comprises a plurality of tab images and each tab image is marked with a classification label. Illustratively, the plurality of classification labels may include normal, uneven lighting, jitter, too dark, overexposure, blur, oversized tail, prism smudge, left out-of-view, and the like. The image classification model detects the input tab image to be detected and obtains a detection result, which may be at least one of these labels. By setting multiple classification labels according to problems that frequently occur in tab imaging, the granularity of the detection result is improved when the image classification model detects the tab image to be detected, so that the classification of the tab image is more accurate.
In this embodiment, the tab image to be detected is input to the pre-trained image classification model, and the model detects the tab image to obtain a detection result. This improves the accuracy of detecting the tab image to be detected and identifies tab imaging problems in advance, thereby assisting lithium battery production-line engineers in adjusting the imaging equipment in time.
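The patent does not specify how "at least one of the plurality of classification labels" is produced from the model's output; one common approach is per-label scores with a threshold. A minimal sketch under that assumption (the translated label names, the threshold, and `decode_detection` are illustrative, not the patent's method):

```python
import numpy as np

LABELS = ["normal", "uneven lighting", "jitter", "too dark", "overexposure",
          "blur", "oversized tail", "prism smudge", "left out-of-view"]

def decode_detection(scores: np.ndarray, threshold: float = 0.5) -> list:
    """Map per-label scores to the detected classification labels.

    Falls back to the single highest-scoring label so that at least one
    label is always returned, matching the claim's 'at least one'."""
    picked = [LABELS[i] for i, s in enumerate(scores) if s >= threshold]
    return picked if picked else [LABELS[int(np.argmax(scores))]]
```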
Fig. 2 is a flowchart of a tab defect detection method according to an embodiment of the present application, as shown in fig. 2, where the method includes steps 201 to 206, where:
Step 201: acquiring a plurality of first tab images, wherein the plurality of first tab images are marked with a plurality of classification labels, and each first tab image is marked with at least one classification label;
Step 202: performing, on a first tab image under a target classification label, image processing matched with the target classification label to obtain a second tab image belonging to the target classification label, wherein the target classification label is any one of the plurality of classification labels;
Step 203: adding the second tab image to a sample set, and labeling the second tab image with the target classification label;
Step 204: training an initial network model with the sample set to obtain an image classification model, wherein the image classification model is used for determining the classification label to which an input image belongs.
Step 205: acquiring an image of the tab to be detected. The tab image to be detected may be an image obtained by photographing the tab with an imaging device;
Step 206: inputting the tab image to be detected into an image classification model to obtain a detection result of the tab image to be detected, wherein the image classification model is obtained by training on a sample set, the sample set comprises tab images marked with a plurality of classification labels, and the detection result comprises at least one classification indicated by the classification labels.
Specifically, the more training samples there are, the better the training effect of the image classification model and the higher its detection accuracy. In practice, however, images with quality problems in tab imaging are rare, so few training samples can be collected. The following describes how a large number of sample images can be obtained from a small number of tab images.
A plurality of first tab images are acquired, each marked with a classification label, and each classification label has at least one corresponding first tab image. For each first tab image, image processing matched with its classification label is performed to obtain a second tab image, which carries the same classification label as the first tab image. For example, for a first tab image labeled "uneven lighting", image processing matched with "uneven lighting" is performed on it to obtain a second tab image that is also labeled "uneven lighting".
The initial network model may employ an EfficientNet model.
The tab head and tab tail are placed in the same direction in every tab image in the sample set. For example, the tab head is on the left side of the tab image and the tab tail is on the right side. When the model is trained on such a sample set, the tab orientation is consistent across images, so the operation of adjusting the tab orientation can be omitted, improving model training efficiency.
In the above, based on a small number of first tab images, a large number of second tab images belonging to the same classification can be obtained, and then the initial network model is trained by adopting a sample set formed by the second tab images, so that the classification accuracy of the image classification model can be improved.
In still another embodiment of the present application, the performing image processing on the first tab image in the target classification tag to match the target classification tag to obtain a second tab image belonging to the target classification tag includes:
and downsampling a third tab image in the target classification label to obtain a second tab image belonging to the target classification label, wherein the third tab image is any one of the first tab images in the target classification label. In the above, by downsampling the third tab image, a new tab image (i.e., the second tab image) may be obtained, where the new tab image has the target classification label, and by this way, the number of samples in the sample set may be expanded.
For a third tab image marked with the two classification labels "oversized tail" and "left superview", the image obtained by downsampling the third tab image can be added to the sample set directly as the second tab image. For a third tab image marked with any other classification label, the downsampled image may likewise be added to the sample set directly as the second tab image, or it may be processed as follows:
illustratively, the downsampling the third tab image in the target classification label to obtain a second tab image belonging to the target classification label includes:
downsampling the third tab image in the target classification label to obtain a sampled image;
randomly cropping the sampled image to obtain a sub-image;
filling the sub-image to obtain a first image, wherein the size of the first image is the same as that of the sampled image;
and performing processing matched with the target classification label on the first image to obtain a second tab image belonging to the target classification label.
For example, the third tab image is downsampled to obtain a 512×512 sampled image; a 400×400 region is then randomly cropped from the sampled image to obtain a sub-image, and the pixels around the 400×400 region are filled with the pixel value 0 until the size reaches 512×512, so as to obtain the first image.
In the above, the sampled image can be randomly cropped multiple times to obtain multiple sub-images, and thus multiple second tab images, expanding the sample images in the sample set and improving the classification accuracy of the image classification model.
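As a concrete illustration (not part of the patent text), the downsample, random-crop, and zero-pad steps above can be sketched in NumPy. The nearest-neighbour downsampling and the centred placement of the crop inside the padded image are our assumptions:

```python
# Sketch of the augmentation steps described above: downsample a tab image
# to 512x512, randomly crop a 400x400 sub-image, then zero-pad it back to
# 512x512. Nearest-neighbour sampling and centred padding are assumptions.
import numpy as np

def downsample_crop_pad(image, down_size=512, crop_size=400, rng=None):
    """Return an augmented grayscale image of shape (down_size, down_size)."""
    rng = rng or np.random.default_rng()
    # Naive nearest-neighbour downsampling to down_size x down_size.
    h, w = image.shape[:2]
    ys = np.arange(down_size) * h // down_size
    xs = np.arange(down_size) * w // down_size
    sampled = image[ys][:, xs]
    # Random crop_size x crop_size crop inside the sampled image.
    top = rng.integers(0, down_size - crop_size + 1)
    left = rng.integers(0, down_size - crop_size + 1)
    sub = sampled[top:top + crop_size, left:left + crop_size]
    # Zero-fill around the crop so the result matches the sampled size.
    padded = np.zeros_like(sampled)
    off = (down_size - crop_size) // 2
    padded[off:off + crop_size, off:off + crop_size] = sub
    return padded
```

Calling this repeatedly with different random seeds yields multiple distinct second tab images from one third tab image, as the paragraph above describes.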
Different target classification labels can correspond to different processing procedures. For example, if the target classification label is a first classification label, the local brightness of the first image is adjusted to obtain a second tab image belonging to the first classification label, where the first classification label is used to indicate uneven lighting.
In this embodiment, when the first classification label is "uneven lighting", the local brightness of the first image is adjusted to obtain a second tab image with the classification label "uneven lighting". It should be noted that a different local brightness adjustment may be used for each first image: for example, the brightness of the central area of first image a is adjusted, while the brightness of the upper-left area of first image b is adjusted. This increases the randomness of the local brightness, so that the samples in the sample set cover more different scenes and the generalization capability of the image classification model is improved.
In the above, when the first classification label is uneven lighting, the local brightness of the first image is adjusted to obtain the second tab image belonging to the first classification label.
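A minimal sketch of such a local-brightness adjustment, assuming a simple multiplicative gain over a randomly placed window (the window size, gain value, and function name are illustrative, not from the patent):

```python
# Sketch of the "uneven lighting" augmentation: brighten a randomly chosen
# local region of the image so the result mimics unevenly lit tab imaging.
import numpy as np

def uneven_lighting(image, gain=1.5, rng=None):
    """Scale the brightness of a random window covering ~1/4 of the image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Random top-left corner; the window always fits inside the image.
    top = rng.integers(0, h // 2)
    left = rng.integers(0, w // 2)
    out = image.astype(np.float32)
    region = out[top:top + h // 2, left:left + w // 2]
    # Scale the region's brightness and clip back to the valid pixel range.
    out[top:top + h // 2, left:left + w // 2] = np.clip(region * gain, 0, 255)
    return out.astype(np.uint8)
```

Varying `top`, `left`, and `gain` per call gives the randomness in local brightness that the paragraph above motivates.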
For example, if the target classification label is a second classification label, Gaussian smoothing is performed on the first image to obtain a second tab image belonging to the second classification label, where the second classification label is used to indicate excessive darkness, overexposure or blurring. In this way, the tab images marked as too dark, overexposed or blurred in the sample set can be expanded.
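The Gaussian smoothing step could look like the following sketch, using a separable 1-D Gaussian kernel applied along rows and then columns (the 3σ kernel radius is our choice):

```python
# Sketch of Gaussian smoothing used to imitate blurred tab imaging.
import numpy as np

def gaussian_blur(image, sigma=1.0):
    """Separable Gaussian smoothing of a 2-D grayscale image."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()          # normalise so brightness is preserved
    img = image.astype(np.float32)
    # Convolve each row, then each column, with the 1-D kernel.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return blurred.astype(np.uint8)
```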
Illustratively, if the target classification label is a third classification label, Gaussian noise is added to the first image to obtain a second tab image belonging to the third classification label, where the third classification label is used to indicate prism smudge. In this way, the tab images marked as prism smudge in the sample set can be expanded.
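A possible implementation of the Gaussian-noise augmentation (the noise standard deviation is an assumed example value, not specified in the patent):

```python
# Sketch: add zero-mean Gaussian noise to imitate prism-smudge artifacts.
import numpy as np

def add_gaussian_noise(image, std=10.0, rng=None):
    """Return a copy of the image with zero-mean Gaussian noise added."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, std, size=image.shape)
    noisy = image.astype(np.float32) + noise
    # Clip back to the valid 8-bit pixel range before casting.
    return np.clip(noisy, 0, 255).astype(np.uint8)
```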
For example, if the target classification label is a fourth classification label, a kernel function is used to process the first image to obtain a second tab image belonging to the fourth classification label, where the fourth classification label is used to indicate jitter. In this way, the tab images marked as jitter in the sample set can be expanded.
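The patent does not specify the kernel. One plausible choice for imitating camera jitter is a horizontal motion-blur (line) kernel, sketched below; the kernel length and orientation are assumptions:

```python
# Sketch: horizontal motion-blur kernel to imitate jitter during imaging.
import numpy as np

def motion_blur(image, length=9):
    """Convolve each row with a uniform line kernel of the given length."""
    kernel = np.ones(length) / length      # averaging kernel = linear smear
    img = image.astype(np.float32)
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return blurred.astype(np.uint8)
```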
In yet another embodiment of the present application, the initial network model includes a first optimizer that employs an adaptive moment estimation algorithm and a second optimizer that employs a stochastic gradient descent algorithm;
in the process of training the initial network model by adopting the sample set, if the training times are smaller than or equal to a first time threshold value, the initial network model adopts the first optimizer;
and if the training times are greater than the first time threshold, the initial network model adopts the second optimizer.
Illustratively, the first optimizer may be an adaptive moment estimation (Adaptive Moment Estimation, Adam) optimizer, which can adaptively update the learning rate. The second optimizer may be a stochastic gradient descent (Stochastic Gradient Descent, SGD) optimizer. The first time threshold may be set according to the actual situation and is not limited here; for example, if the total number of training times is 200, the first time threshold may be set to 150.
Compared with the SGD optimizer, the Adam optimizer converges faster and is less likely to fall into a local optimum. The learning rate scheduler adopts cosine annealing (Cosine Annealing): between training times 0 and 200, the learning rate decreases from 0.001 (the initial learning rate) to 0 in the form of a cosine function.
In the early stage of model training (i.e., when the training times are less than or equal to the first time threshold), the Adam optimizer is used for rapid convergence; in the later stage of training (i.e., when the training times are greater than the first time threshold), the SGD optimizer is used for fine-tuning. In the later stage of model training, the data tends to be more stable and the gradient information more reliable, so SGD may perform better; because its learning rate is smaller, it can settle near the minimum of the objective function and converge gradually.
The Adam optimizer is very efficient at exploring the solution space and converging quickly in the early stage, but in the late stage it may explore too aggressively, causing the model to fall into a locally optimal solution or fail to reach a better globally optimal solution. SGD is more conservative, helping the model search and converge better in the later stage.
In the foregoing, in the early stage of model training the first optimizer is used for rapid convergence; in the later stage of model training the data is often more stable and the gradient information more reliable, so the second optimizer is used for fine-tuning, allowing the objective function to settle near its minimum and converge gradually.
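The two-stage optimizer schedule and the cosine-annealing learning-rate curve described above can be sketched in plain Python. The 150/200 values follow the example in the text; the function names are ours:

```python
# Sketch of the Adam->SGD switching rule and the cosine-annealing schedule.
import math

TOTAL_EPOCHS = 200     # total number of training times (from the text)
SWITCH_EPOCH = 150     # example first time threshold (from the text)
INITIAL_LR = 0.001     # initial learning rate (from the text)

def optimizer_for_epoch(epoch):
    """Adam for fast early convergence, SGD for late-stage fine-tuning."""
    return "Adam" if epoch <= SWITCH_EPOCH else "SGD"

def cosine_annealed_lr(epoch):
    """Cosine annealing from INITIAL_LR down to 0 over TOTAL_EPOCHS."""
    return INITIAL_LR * 0.5 * (1.0 + math.cos(math.pi * epoch / TOTAL_EPOCHS))
```

In a PyTorch training loop the same effect would typically be achieved by constructing `torch.optim.Adam` and `torch.optim.SGD` instances and switching between them at the threshold epoch.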
In yet another embodiment of the present application, the training the initial network model using the sample set to obtain an image classification model includes:
In each round of training of the initial network model, a preset number of second tab images are randomly selected from the sample set to train the initial network model, so as to obtain the image classification model, wherein the preset number is the product of the number of second tab images in the sample set and a preset value.
In the foregoing, the preset value may be set according to the actual situation, for example 80% or 85%, and is not limited here; it should be noted that the preset value should fall within the range of 0 to 100%. If the preset value is 80%, then in each round of training 80% of the second tab images are randomly selected from the sample set to train the initial network model. This introduces more randomness, helps the model better explore different sample combinations during training, and improves the generalization capability of the model. In addition, by randomly selecting samples from the sample set, the sensitivity of the model to a specific batch or training-sample order can be reduced, avoiding overfitting.
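The per-round random subset selection might be implemented as follows (sampling without replacement is our assumption; the patent only states that a fraction of samples is randomly selected each round):

```python
# Sketch: pick a random 80% subset of the sample indices for each epoch.
import numpy as np

def epoch_subset(sample_indices, fraction=0.8, rng=None):
    """Randomly select fraction * N sample indices, without replacement."""
    rng = rng or np.random.default_rng()
    n = int(len(sample_indices) * fraction)
    return rng.choice(sample_indices, size=n, replace=False)
```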
A Dropout layer is arranged in the network structure of the initial network model to prevent the model from overfitting to specific neurons. The basic idea of Dropout is to randomly zero the output of some neurons during network training, thereby reducing the dependency between different neurons, i.e., to "drop" a part of the neurons. The purpose is to prevent the model from relying excessively on certain specific neurons, thereby improving its generalization ability.
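The Dropout idea sketched above can be written as a small NumPy forward pass. This shows inverted dropout, the variant PyTorch's `nn.Dropout` uses, where surviving activations are rescaled by 1/(1−p) so the expected output is unchanged:

```python
# Sketch of an inverted-dropout forward pass.
import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    """Zero each unit with probability p during training, rescale the rest;
    identity at inference time."""
    if not training:
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p   # True = neuron is kept
    return activations * mask / (1.0 - p)
```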
Model training adopts the PyTorch framework. The loss function used by the initial network model is the cross-entropy loss function, the default learning rate is 0.001, and the batch_size during training is 64 (i.e., 64 samples per training iteration); the total number of training times is 200. The optimal model is automatically saved during training and stored under the current folder.
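For reference, the cross-entropy loss mentioned above reduces, for a single sample, to the negative log-softmax probability of the true label (this matches what PyTorch's `nn.CrossEntropyLoss` computes per sample); a minimal NumPy sketch:

```python
# Sketch: per-sample cross-entropy loss = -log softmax(logits)[label].
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy loss for one sample given raw class scores (logits)."""
    shifted = logits - logits.max()                 # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]
```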
The image classification model obtained through the above process is used for detection; the model outputs a score for each classification. If the classification with the highest score is taken as the detection result, the detection accuracy of each classification is as follows: uneven lighting: 75.00%; jitter: 70.97%; too dark: 93.55%; overexposure: 87.88%; blur: 97.06%; oversized tail: 58.62%; prism smudge: 36.00%; normal: 85.71%; left superview: 100.00%. The overall recognition accuracy is 80.43%.
If the three classifications with the highest scores are taken as the detection result, the detection accuracy of each classification is as follows: uneven lighting: 94.23%; jitter: 80.65%; too dark: 100.00%; overexposure: 100.00%; blur: 100.00%; oversized tail: 100.00%; prism smudge: 72.00%; normal: 99.05%; left superview: 100.00%. The overall recognition accuracy is 95.38%.
According to the tab defect detection method, imaging problems in tab fold-over images can be detected and judged in advance, assisting lithium battery production-line workers in adjusting the imaging equipment in time, and preventing imaging problems from interfering with the accuracy of tab fold-over defect detection performed on the acquired tab images.
Referring to fig. 3, a schematic structural diagram of a tab defect detecting device according to an embodiment of the present application is shown in fig. 3, and the tab defect detecting device 300 includes:
the first acquiring module 301 is configured to acquire an image of a tab to be measured;
the second obtaining module 302 is configured to input the tab image to be tested into an image classification model to obtain a detection result of the tab image to be tested, where the image classification model is obtained by training with a sample set, the sample set includes tab images labeled with a plurality of classification labels, and the plurality of classification labels include normal, uneven lighting, jitter, too dark, overexposure, blurring, oversized tail, prism smudge, and left superview; the detection result is at least one of the plurality of classification labels.
In this embodiment, the tab image to be detected is input into the pre-trained image classification model, which detects the tab image and outputs a detection result. This improves the accuracy of detecting the tab image to be detected and allows tab imaging problems to be judged in advance, assisting lithium battery production-line workers in adjusting the imaging equipment in time.
In an embodiment of the present application, the tab defect detection device 300 further includes:
the third acquisition module is used for acquiring a plurality of first tab images, and the plurality of first tab images are marked as a plurality of classification labels;
the processing module is used for carrying out image processing on the first tab image in the target classification label and matching with the target classification label to obtain a second tab image belonging to the target classification label, wherein the target classification label is any classification label in the plurality of classification labels;
the adding module is used for adding the second tab image to a sample set and labeling the second tab image as the target classification label;
and the training module is used for training the initial network model by adopting the sample set to obtain an image classification model.
In this embodiment, based on a small number of first tab images, a large number of second tab images belonging to the same classification can be obtained, and then a sample set formed by the second tab images is used to train the initial network model, so that the classification accuracy of the image classification model can be improved.
In an embodiment of the present application, the processing module includes:
and the processing sub-module is used for downsampling a third tab image in the target classification label to obtain a second tab image belonging to the target classification label, wherein the third tab image is any first tab image in the target classification label.
In this embodiment, a new tab image (i.e., the second tab image) can be obtained by downsampling the third tab image, and the new tab image has the target classification label, so that the number of samples in the sample set can be expanded in the above manner.
In an embodiment of the present application, the processing sub-module includes:
the sampling unit is used for downsampling the third tab image in the target classification label to obtain a sampled image;
the cropping unit is used for randomly cropping the sampled image to obtain a sub-image;
the filling unit is used for filling the sub-image to obtain a first image, wherein the size of the first image is the same as that of the sampled image;
and the processing unit is used for performing processing matched with the target classification label on the first image to obtain a second tab image belonging to the target classification label.
In this embodiment, the sampled image may be randomly cropped multiple times to obtain multiple sub-images, and thus multiple second tab images, expanding the sample images in the sample set and improving the classification accuracy of the image classification model.
In an embodiment of the present application, the processing unit is specifically configured to:
and if the target classification label is a first classification label, adjusting the local brightness of the first image to obtain a second tab image belonging to the first classification label, wherein the first classification label is used for indicating uneven lighting.
In this embodiment, when the first classification label is uneven lighting, the local brightness of the first image is adjusted to obtain the second tab image belonging to the first classification label.
In an embodiment of the present application, the processing unit is specifically configured to:
and if the target classification label is a second classification label, performing Gaussian smoothing on the first image to obtain a second tab image belonging to the second classification label, wherein the second classification label is used for indicating excessive darkness, overexposure or blurring.
In this embodiment, the tab images marked as too dark, overexposed or blurred in the sample set may be expanded.
In an embodiment of the present application, the processing unit is specifically configured to:
and if the target classification label is a third classification label, adding Gaussian noise to the first image to obtain a second tab image belonging to the third classification label, wherein the third classification label is used for indicating prism smudge.
In this embodiment, the tab images marked as prism smudge in the sample set may be expanded.
In an embodiment of the present application, the initial network model includes a first optimizer and a second optimizer, where the first optimizer adopts an adaptive moment estimation algorithm, and the second optimizer adopts a stochastic gradient descent algorithm;
in the process of training the initial network model by adopting the sample set, if the training times are smaller than or equal to a first time threshold value, the initial network model adopts the first optimizer;
And if the training times are greater than the first time threshold, the initial network model adopts the second optimizer.
In this embodiment, in the early stage of model training the first optimizer is used for rapid convergence; in the later stage of model training the data is often more stable and the gradient information more reliable, so the second optimizer is used for fine-tuning, allowing the objective function to settle near its minimum and converge gradually.
In an embodiment of the present application, the training module is configured to randomly select a preset number of second tab images from the sample set in each round of training of the initial network model, so as to obtain the image classification model, where the preset number is the product of the number of second tab images in the sample set and a preset value.
In this embodiment, in each round of training, second tab images are randomly selected from the sample set to train the initial network model, which introduces more randomness, helps the model better explore different sample combinations during training, and improves the generalization capability of the model. In addition, by randomly selecting samples from the sample set, the sensitivity of the model to a specific batch or training-sample order can be reduced, avoiding overfitting.
Fig. 4 shows a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device may comprise a processor 401 and a memory 402 in which computer program instructions are stored.
In particular, the processor 401 described above may include a central processing unit (Central Processing Unit, CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 402 may include mass storage for data or instructions. By way of example, and not limitation, memory 402 may comprise a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, magnetic tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the foregoing. In some examples, memory 402 may include removable or non-removable (or fixed) media, or memory 402 may be a non-volatile solid state memory. In some embodiments, the memory 402 may be internal or external to the electronic device.
In some examples, memory 402 may be Read Only Memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
Memory 402 may include Read Only Memory (ROM), random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described in the tab defect detection methods provided by embodiments of the present application.
The processor 401 reads and executes the computer program instructions stored in the memory 402 to implement the tab defect detection method in the embodiment shown in fig. 1, and achieves the corresponding technical effects achieved by executing the method/steps in the embodiment shown in fig. 1, which are not described herein for brevity.
In addition, embodiments of the present application may be implemented by providing a computer storage medium. The computer storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the tab defect detection methods of the above embodiments.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the embodiments, and are intended to be included within the scope of the claims and description. In particular, the technical features mentioned in the respective embodiments may be combined in any manner as long as there is no structural conflict. The present application is not limited to the specific embodiments disclosed herein, but encompasses all technical solutions falling within the scope of the claims.
Claims (12)
1. A tab defect detection method, characterized by comprising the following steps:
acquiring an image of a tab to be detected;
inputting the tab image to be detected into an image classification model to obtain a detection result of the tab image to be detected, wherein the image classification model is obtained by training a sample set, and the sample set comprises tab images marked as a plurality of classification labels;
The plurality of classification labels comprise normal, uneven lighting, jitter, too dark, overexposure, blurring, oversized tail, prism smudge and left superview;
the detection result is at least one of the plurality of classification labels.
2. The tab defect detection method according to claim 1, wherein before the inputting the tab image to be detected into the image classification model to obtain the detection result of the tab image to be detected, the method further comprises:
acquiring a plurality of first tab images, wherein the plurality of first tab images are marked as a plurality of classification labels;
performing image processing matched with a target classification label on a first tab image in the target classification label to obtain a second tab image belonging to the target classification label, wherein the target classification label is any classification label in the plurality of classification labels;
adding the second tab image to a sample set, and labeling the second tab image as the target classification label;
and training the initial network model by adopting the sample set to obtain an image classification model.
3. The tab defect detection method according to claim 2, wherein the performing image processing on the first tab image in the target classification tag to match the target classification tag to obtain the second tab image belonging to the target classification tag includes:
And downsampling a third tab image in the target classification label to obtain a second tab image belonging to the target classification label, wherein the third tab image is any one of the first tab images in the target classification label.
4. The tab defect detection method of claim 3, wherein the downsampling the third tab image in the target classification label to obtain a second tab image belonging to the target classification label comprises:
downsampling the third tab image in the target classification label to obtain a sampled image;
randomly cropping the sampled image to obtain a sub-image;
filling the sub-image to obtain a first image, wherein the size of the first image is the same as that of the sampled image;
and performing processing matched with the target classification label on the first image to obtain a second tab image belonging to the target classification label.
5. The tab defect detection method of claim 4, wherein the processing the first image to match the target classification label to obtain a second tab image belonging to the target classification label comprises:
And if the target classification label is a first classification label, adjusting the local brightness of the first image to obtain a second tab image belonging to the first classification label, wherein the first classification label is used for indicating uneven lighting.
6. The tab defect detection method of claim 4, wherein the processing the first image to match the target classification label to obtain the second tab image comprises:
and if the target classification label is a second classification label, performing Gaussian smoothing on the first image to obtain a second tab image belonging to the second classification label, wherein the second classification label is used for indicating excessive darkness, overexposure or blurring.
7. The tab defect detection method of claim 4, wherein the processing the first image to match the target classification label to obtain the second tab image comprises:
and if the target classification label is a third classification label, adding Gaussian noise to the first image to obtain a second tab image belonging to the third classification label, wherein the third classification label is used for indicating prism smudge.
8. The tab defect detection method of claim 2, wherein the initial network model comprises a first optimizer and a second optimizer, the first optimizer employing an adaptive moment estimation algorithm, the second optimizer employing a stochastic gradient descent algorithm;
in the process of training the initial network model by adopting the sample set, if the training times are smaller than or equal to a first time threshold value, the initial network model adopts the first optimizer;
and if the training times are greater than the first time threshold, the initial network model adopts the second optimizer.
9. The method for detecting a tab defect according to claim 2, wherein training the initial network model by using the sample set to obtain an image classification model comprises:
in each round of training of the initial network model, a preset number of second tab images are randomly selected from the sample set to train the initial network model, so as to obtain the image classification model, wherein the preset number is the product of the number of second tab images in the sample set and a preset value.
10. The tab defect detection method according to any one of claims 1 to 9, wherein the placement directions of the tab head and tail in each tab image in the sample set are the same.
11. A tab defect detection device, characterized by comprising:
the first acquisition module is used for acquiring an image of the tab to be detected;
the second acquisition module is used for inputting the tab image to be detected into an image classification model to obtain a detection result of the tab image to be detected, wherein the image classification model is obtained by training with a sample set, the sample set comprises tab images labeled with a plurality of classification labels, and the detection result comprises at least one classification indicated by the classification labels.
12. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which, when executed by the processor, implement the steps of the tab defect detection method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410166156.0A CN117710376B (en) | 2024-02-05 | Tab defect detection method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117710376A true CN117710376A (en) | 2024-03-15 |
CN117710376B CN117710376B (en) | 2024-06-07 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109270071A (en) * | 2018-07-23 | 2019-01-25 | 广州超音速自动化科技股份有限公司 | Coating method for detecting abnormality and tab welding detection system on a kind of tab |
CN109472284A (en) * | 2018-09-18 | 2019-03-15 | 浙江大学 | A kind of battery core defect classification method based on zero sample learning of unbiased insertion |
CN110135521A (en) * | 2019-05-28 | 2019-08-16 | 陕西何止网络科技有限公司 | Pole-piece pole-ear defects detection model, detection method and system based on convolutional neural networks |
CN112241699A (en) * | 2020-10-13 | 2021-01-19 | 无锡先导智能装备股份有限公司 | Object defect category identification method and device, computer equipment and storage medium |
CN114022479A (en) * | 2022-01-05 | 2022-02-08 | 高视科技(苏州)有限公司 | Battery tab appearance defect detection method |
CN115294009A (en) * | 2021-12-17 | 2022-11-04 | 中科芯集成电路有限公司 | Method and equipment for detecting welding defects of battery tabs based on machine learning and storage medium |
EP4266246A1 (en) * | 2022-04-22 | 2023-10-25 | Imec VZW | Automated defect classification and detection |
WO2024021081A1 (en) * | 2022-07-29 | 2024-02-01 | 宁德时代新能源科技股份有限公司 | Method and apparatus for detecting defect on surface of product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9524558B2 (en) | Method, system and software module for foreground extraction | |
CN109086785B (en) | Training method and device for image calibration model | |
EP3150692B1 (en) | Cell evaluation device, method, and program | |
CN111797976A (en) | Neural network training method, image recognition method, device, equipment and medium | |
CN110135302B (en) | Method, device, equipment and storage medium for training lane line recognition model | |
CN109740553B (en) | Image semantic segmentation data screening method and system based on recognition | |
CN110059700A (en) | The recognition methods of image moire fringes, device, computer equipment and storage medium | |
CN109740589A (en) | Asynchronous object ROI detection method and system in video mode | |
US20190311492A1 (en) | Image foreground detection apparatus and method and electronic device | |
CN116485779B (en) | Adaptive wafer defect detection method and device, electronic equipment and storage medium | |
CN103729828A (en) | Video rain removing method | |
CN113449730A (en) | Image processing method, system, automatic walking device and readable storage medium | |
CN117094975A (en) | Method and device for detecting surface defects of steel and electronic equipment | |
CN113989785A (en) | Driving scene classification method, device, equipment and storage medium | |
US20030156759A1 (en) | Background-foreground segmentation using probability models that can provide pixel dependency and incremental training | |
CN111971951B (en) | Arithmetic device, arithmetic method, removable medium, and authentication system | |
CN113223614A (en) | Chromosome karyotype analysis method, system, terminal device and storage medium | |
CN110555344B (en) | Lane line recognition method, lane line recognition device, electronic device, and storage medium | |
CN116363064A (en) | Defect identification method and device integrating target detection model and image segmentation model | |
CN113255766B (en) | Image classification method, device, equipment and storage medium | |
CN115546736A (en) | River channel sand collection monitoring processing method and system based on image collection | |
CN114998172A (en) | Image processing method and related system | |
Ishida et al. | Shadow detection by three shadow models with features robust to illumination changes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |