WO2023231380A1 - Pole piece defect identification and model training method, device, and electronic equipment - Google Patents

Pole piece defect identification and model training method, device, and electronic equipment

Info

Publication number
WO2023231380A1
WO2023231380A1 (PCT/CN2022/140037)
Authority
WO
WIPO (PCT)
Prior art keywords
pole piece
model
image
defect
original
Prior art date
Application number
PCT/CN2022/140037
Other languages
English (en)
French (fr)
Inventor
李明亮
曾苏珊
杜兵
冯英俊
Original Assignee
广东利元亨智能装备股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东利元亨智能装备股份有限公司 filed Critical 广东利元亨智能装备股份有限公司
Publication of WO2023231380A1 publication Critical patent/WO2023231380A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E60/00Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E60/10Energy storage using batteries

Definitions

  • the present application relates to the field of model training, specifically, to a pole piece defect identification and model training method, device and electronic equipment.
  • the detection of defects in die-cut materials usually involves enhancing and filtering the pole piece image, then binarizing the pole piece image, and using a specific algorithm to identify defective products of the die-cut material.
  • although this method can identify defective die-cut products, it requires many processing steps, each involving substantial manual configuration, which results in low detection efficiency.
  • the purpose of embodiments of the present application is to provide a pole piece defect identification and model training method, device, electronic equipment and readable storage medium. It can improve the detection efficiency of die-cut materials.
  • embodiments of the present application provide a method for identifying pole piece defects, which includes: inputting an original pole piece image into a trained pole piece defect identification model and obtaining the identification result output by the model,
  • where the identification result includes a defect image; the pole piece defect identification model includes a region sub-model and a defect identification sub-model; the region sub-model is used to extract the pole piece region from the original pole piece image to obtain a pole piece region image; and the defect identification sub-model is used to perform defect extraction on the pole piece region image to obtain a defect image.
  • in the above implementation, the original pole piece image is input into the pole piece defect identification model, and the trained model recognizes and processes the image to produce an identification result for the original image; based on this result it can be determined whether the pole piece corresponding to the original image has defects. Because the model processes images much faster than a conventional image processing device, processing the original pole piece image with the model improves the efficiency of pole piece defect identification.
  • in combination with the first aspect, the embodiment of the present application provides a first possible implementation of the first aspect, wherein after the identification result output by the pole piece defect identification model is obtained, the method further includes: extracting the defect area in the defect image; and judging, according to the defect attributes of the defect area, whether the pole piece corresponding to the defect image is a defective pole piece.
  • in practical defect identification, some smaller defects can be regarded as negligible, and the presence of such small defects in the defect image does not by itself make the corresponding pole piece a defective pole piece. Therefore, after the original pole piece image is identified by the defect identification model and the defect image is output, the defect area is further judged according to its attributes to determine whether the pole piece is defective, which improves the accuracy of defective pole piece identification.
  • embodiments of the present application also provide a pole piece defect identification model training method, which includes: inputting multiple original pole piece images annotated with feature areas into a pre-training model for pre-training to obtain a pre-trained region sub-model; using the pre-trained region sub-model to perform region feature extraction on the multiple original pole piece images to obtain pole piece region images; and inputting images to be identified into a model to be trained for training to obtain a trained pole piece defect identification sub-model, where an image to be identified is a pole piece region image after defect annotation.
  • the regional sub-model is trained through multiple original pole piece images marked with characteristic areas, and then the pole piece defect recognition sub-model is trained through the pole piece area images marked with defects.
  • pole piece area extraction and defect extraction are performed respectively.
  • Each sub-model is obtained by training on a specific feature and therefore extracts that feature with high accuracy;
  • using separate sub-models to extract different regions improves the accuracy of the pole piece defect model.
  • in combination with the second aspect, the embodiment of the present application provides a first possible implementation of the second aspect, wherein inputting multiple original pole piece images with feature areas to be trained into a pre-training model for pre-training to obtain a pre-trained region sub-model includes: obtaining multiple target images, each obtained by annotating the feature area of an original pole piece image; and inputting the multiple original pole piece images and the multiple target images into the pre-training model for pre-training to obtain the pre-trained region sub-model; where the pre-training model is obtained by building a deep learning network in Python under the TensorFlow framework, and the region sub-model is used to extract the pole piece area from the original pole piece image.
  • in the above implementation, the target images and the original pole piece images are input into the pre-training model for pre-training, so that the pre-training model is trained on them to form a region sub-model for identifying the pole piece area. The region sub-model obtained by training can extract the pole piece area from the original pole piece image, ensuring that only the pole piece area is examined when identifying pole piece defects. This narrows the scope of identification and prevents other areas from influencing the identification result, which improves both the identification efficiency and the identification accuracy of the pole piece defect identification model.
  • in combination with the first possible implementation of the second aspect, the embodiment of the present application provides a second possible implementation of the second aspect, wherein acquiring multiple target images includes: generating a JSON file from the multiple original pole piece images, the JSON file including multiple annotated original pole piece images obtained by annotating the feature areas of the multiple original pole piece images; parsing the JSON file to obtain the multiple annotated original pole piece images; binarizing the annotated original pole piece images to obtain binary images; and processing the binary images with OpenCV to obtain the target images.
  • a JSON file is generated based on the original image of the pole piece. Since JSON is a lightweight data exchange format, the JSON file has the advantages of convenient transmission and conversion. By generating a JSON file from the original image of the pole piece, the transmission of the original image of the pole piece is facilitated, and the training efficiency of the regional sub-model is improved.
  • in addition, the annotated original images are processed through binarization and OpenCV.
  • Binarization removes the multi-level pixel values from the annotated original images, simplifying them and facilitating subsequent processing and transmission, and OpenCV further processes the binary images to improve the clarity and accuracy of the target images.
  • in combination with the second possible implementation of the second aspect, the embodiment of the present application provides a third possible implementation of the second aspect, wherein using the pre-trained region sub-model to perform region feature extraction on the multiple original pole piece images to obtain pole piece region images includes: inputting the multiple original pole piece images into the region sub-model; and performing region feature extraction on the original pole piece images through the region sub-model to obtain the pole piece region images.
  • in the above implementation, when training the pole piece defect identification sub-model, the trained region sub-model is used directly to extract the pole piece region images, which reduces the work of pole piece region annotation and improves the training efficiency of the pole piece defect identification sub-model.
  • embodiments of the present application also provide a pole piece defect identification device, including an identification module used to input an original pole piece image into a trained pole piece defect identification model and obtain the identification result output by the model,
  • where the identification result includes a region image and a defect image; the pole piece defect identification model includes a region sub-model and a defect identification sub-model; the region sub-model is used to extract the pole piece area from the original pole piece image to obtain a pole piece region image; and the defect identification sub-model is used to perform defect extraction on the pole piece region image to obtain a defect image.
  • embodiments of the present application also provide a pole piece defect identification model training device, including: a pre-training module, used to input multiple original pole piece images with feature areas to be trained into the pre-training model for pre-training to obtain a pre-trained region sub-model; an extraction module, used to perform region feature extraction on the multiple original pole piece images with the pre-trained region sub-model to obtain pole piece region images; and an annotation module, used to perform model training on images to be identified to obtain a trained pole piece defect identification sub-model, where an image to be identified is a pole piece region image after defect annotation.
  • embodiments of the present application further provide an electronic device, including a processor and a memory; the memory stores machine-readable instructions executable by the processor, and when the electronic device runs and the machine-readable instructions are executed by the processor, the steps of the method in the above-mentioned first aspect or any possible implementation of the first aspect are performed.
  • embodiments of the present application further provide a computer-readable storage medium storing a computer program; when the computer program is run by a processor, it performs the steps of the method in the above-mentioned first aspect or any possible implementation of the first aspect, or of the second aspect or any possible implementation of the second aspect.
  • Figure 1 is a flow chart of a pole piece defect identification method provided by an embodiment of the present application.
  • Figure 2 is a flow chart of the pole piece defect recognition model training method provided by the embodiment of the present application.
  • FIG. 3 is a schematic diagram of the functional modules of the pole piece defect identification device provided by the embodiment of the present application.
  • Figure 4 is a schematic diagram of the functional modules of the pole piece defect recognition model training device provided by the embodiment of the present application.
  • FIG. 5 is a block diagram of an electronic device provided by an embodiment of the present application.
  • because die-cut materials play a pivotal role in various industries, their production volume is gradually increasing, and their quality affects the quality of the entire piece of equipment. Therefore, how to improve the efficiency and accuracy of defect identification of die-cut materials has become an urgent problem to be solved in the detection of die-cut materials.
  • the inventor of the present application found that an image recognition model processes images much faster than an ordinary image processing device: an ordinary image processing device takes about 400 ms to process an image, while an image recognition model takes about 100 ms. In view of this, the inventor of the present application proposes a pole piece defect identification method in which the original pole piece image is input into a trained pole piece defect identification model and the pole piece defects are identified from the recognition result, which greatly improves the efficiency of pole piece defect identification.
  • the pole piece defect identification method disclosed in the embodiments of the present application can be used for, but is not limited to, die-cut material detection, chip detection, electronic screen detection, and glass material detection.
  • Through the pole piece defect identification model training method of the present application, different defect identification models can be trained to identify defects in various objects to be detected.
  • to facilitate understanding of this embodiment, the pole piece defect identification method provided by the embodiments of this application is first introduced in detail.
  • Figure 1 is a flow chart of a pole piece defect identification method provided by an embodiment of the present application. The specific process shown in Figure 1 will be elaborated below.
  • Step 201 Input the original image of the pole piece into the trained pole piece defect recognition model, and obtain the recognition result output by the pole piece defect recognition model.
  • the pole piece defect recognition model includes a region sub-model and a defect identification sub-model; this region sub-model is used to extract the pole piece area in the original image of the pole piece to obtain the pole piece region image; this defect identification sub-model is used to compare the pole piece Defect extraction is performed on the regional image to obtain the defect image.
  • it can be understood that the region sub-model and the defect identification sub-model may be two parts of the pole piece defect identification model.
  • the pole piece defect identification model is a collective name for the region sub-model and the defect identification sub-model, and the region sub-model and the defect identification sub-model can be two independent models.
  • the original image of the pole piece here is the image of the pole piece collected by the image acquisition device.
  • the original image of the pole piece may include the image of the pole piece, images of other items in the area where the pole piece is located, and the image of the area itself.
  • the above recognition results include defect images and pole piece area images.
  • after the pole piece defect identification model performs defect identification on the original pole piece image, for a pole piece without defects the model can output the pole piece region image, or it can proceed directly to identifying the next original pole piece image without outputting a recognition result; for a defective pole piece, the model can output only the defect image, or it can output both the defect image and the image of the non-defective area in the pole piece region image.
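  • The short Python sketch below illustrates one way such a two-stage pipeline could be wired together. It is illustrative only: the saved-model paths, input size, thresholds, and the assumption that both sub-models are Keras models producing per-pixel masks are not taken from the document.

```python
# Minimal two-stage inference sketch (illustrative only).
# Assumptions: both sub-models are saved Keras models; the region sub-model
# returns a per-pixel pole-piece mask and the defect sub-model returns a
# per-pixel defect mask. File names, sizes and thresholds are hypothetical.
import cv2
import numpy as np
import tensorflow as tf

region_model = tf.keras.models.load_model("region_submodel.h5")   # hypothetical path
defect_model = tf.keras.models.load_model("defect_submodel.h5")   # hypothetical path

def identify_defects(original_image: np.ndarray):
    """Run the region sub-model, keep the pole piece area, then run the defect sub-model."""
    inp = cv2.resize(original_image, (512, 512)).astype(np.float32) / 255.0
    region_mask = region_model.predict(inp[None, ...])[0, ..., 0] > 0.5

    # Keep only the pole piece area; everything else is zeroed out.
    region_image = np.where(region_mask[..., None], inp, 0.0)

    defect_mask = defect_model.predict(region_image[None, ...])[0, ..., 0] > 0.5
    if not defect_mask.any():
        return region_image, None          # no defect: only the region image is returned
    return region_image, defect_mask       # defect found: return the defect image/mask
```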
  • in the above implementation, the original pole piece image is input into the pole piece defect identification model, and the trained model recognizes and processes the image to produce an identification result for the original image; based on this result it can be determined whether the pole piece corresponding to the original image has defects. Because the model processes images much faster than a conventional image processing device, processing the original pole piece image with the model improves the efficiency of pole piece defect identification.
  • the method further includes: extracting a defective area in the defective image, and determining whether the pole piece corresponding to the defective image is a defective pole piece based on the defective attribute of the defective area.
  • the defective area here is the area where the defect belongs in the defect image, and one or more defective areas can exist in a defect image.
  • the defect attribute can be the area of the defect area, the length of the defect area, the width of the defect area, the shape of the defect area, etc.
  • in the actual judgment of pole piece defects, some defect areas in a defect image are small and can be regarded as negligible defects, so the defect image is usually judged further to ensure the accuracy of pole piece defect identification. For example, judging whether the pole piece corresponding to the defect image is a defective pole piece according to the defect attributes of the defect area may include: comparing the area of the defect area with a preset area, and if the area of the defect area is greater than the preset area, the pole piece corresponding to the defect image is a defective pole piece;
  • or comparing the length of the defect area with a preset length, and if the length of a defect area is greater than the preset length, the pole piece corresponding to the defect image is a defective pole piece; or comparing the width of the defect area with a preset width, and if the width of the defect area is greater than the preset width, the pole piece corresponding to the defect image is a defective pole piece. It can be understood that the above methods of judging whether the pole piece corresponding to the defect image is defective according to the defect attributes of the defect area are only exemplary; the specific judgment method can be adjusted according to the actual situation and is not specifically limited in this application.
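  • A hedged sketch of this attribute check is given below. The thresholds, and the use of OpenCV connected components to isolate each defect region, are assumptions for illustration, not values or steps taken from the document.

```python
# Sketch of the defect-attribute judgment described above: a pole piece is only
# treated as defective if some defect region exceeds preset thresholds.
import cv2
import numpy as np

PRESET_AREA = 50        # pixels, hypothetical threshold
PRESET_LENGTH = 20      # pixels, hypothetical threshold
PRESET_WIDTH = 20       # pixels, hypothetical threshold

def is_defective(defect_mask: np.ndarray) -> bool:
    """Extract each defect region and compare its area/length/width with the presets."""
    mask = (defect_mask > 0).astype(np.uint8)
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    for i in range(1, num):                      # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        length = stats[i, cv2.CC_STAT_HEIGHT]
        width = stats[i, cv2.CC_STAT_WIDTH]
        if area > PRESET_AREA or length > PRESET_LENGTH or width > PRESET_WIDTH:
            return True                          # at least one defect region is too large
    return False                                 # all defects are negligible
```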
  • in practical defect identification, some smaller defects can be regarded as negligible, and the presence of such small defects in the defect image does not by itself make the corresponding pole piece a defective pole piece. Therefore, after the original pole piece image is identified by the defect identification model and the defect image is output, the defect area is further judged according to its attributes to determine whether the pole piece is defective, which improves the accuracy of defective pole piece identification.
  • Figure 2 is a flow chart of a pole piece defect recognition model training method provided by an embodiment of the present application. The specific process shown in Figure 2 will be elaborated below.
  • Step 301: Input multiple original pole piece images annotated with feature areas into the pre-training model for pre-training, and obtain a pre-trained region sub-model.
  • the pre-training model here can be an image classification model, an image processing model, etc.,
  • for example, the LeNet model, AlexNet model, or VGG model.
  • the method further includes: labeling the characteristic region of the original pole piece image.
  • the feature area annotation of the original pole piece image can be done manually or by using image processing software.
  • the specific labeling method can be selected according to the actual situation, and there are no specific restrictions in this application.
  • Step 302 Use the pre-trained regional sub-model to extract regional features from multiple pole piece original images to obtain pole piece region images.
  • it can be understood that by inputting multiple original pole piece images into the pre-trained region sub-model, which processes them, a pole piece region image containing only the pole piece region can be extracted from each original image,
  • or the pole piece region can instead be marked directly in the original pole piece image.
  • Step 303 input the image to be recognized into the model to be trained for training, and obtain the trained pole piece defect recognition sub-model.
  • the image to be identified here is the image after defect annotation on the pole piece area image.
  • the model to be trained may be an image classification model, an image processing model, etc.
  • for example, the LeNet model, AlexNet model, or VGG model.
  • the model to be trained and the pre-trained model may be the same model or different models.
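  • As an illustration of step 303, a sketch of one possible "model to be trained" follows. The document names classification backbones (LeNet, AlexNet, VGG) as candidates, while the defect identification sub-model is described as outputting a defect image; the VGG-style encoder with a per-pixel defect head below is one way to reconcile the two and is purely an assumption, not the patented architecture.

```python
# Illustrative stand-in for the defect identification sub-model (step 303).
# Architecture, sizes and training data names are hypothetical.
import tensorflow as tf

def build_defect_submodel(input_shape=(256, 256, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)                            # VGG-style conv block
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D()(x)                            # back to input resolution
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel defect map
    return tf.keras.Model(inputs, outputs)

defect_model = build_defect_submodel()
defect_model.compile(optimizer="adam", loss="binary_crossentropy")
# region_images: pole piece region images; defect_masks: defect-annotated labels (hypothetical)
# defect_model.fit(region_images, defect_masks, batch_size=8, epochs=20)
```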
  • if the region sub-model processes the original pole piece images by marking the pole piece region in the original image, step 302 may be omitted for the pole piece defect identification model training method.
  • in one embodiment, the pole piece defect identification model training method also includes: inputting multiple original pole piece images annotated with feature areas into the pre-training model for pre-training to obtain a pre-trained region sub-model; and inputting original pole piece images annotated with both defects and feature areas into the model to be trained for training to obtain the trained pole piece defect identification sub-model.
  • the above-mentioned steps of obtaining the pre-trained regional sub-model and the steps of obtaining the trained pole piece defect recognition sub-model can be performed simultaneously.
  • the regional sub-model is trained through multiple original pole piece images marked with characteristic areas, and then the pole piece defect recognition sub-model is trained through the pole piece area images marked with defects.
  • pole piece area extraction and defect extraction are performed respectively.
  • Each sub-model is obtained by training on a specific feature and therefore extracts that feature with high accuracy; using different sub-models to extract different regions improves the accuracy of the pole piece defect model.
  • step 301 includes: acquiring multiple target images; inputting multiple original pole-piece images and multiple target images into a pre-training model for pre-training, and obtaining a pre-trained regional sub-model.
  • the pre-training model is obtained by building a deep learning network in Python under the TensorFlow framework, and the region sub-model is used to extract the pole piece area from the original pole piece image.
  • the target image here is obtained by labeling the feature area of the original image of the pole piece.
  • it can be understood that the original pole piece images can be input as the source images of the pre-training model, and the target images as the corresponding label (marked) images of the pre-training model.
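  • A minimal pre-training sketch under these assumptions is shown below: a small deep learning network built in Python under the TensorFlow framework, fed with original pole piece images as source images and the annotated target images as labels. The tiny encoder-decoder, image size, batch size, and epoch count are all hypothetical choices for illustration.

```python
# Hedged sketch of step 301: pre-training a region sub-model on
# (original image, target annotation image) pairs with TensorFlow.
import tensorflow as tf

def build_region_submodel(input_shape=(512, 512, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling2D()(x)
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # pole piece region mask
    return tf.keras.Model(inputs, outputs)

def pretrain_region_submodel(original_images, target_images):
    """original_images: (N, 512, 512, 3) source images; target_images: (N, 512, 512, 1) labels."""
    dataset = tf.data.Dataset.from_tensor_slices((original_images, target_images))
    dataset = dataset.shuffle(128).batch(8)
    model = build_region_submodel()
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(dataset, epochs=20)
    return model
```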
  • in the above implementation, the target images and the original pole piece images are input into the pre-training model for pre-training, so that the pre-training model is trained on them to form a region sub-model for identifying the pole piece area. The region sub-model obtained by training can extract the pole piece area from the original pole piece image, ensuring that only the pole piece area is examined when identifying pole piece defects. This narrows the scope of identification and prevents other areas from influencing the identification result, which improves both the identification efficiency and the identification accuracy of the pole piece defect identification model.
  • in one possible implementation, obtaining multiple target images includes: generating a JSON file from the multiple original pole piece images; parsing the JSON file to obtain the multiple annotated original pole piece images;
  • binarizing the annotated original pole piece images to obtain binary images; and processing the binary images with OpenCV to obtain the target images.
  • the JSON file includes multiple annotated original pole-piece images obtained by labeling feature areas on multiple original pole-piece images.
  • the JSON file can be parsed through manual parsing, Gson parsing, FastJson parsing and other parsing methods.
  • it can be understood that generating a JSON file from multiple original pole piece images includes: annotating the feature areas of the multiple original pole piece images, saving the annotated original pole piece images in JSON format,
  • and packaging one or more of the JSON-format annotated original pole piece images into a JSON file.
  • the above feature areas can be annotated using image annotation tools, for example Labelme, VoTT, LabelImg, or VATIC.
  • OpenCV here can perform scaling, cropping, edge filling, etc. on binary images.
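  • The sketch below walks through this target-image pipeline: parse an annotation JSON file, rasterize the marked feature region, binarize it, and resize it with OpenCV. The JSON field names follow the Labelme convention and are an assumption about the actual file layout, as are the output size and interpolation choice.

```python
# Hedged sketch: turn an annotated original image (Labelme-style JSON) into a
# binarized, OpenCV-processed target image for region sub-model training.
import json
import cv2
import numpy as np

def target_image_from_json(json_path: str, out_size=(512, 512)) -> np.ndarray:
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)

    h, w = ann["imageHeight"], ann["imageWidth"]          # Labelme-style fields (assumed)
    mask = np.zeros((h, w), dtype=np.uint8)
    for shape in ann["shapes"]:                           # each annotated feature region
        pts = np.array(shape["points"], dtype=np.int32)
        cv2.fillPoly(mask, [pts], 255)

    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)             # binarization
    target = cv2.resize(binary, out_size, interpolation=cv2.INTER_NEAREST)   # OpenCV scaling
    return target
```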
  • a JSON file is generated based on the original image of the pole piece. Since JSON is a lightweight data exchange format, the JSON file has the advantages of convenient transmission and conversion. By generating a JSON file from the original image of the pole piece, the transmission of the original image of the pole piece is facilitated, and the training efficiency of the regional sub-model is improved.
  • in addition, the annotated original images are processed through binarization and OpenCV:
  • binarization removes the multi-level pixel values from the annotated original images, simplifying them and facilitating subsequent processing and transmission,
  • and OpenCV further processes the binary images to improve the clarity and accuracy of the target images.
  • step 302 includes: inputting multiple original pole piece images into a regional sub-model; performing regional feature extraction on the original pole piece images through the regional sub-model to obtain a pole piece regional image.
  • the original pole piece image here may be the original pole piece image when training the regional sub-model, or it may not be the original pole piece image when training the regional sub-model.
  • the pole piece area image here can be a separate image containing only the pole piece area, or it can be an image after the pole piece area is marked on the original pole piece image.
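  • One possible realisation of this extraction step is sketched below: predict a pole piece mask with the trained region sub-model and crop the original image down to a stand-alone region image. The bounding-box crop is an assumption; as noted above, the region could instead simply be marked on the original image.

```python
# Illustrative sketch of step 302: use the trained region sub-model to obtain a
# pole piece region image from an original pole piece image.
import cv2
import numpy as np

def extract_region_image(region_model, original_image: np.ndarray) -> np.ndarray:
    inp = cv2.resize(original_image, (512, 512)).astype(np.float32) / 255.0
    mask = (region_model.predict(inp[None, ...])[0, ..., 0] > 0.5).astype(np.uint8)
    mask = cv2.resize(mask, (original_image.shape[1], original_image.shape[0]),
                      interpolation=cv2.INTER_NEAREST)
    x, y, w, h = cv2.boundingRect(mask)          # tightest box around the pole piece region
    return original_image[y:y + h, x:x + w]      # region image containing only the pole piece
```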
  • in the above implementation, when training the pole piece defect identification sub-model, the trained region sub-model is used directly to extract the pole piece region images, which reduces the work of pole piece region annotation and improves the training efficiency of the pole piece defect identification sub-model.
  • based on the same application concept, the embodiments of this application also provide a pole piece defect identification device corresponding to the pole piece defect identification method. Since the problem-solving principle of the device in the embodiments of this application is similar to that of the aforementioned pole piece defect identification method embodiment, the implementation of the device in this embodiment can refer to the description in the above method embodiment, and repeated details are not described again.
  • FIG. 3 is a schematic diagram of the functional modules of the pole piece defect identification device provided by the embodiment of the present application.
  • Each module in the pole piece defect identification device in this embodiment is used to perform each step in the above method embodiment.
  • the pole piece defect identification device includes an identification module 401; wherein,
  • the recognition module 401 is used to input the original image of the pole piece into the trained pole piece defect recognition model, and obtain the recognition result output by the pole piece defect recognition model.
  • where the identification result includes a defect image; the pole piece defect identification model includes a region sub-model and a defect identification sub-model; the region sub-model is used to extract the pole piece area from the original pole piece image to obtain a pole piece region image; and the defect identification sub-model is used to perform defect extraction on the pole piece region image to obtain a defect image.
  • in one possible implementation, the pole piece defect identification device further includes a judgment module, which is used to extract the defect area in the defect image and to judge, according to the defect attributes of the defect area, whether the pole piece corresponding to the defect image is a defective pole piece.
  • based on the same application concept, the embodiments of this application also provide a pole piece defect identification model training device corresponding to the pole piece defect identification model training method. Since the problem-solving principle of the device in the embodiments of this application is similar to that of the aforementioned model training method embodiment, the implementation of the device in this embodiment can refer to the description in the above method embodiments, and repeated details are not described again.
  • FIG. 4 is a schematic diagram of the functional modules of the pole piece defect recognition model training device provided by the embodiment of the present application.
  • Each module in the pole piece defect recognition model training device in this embodiment is used to perform each step in the above method embodiment.
  • the pole piece defect identification model training device includes a pre-training module 501, an extraction module 502, and an annotation module 503; wherein,
  • the pre-training module 501 is used to put multiple original images of pole pieces with feature areas to be trained into the pre-training model for pre-training, and obtain a pre-trained regional sub-model.
  • the extraction module 502 is used to extract regional features from multiple pole piece original images using pre-trained regional sub-models to obtain pole piece region images.
  • the labeling module 503 is used to perform model training on the image to be identified, and obtain the trained pole piece defect recognition sub-model.
  • the image to be identified is an image of the pole piece area image after defect labeling.
  • in one possible implementation, the pre-training module 501 is also used to: obtain multiple target images, each obtained by annotating the feature area of an original pole piece image; and input the multiple original pole piece images and the multiple target images
  • into the pre-training model for pre-training to obtain the pre-trained region sub-model; where the pre-training model is obtained by building a deep learning network in Python under the TensorFlow framework, and the region sub-model is used to extract the pole piece area from the original pole piece image.
  • in one possible implementation, the pre-training module 501 is specifically used to: generate a JSON file from the multiple original pole piece images,
  • the JSON file including multiple annotated original pole piece images obtained by annotating feature areas on the multiple original pole piece images; parse the JSON file to obtain the multiple annotated original pole piece images; binarize the annotated original pole piece images to obtain binary images; and process the binary images with OpenCV to obtain the target images.
  • the extraction module 502 is also used to: input multiple original pole piece images into a regional sub-model; and extract regional features from the original pole piece images through the regional sub-model to obtain a pole piece regional image.
  • the electronic device 100 may include a memory 111, a storage controller 112, a processor 113, a peripheral interface 114, an input/output unit 115, and a display unit 116.
  • FIG. 5 is only illustrative and does not limit the structure of the electronic device 100 .
  • electronic device 100 may also include more or fewer components than shown in FIG. 5 , or have a different configuration than shown in FIG. 5 .
  • the above-mentioned components of the memory 111, storage controller 112, processor 113, peripheral interface 114, input and output unit 115 and display unit 116 are directly or indirectly electrically connected to each other to realize data transmission or interaction.
  • these components may be electrically connected to each other through one or more communication buses or signal lines.
  • the above-mentioned processor 113 is used to execute executable modules stored in the memory.
  • the memory 111 can be, but is not limited to, random access memory (Random Access Memory, referred to as RAM), read-only memory (Read Only Memory, referred to as ROM), programmable read-only memory (Programmable Read-Only Memory, referred to as PROM) ), Erasable Programmable Read-Only Memory (EPROM for short), Electrically Erasable Programmable Read-Only Memory (EEPROM for short), etc.
  • the memory 111 here can be used to store information such as original pole piece images, pole piece region images, pole piece defect identification models, regional sub-models, and defect identification sub-models.
  • the above-mentioned processor 113 may be an integrated circuit chip with signal processing capabilities.
  • the above-mentioned processor 113 can be a general-purpose processor, including a central processing unit (Central Processing Unit, referred to as CPU), a network processor (Network Processor, referred to as NP), etc.; it can also be a digital signal processor (Digital Signal Processor, referred to as DSP) ), Application Specific Integrated Circuit (ASIC for short), Field Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • it can be understood that the processor 113 can be used to pre-train the pre-training model, to train the model to be trained, and to annotate feature areas on the original pole piece images and defects on the pole piece region images, etc.
  • peripheral interface 114 couples various input/output devices to the processor 113 and the memory 111 .
  • peripheral interface 114, processor 113, and memory controller 112 may be implemented in a single chip. In other instances, they can each be implemented on separate chips.
  • the peripheral interface 114 can be used to connect with an image acquisition device to input the original pole piece image collected by the image acquisition device into the pole piece defect identification model or regional sub-model.
  • the above-mentioned input/output unit 115 is used to allow the user to provide input data.
  • the input and output unit 115 may be, but is not limited to, a mouse, a keyboard, etc.
  • the above-mentioned display unit 116 provides an interactive interface (such as a user operation interface) between the electronic device 100 and the user or is used to display image data for the user's reference.
  • the display unit may be a liquid crystal display or a touch display; if it is a touch display, it can be a capacitive or resistive touch screen that supports single-point and multi-point touch operations, meaning that the touch display can sense touch operations generated simultaneously at one or more locations on the display and hand the sensed touch operations to the processor for calculation and processing.
  • it can be understood that the above-mentioned annotation of feature areas and defects on the original pole piece image can be performed manually: the user can perform feature area annotation or defect annotation by clicking, sliding, and other operations on the image displayed on the display unit 116.
  • the electronic device 100 in this embodiment can be used to perform each step in each method provided by the embodiment of this application.
  • embodiments of the present application also provide a computer-readable storage medium, which stores a computer program.
  • the computer program is run by a processor, the steps of each method described in the above-mentioned method embodiments are executed.
  • the computer program products of the methods provided by the embodiments of the present application include a computer-readable storage medium storing program code;
  • the instructions included in the program code can be used to execute the steps of the methods described in the above method embodiments; for details, please refer to the above method embodiments, which are not described again here.
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.
  • each functional module in each embodiment of the present application can be integrated together to form an independent part, each module can exist alone, or two or more modules can be integrated to form an independent part.
  • the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product;
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk and other media that can store program code.
  • relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply that any such actual relationship or order exists between these entities or operations.
  • the terms "comprises," "includes," or any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus.
  • an element defined by the statement "comprising" does not exclude the presence of additional identical elements in a process, method, article, or device that includes the stated element.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a pole piece defect identification and model training method, device, and electronic equipment, including: inputting an original pole piece image into a trained pole piece defect identification model and obtaining the identification result output by the pole piece defect identification model, the identification result including a defect image; where the pole piece defect identification model includes a region sub-model and a defect identification sub-model; the region sub-model is used to extract the pole piece region from the original pole piece image to obtain a pole piece region image; and the defect identification sub-model is used to perform defect extraction on the pole piece region image to obtain a defect image. In the embodiments of the present application, the original pole piece image is input into the pole piece defect identification model and defective pole pieces are identified by the model; because the model processes images far faster than conventional image processing methods, the efficiency of pole piece defect identification is improved.

Description

极片缺陷识别及模型训练方法、装置及电子设备 技术领域
本申请涉及模型训练领域,具体而言,涉及一种极片缺陷识别及模型训练方法、装置及电子设备。
背景技术
目前,对模切材料缺陷的检测通常是对极片图像进行增强和滤波后,再对极片图像进行二值化处理,并通过特定的算法进行实现对模切材料的不良产品进行识别。虽然,此种方法能够实现对模切材料的不良产品的识别。但是,该识别方法需要处理的步骤较多,且每个步骤人为参与设定较多,存在检测效率较低的问题。
发明内容
有鉴于此,本申请实施例的目的在于提供一种极片缺陷识别及模型训练方法、装置、电子设备及可读存储介质。能够提高模切材料的检测效率。
第一方面,本申请实施例提供了一种极片缺陷识别方法,包括:将极片原始图像输入到训练好的极片缺陷识别模型中,获得所述极片缺陷识别模型输出的识别结果,所述识别结果包括缺陷图像;其中,所述极片缺陷识别模型包括区域子模型和缺陷识别子模型;所述区域子模型用于提取所述极片原始图像中的极片区域,得到极片区域图像;所述缺陷识别子模型用于对所述极片区域图像进行缺陷提取,得到缺陷图像。
在上述实现过程中,通过将极片原始图像输入到极片缺陷识别模型中,通过训练好的模型对图像进行识别和处理,得到极片原始图像的识别结果,基于该结果可以判断出该极片原始图像对应的极片是否存在缺陷。由于模型处理图像的速度远远快于图像处理装置处理图像的速度,因此,通过模型处理极片原始图像提高了极片缺陷识别的效率。
结合第一方面,本申请实施例提供了第一方面的第一种可能的实施方式,其中:所述获得所述极片缺陷识别模型输出的识别结果之后,所述方法还包括:提取所述缺陷图像中缺陷区域;根据所述缺陷区域的缺陷属性判断所述缺陷图像对应的极片是否为缺陷极片。
在上述实现过程中,由于在实际的缺陷识别中,有些较小的缺陷可以认定为可忽略的缺陷,并不能因为缺陷图像中存在这些较小的缺陷而认定该缺陷图像对应的极片为缺陷极片。 因此,在通过缺陷识别模型进行极片原始图像识别后,输出缺陷图像,进一步根据缺陷区域属性对缺陷区域进行判断,以得到缺陷极片,提高了缺陷极片识别的准确性。
第二方面,本申请实施例还提供一种极片缺陷识别模型训练方法,包括:将多个标注有特征区域的极片原始图像输入预训练模型中进行预训练,获得预训练好的区域子模型;利用所述预训练好的区域子模型对多个所述极片原始图像进行区域特征提取,得到极片区域图像;将待识别图像输入待训练模型中进行训练,获得训练好的极片缺陷识别子模型,所述待识别图像为所述极片区域图像进行缺陷标注后的图像。
在上述实现过程中,通过多个标注有特征区域的极片原始图像进行区域子模型训练,再通过标注有缺陷的极片区域图像进行极片缺陷识别子模型的训练。通过将极片缺陷识别模型分为两个子模型,分别进行极片区域提取和缺陷提取,每个子模型都是通过特定的特性训练得到的,对于特定的特性提取具有较高的准确性,采用分模型提取不同区域提高了极片缺陷模型的准确性。
结合第二方面,本申请实施例提供了第二方面的第一种可能的实施方式,其中:所述将多个带有待训练特征区域的极片原始图像放入到预训练模型中进行预训练,获得预训练好的区域子模型,包括:获取多个目标图像,所述目标图像为对所述极片原始图像进行特征区域标注后得到的;将多个所述极片原始图像与多个所述目标图像输入到所述预训练模型中进行预训练,获得预训练好的区域子模型;其中,所述预训练模型为Python模型在TensorFlow框架下构建深度学习网络后得到的,所述区域子模型用于提取所述极片原始图像中的极片区域。
在上述实现过程中,通过将目标图像和极片原始图像输入到预训练模型中进行预训练,以通过该目标图像和极片原始图像对该预训练模型进行模型训练,以形成用于识别极片区域的区域子模型。通过训练得到该区域子模型可以对极片原始图像中的极片区域进行提取,以保证在进行极片缺陷识别时仅针对极片区域进行识别,减少了识别的范围,预防了其他区域识别对极片的识别结果的影响,在提高了极片缺陷识别模型的识别效率的同时还提高了识别的准确率。
结合第二方面的第一种可能的实施方式,本申请实施例提供了第二方面的第二种可能的实施方式,其中:所述获取多个目标图像,包括:根据多个所述极片原始图像生成JSON文件,所述JSON文件包括多个所述极片原始图像进行特征区域标注后得到的多个标注后的极片原始图像;对所述JSON文件进行解析后得到多个所述标注后的极片原始图像;将所述标注后的极片原始图像进行二值化处理,得到二值化图像;通过OpenCV对所述二值化图像进 行处理,得到目标图像。
在上述实现过程中,通过根据极片原始图像生成JSON文件,由于JSON是一种轻量级的数据交换格式,因此,JSON文件拥有方便传输、方便转换等优点。通过将极片原始图像生成JSON文件方便了极片原始图像的传输,提高了区域子模型训练效率。另外,通过二值化和OpenCV处理标注后的原始图像,二值化处理使得标注后的原始图像不再涉及像素的多级值,简化了标注后的原始图像,有利于图像的后续处理的传输。OpenCV通过对二值化图像进一步进行处理,提高了目标图像的清晰度和精确度。
结合第二方面的第二种可能的实施方式,本申请实施例提供了第二方面的第三种可能的实施方式,其中,所述利用所述预训练好的区域模型对多个所述极片原始图像进行区域特征提取,得到极片区域图像,包括:将多个所述极片原始图像输入到区域子模型;通过所述区域子模型对所述极片原始图像进行区域特征提取,得到极片区域图像。
在上述实现过程中,在进行极片缺陷识别子模型的训练时,直接利用训练好的区域子模型进行极片区域图像的提取,减少了极片区域标注的工作,提高了极片缺陷识别子模型的训练效率。
第三方面,本申请实施例还提供一种极片缺陷识别装置,包括:识别模块:用于将极片原始图像输入到训练好的极片缺陷识别模型中,获得所述极片缺陷识别模型输出的识别结果,所述识别结果包括区域图像和缺陷图像;其中,所述极片缺陷识别模型包括区域子模型和缺陷识别子模型;所述区域子模型用于提取所述极片原始图像中的极片区域,得到极片区域图像;所述缺陷识别子模型用于对所述极片区域图像进行缺陷提取,得到缺陷图像。
第四方面,本申请实施例还提供一种极片缺陷识别模型训练装置,包括:预训练模块:用于将多个带有待训练特征区域的极片原始图像放入到预训练模型中进行预训练,获得预训练好的区域子模型;提取模块:用于利用所述预训练好的区域子模型对多个所述极片原始图像进行区域特征提取,得到极片区域图像;标注模块:用于对待识别图像进行模型训练,获得训练好的极片缺陷识别子模型,所述待识别图像为所述极片区域图像进行缺陷标注后的图像。
第五方面,本申请实施例还提供一种电子设备,包括:处理器、存储器,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述机器可读指令被所述处理器执行时执行上述第一方面,或第一方面的任一种可能的实施方式中的方法的步骤。
第六方面,本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述第一方面,或第一方面的任一种、 第二方面,或第二方面的任一种可能的实施方式中的方法的步骤。
为使本申请的上述目的、特征和优点能更明显易懂,下文特举实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,应当理解,以下附图仅示出了本申请的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1为本申请实施例提供的极片缺陷识别方法的流程图;
图2为本申请实施例提供的极片缺陷识别模型训练方法的流程图;
图3为本申请实施例提供的极片缺陷识别装置的功能模块示意图;
图4为本申请实施例提供的极片缺陷识别模型训练装置的功能模块示意图;
图5为本申请实施例提供的电子设备的方框示意图。
具体实施方式
下面将结合本申请实施例中附图,对本申请实施例中的技术方案进行描述。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。同时,在本申请的描述中,术语“第一”、“第二”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
目前,随着智能化、工业化的快速发展,电子设备、电器设备、精密仪器、电子通讯等行业也随即进入迅速发展阶段。而模切材料广泛的应用于这些行业,并逐渐在各个行业占领着重要作用,其市场也得到快速发展。
由于模切材料在各个行业都起着举足轻重的作用,模切材料生产量逐渐增大,而模切材料的质量影响着整个设备的质量。因此,如何提高对模切材料的缺陷识别的效率和准确率成为模切材料检测中亟待解决的问题。
本申请发明人在对提升模切材料缺陷识别效率的过程中发现:图像识别模型对图像的处理速度远大于普通的图像处理装置对图像的处理速度。通常来说,普通图像处理装置处理一张图像的时间大约为400ms,而图像识别模型处理一张图像的时间大约为100ms。有鉴于此,本申请发明人提出一种极片缺陷识别方法,通过将极片原始图像输入到训练好的极片缺陷识 别模型中进行识别,根据识别结果对极片的缺陷进行识别,大大提高了极片缺陷识别的效率。
本申请实施例公开的极片缺陷识别方法可以但不限用于模切材料的检测、芯片的检测、电子屏幕的检测、玻璃材料的检测。通过本申请的极片缺陷识别模型训练方法对不同缺陷识别模型进行训练实现多种待检测物的缺陷识别。
为便于对本实施例进行理解,首先对本申请实施例提供的极片缺陷识别方法进行详细介绍。
请参阅图1,是本申请实施例提供的极片缺陷识别方法的流程图。下面将对图1所示的具体流程进行详细阐述。
步骤201,将极片原始图像输入到训练好的极片缺陷识别模型中,获得极片缺陷识别模型输出的识别结果。
其中,极片缺陷识别模型包括区域子模型和缺陷识别子模型;该区域子模型用于提取极片原始图像中的极片区域,得到极片区域图像;该缺陷识别子模型用于对极片区域图像进行缺陷提取,得到缺陷图像。
可以理解地,该区域子模型和缺陷识别子模型可以是极片缺陷识别模型中的两个的部分。在一种情况中,极片缺陷识别模型为区域子模型和缺陷识别子模型的统称,区域子模型和缺陷识别子模型可以是两个分别独立的模型。
这里的极片的原始图像为图像采集设备采集到的极片的图像,该极片原始图像中可以包括极片图像、极片所在区域的其他物品图像及区域本身的图像。
上述的识别结果包括缺陷图像、极片区域图像。
极片缺陷识别模型在对极片原始图像进行缺陷识别后,对于不存在缺陷的极片,该缺陷识别模型可以输出极片区域图像,也可以不输出识别结果直接进行下一极片原始图像的识别。对存在缺陷的极片,该缺陷识别模型可以仅输出缺陷图像,也可以输出缺陷图像和极片区域图像中非缺陷区域的图像。
在上述实现过程中,通过将极片原始图像输入到极片缺陷识别模型中,通过训练好的模型对图像进行识别和处理,得到极片原始图像的识别结果,基于该结果可以判断出该极片原始图像对应的极片是否存在缺陷。由于模型处理图像的速度远远快于图像处理装置处理图像的速度,因此,通过模型处理极片原始图像提高了极片缺陷识别的效率。
在一种可能的实现方式中,在步骤201之后,该方法还包括:提取缺陷图像中缺陷区域,根据缺陷区域的缺陷属性判断缺陷图像对应的极片是否为缺陷极片。
这里的缺陷区域为缺陷图像中缺陷所属区域,一个缺陷图像中可以存在一个或多个缺陷 区域。缺陷属性可以为缺陷区域的面积、缺陷区域的长度、缺陷区域的宽度、缺陷区域的形状等。
在实际的极片缺陷的判定过程中,有的缺陷图像中的缺陷区域较小,可以认定为可忽略缺陷。为了保证极片缺陷识别的准确性,通常会对缺陷图像做进一步的判断。示例性地,这里的根据缺陷区域的缺陷属性判断缺陷图像对应的极片是否为缺陷极片可以包括:比较缺陷区域的面积与预设面积,若缺陷区域的面积大于预设面积,则该缺陷图像对应的极片为缺陷极片;或,比较缺陷区域的长度与预设长度,若存在缺陷区域的长度大于预设长度,则该缺陷图像对应的极片为缺陷极片;或,比较缺陷区域的宽度与预设宽度,若缺陷区域的宽度大于预设宽度,则该缺陷图像对应的极片为缺陷极片。可以理解地,上述根据缺陷区域的缺陷属性判断缺陷图像对应的极片是否为缺陷极片的方法仅是示例性地,具体的判断方式可以根据实际情况进行调整,本申请不做具体限制。
在上述实现过程中,由于在实际的缺陷识别中,有些较小的缺陷可以认定为可忽略的缺陷,并不能因为缺陷图像中存在这些较小的缺陷而认定该缺陷图像对应的极片为缺陷极片。因此,在通过缺陷识别模型进行极片原始图像识别后,输出缺陷图像,进一步根据缺陷区域属性对缺陷区域进行判断,以得到缺陷极片,提高了缺陷极片识别的准确性。
请参阅图2,是本申请实施例提供的极片缺陷识别模型训练方法的流程图。下面将对图2所示的具体流程进行详细阐述。
步骤301,将多个标注有特征区域的极片原始图像输入预训练模型中进行预训练,获得预训练好的区域子模型。
这里的预训练模型可以是图像分类模型、图像处理模型等。例如,Le Net模型、Alex Net模型、VGG模型等。
可以理解地,在步骤301之前,该方法还包括:对极片原始图像进行特征区域标注。可选地,对极片原始图像进行特征区域标注可以通过人工进行标注、也可以通过图像处理软件进行标注。具体的标注方法可以根据实际情况选择,本申请不做具体限制。
步骤302,利用预训练好的区域子模型对多个极片原始图像进行区域特征提取,得到极片区域图像。
可以理解地,通过将多个极片原始图像输入到预训练好的区域子模型,该区域子模型对极片原始图像进行处理,可以从极片原始图像中提取出只包含极片区域的极片区域图像,也可以在极片原始图像中对极片区域进行标注。
步骤303,将待识别图像输入待训练模型中进行训练,获得训练好的极片缺陷识别子模 型。
这里的待识别图像为极片区域图像进行缺陷标注后的图像。
可以理解地,待训练模型可以是图像分类模型、图像处理模型等。例如,Le Net模型、Alex Net模型、VGG模型等。该待训练模型与预训练模型可以是同一个模型,也可以是不同模型。
若区域子模型的极片原始图像进行处理的方式为:在极片原始图像中对极片区域进行标注,则对于该极片缺陷识别模型训练方法,步骤302可以省略。
在一种实施例中,该极片缺陷识别模型训练方法还包括:将多个标注有特征区域的极片原始图像输入预训练模型中进行预训练,获得预训练好的区域子模型;将标注有缺陷和特征区域的极片原始图像输入待训练模型中进行训练,获得训练好的极片缺陷识别子模型。
在一些实施方式中,上述获得预训练好的区域子模型的步骤和获得训练好的极片缺陷识别子模型的步骤可以同时进行。
在上述实现过程中,通过多个标注有特征区域的极片原始图像进行区域子模型训练,再通过标注有缺陷的极片区域图像进行极片缺陷识别子模型的训练。通过将极片缺陷识别模型分为两个子模型,分别进行极片区域提取和缺陷提取,每个子模型都是通过特定的特性训练得到的,对于特定的特性提取具有较高的准确性,采用不同子模型提取不同区域提高了极片缺陷模型的准确性。
在一种可能的实现方式中,步骤301包括:获取多个目标图像;将多个极片原始图像与多个目标图像输入到预训练模型中进行预训练,获得预训练好的区域子模型。
其中,该预训练模型为Python模型在TensorFlow框架下构建深度学习网络后得到的,该区域子模型用于提取极片原始图像中的极片区域。
这里的目标图像为对极片原始图像进行特征区域标注后得到的。
可以理解地,上述极片原始图像可以输入到预训练模型对应的源图片下,目标图像可以输入到预训练模型对应的标记图片下。
在上述实现过程中,通过将目标图像和极片原始图像输入到预训练模型中进行预训练,以通过该目标图像和极片原始图像对该预训练模型进行模型训练,以形成用于识别极片区域的区域子模型。通过训练得到该区域子模型可以对极片原始图像中的极片区域进行提取,以保证在进行极片缺陷识别时仅针对极片区域进行识别,减少了识别的范围,预防了其他区域识别对极片的识别结果的影响,在提高了极片缺陷识别模型的识别效率的同时还提高了识别的准确率。
在一种可能的实现方式中,获取多个目标图像,包括:根据多个极片原始图像生成JSON文件;对JSON文件进行解析后得到多个标注后的极片原始图像;将标注后的极片原始图像进行二值化处理,得到二值化图像;通过OpenCV对二值化图像进行处理,得到目标图像。
其中,JSON文件包括多个极片原始图像进行特征区域标注后得到的多个标注后的极片原始图像。该对JSON文件进行解析可以通过手动解析、Gson解析、FastJson解析等解析方式进行解析。
可以理解地,根据多个极片原始图像生成JSON文件包括:将多个极片原始图像进行特征区域标注,将标注后的极片原始图像保存为JSON格式,将一个或多个JSON格式的标注后的极片原始图像打包形成JSON文件。上述特征区域的标注可以通过图像标注工具进行标注。例如,Labelme、VOTT、Labellmy、Vatic等。
这里的OpenCV可以对二值化图像进行缩放、裁剪、补边等处理。
在上述实现过程中,通过根据极片原始图像生成JSON文件,由于JSON是一种轻量级的数据交换格式,因此,JSON文件拥有方便传输、方便转换等优点。通过将极片原始图像生成JSON文件方便了极片原始图像的传输,提高了区域子模型训练效率。另外,通过二值化和OpenCV处理标注后的原始图像,二值化处理使得标注后的原始图像不再涉及像素的多级值,简化了标注后的原始图像,有利于图像的后续处理的传输。OpenCV通过对二值化图像进一步进行处理,以提高目标图像的清晰度和精确度。
在一种可能的实现方式中,步骤302包括:将多个极片原始图像输入到区域子模型;通过区域子模型对极片原始图像进行区域特征提取,得到极片区域图像。
可选地,这里的极片原始图像可以是进行区域子模型训练时的极片原始图像,也可以不是进行区域子模型训练时的极片原始图像。
这里的极片区域图像可以是仅包含极片区域的单独的图像,也可以是对极片原始图像进行极片区域标注后的图像。
在上述实现过程中,在进行极片缺陷识别子模型的训练时,直接利用训练好的区域子模型进行极片区域图像的提取,减少了极片区域标注的工作,提高了极片缺陷识别子模型的训练效率。
基于同一申请构思,本申请实施例中还提供了与极片缺陷识别方法对应的极片缺陷识别装置,由于本申请实施例中的装置解决问题的原理与前述的极片缺陷识别方法实施例相似,因此本实施例中的装置的实施可以参见上述方法的实施例中的描述,重复之处不再赘述。
请参阅图3,是本申请实施例提供的极片缺陷识别装置的功能模块示意图。本实施例中 的极片缺陷识别装置中的各个模块用于执行上述方法实施例中的各个步骤。极片缺陷识别装置包括识别模块401;其中,
识别模块401用于将极片原始图像输入到训练好的极片缺陷识别模型中,获得极片缺陷识别模型输出的识别结果,该识别结果包括缺陷图像;其中,极片缺陷识别模型包括区域子模型和缺陷识别子模型;区域子模型用于提取极片原始图像中的极片区域,得到极片区域图像;缺陷识别子模型用于对极片区域图像进行缺陷提取,得到缺陷图像。
一种可能的实施方式中,该极片缺陷识别装置还包括判读模块;其中,该判读模块用于:提取缺陷图像中缺陷区域;根据缺陷区域的缺陷属性判断缺陷图像对应的极片是否为缺陷极片。
基于同一申请构思,本申请实施例中还提供了与极片缺陷识别模型训练方法对应的极片缺陷识别模型训练装置,由于本申请实施例中的装置解决问题的原理与前述的极片缺陷识别模型训练方法实施例相似,因此本实施例中的装置的实施可以参见上述方法的实施例中的描述,重复之处不再赘述。
请参阅图4,是本申请实施例提供的极片缺陷识别模型训练装置的功能模块示意图。本实施例中的极片缺陷识别模型训练装置中的各个模块用于执行上述方法实施例中的各个步骤。极片缺陷识别模型训练装置包括预训练模块501、提取模块502、标注模块503;其中,
预训练模块501用于将多个带有待训练特征区域的极片原始图像放入到预训练模型中进行预训练,获得预训练好的区域子模型。
提取模块502用于利用预训练好的区域子模型对多个极片原始图像进行区域特征提取,得到极片区域图像。
标注模块503用于对待识别图像进行模型训练,获得训练好的极片缺陷识别子模型,该待识别图像为极片区域图像进行缺陷标注后的图像。
一种可能的实施方式中,预训练模块501,还用于:获取多个目标图像,该目标图像为对极片原始图像进行特征区域标注后得到的;将多个极片原始图像与多个目标图像输入到预训练模型中进行预训练,获得预训练好的区域子模型;其中,预训练模型为Python模型在TensorFlow框架下构建深度学习网络后得到的,区域子模型用于提取极片原始图像中的极片区域。
一种可能的实施方式中,预训练模块501,具体用于:根据多个极片原始图像生成JSON文件,该JSON文件包括多个极片原始图像进行特征区域标注后得到的多个标注后的极片原始图像;对JSON文件进行解析后得到多个标注后的极片原始图像;将标注后的极片原始图 像进行二值化处理,得到二值化图像;通过OpenCV对二值化图像进行处理,得到目标图像。
一种可能的实施方式中,提取模块502,还用于:将多个极片原始图像输入到区域子模型;通过区域子模型对极片原始图像进行区域特征提取,得到极片区域图像。
为便于对本实施例进行理解,下面对执行本申请实施例所公开的极片缺陷识别方法及极片缺陷识别模型训练方法的电子设备进行详细介绍。
如图5所示,是电子设备的方框示意图。电子设备100可以包括存储器111、存储控制器112、处理器113、外设接口114、输入输出单元115、显示单元116。本领域普通技术人员可以理解,图5所示的结构仅为示意,其并不对电子设备100的结构造成限定。例如,电子设备100还可包括比图5中所示更多或者更少的组件,或者具有与图5所示不同的配置。
上述的存储器111、存储控制器112、处理器113、外设接口114、输入输出单元115及显示单元116各元件相互之间直接或间接地电性连接,以实现数据的传输或交互。例如,这些元件相互之间可通过一条或多条通讯总线或信号线实现电性连接。上述的处理器113用于执行存储器中存储的可执行模块。
其中,存储器111可以是,但不限于,随机存取存储器(Random Access Memory,简称RAM),只读存储器(Read Only Memory,简称ROM),可编程只读存储器(Programmable Read-Only Memory,简称PROM),可擦除只读存储器(Erasable Programmable Read-Only Memory,简称EPROM),电可擦除只读存储器(Electric Erasable Programmable Read-Only Memory,简称EEPROM)等。其中,存储器111用于存储程序,所述处理器113在接收到执行指令后,执行所述程序,本申请实施例任一实施例揭示的过程定义的电子设备100所执行的方法可以应用于处理器113中,或者由处理器113实现。
这里的存储器111可以用于存储极片原始图像、极片区域图像、极片缺陷识别模型、区域子模型以及缺陷识别子模型等信息。
上述的处理器113可能是一种集成电路芯片,具有信号的处理能力。上述的处理器113可以是通用处理器,包括中央处理器(Central Processing Unit,简称CPU)、网络处理器(Network Processor,简称NP)等;还可以是数字信号处理器(Digital Signal Processor,简称DSP)、专用集成电路(Application Specific Integrated Circuit,简称ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
可以理解地,上述的处理器113可以用于对预训练模型进行预训练,也可以对待训练模 型进行待训练、以及可以用于对极片原始图像进行特征区域标注、极片区域图像进行缺陷标注等。
上述的外设接口114将各种输入/输出装置耦合至处理器113以及存储器111。在一些实施例中,外设接口114,处理器113以及存储控制器112可以在单个芯片中实现。在其他一些实例中,他们可以分别由独立的芯片实现。示例性地,该外设接口114可以用于与图像采集设备进行连接,以将图像采集设备采集到的极片原始图像输入到极片缺陷识别模型或区域子模型。
上述的输入输出单元115用于提供给用户输入数据。所述输入输出单元115可以是,但不限于,鼠标和键盘等。
上述的显示单元116在电子设备100与用户之间提供一个交互界面(例如用户操作界面)或用于显示图像数据给用户参考。在本实施例中,所述显示单元可以是液晶显示器或触控显示器。若为触控显示器,其可为支持单点和多点触控操作的电容式触控屏或电阻式触控屏等。支持单点和多点触控操作是指触控显示器能感应到来自该触控显示器上一个或多个位置处同时产生的触控操作,并将该感应到的触控操作交由处理器进行计算和处理。
可以理解地,上述的对极片原始图像的特征区域标注以及对极片原始图像进行缺陷进行标注可以通过人为进行标注,用户通过在显示单元116上显示的图像进行点击、滑动等操作实现特征区域标注或缺陷标注。
本实施例中的电子设备100可以用于执行本申请实施例提供的各个方法中的各个步骤。
此外,本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的各个方法的步骤。
本申请实施例所提供的各个方法的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的各个方法的步骤,具体可参见上述方法实施例,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,也可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,附图中的流程图和框图显示了根据本申请的多个实施例的装置、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或代码的一部分,所述模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现方式中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相 反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
另外,在本申请各个实施例中的各功能模块可以集成在一起形成一个独立的部分,也可以是各个模块单独存在,也可以两个或两个以上模块集成形成一个独立的部分。
所述功能如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅为本申请的优选实施例而已,并不用于限制本申请,对于本领域的技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (10)

  1. A pole piece defect identification method, characterized by comprising:
    inputting an original pole piece image into a trained pole piece defect identification model, and obtaining an identification result output by the pole piece defect identification model, the identification result comprising a defect image;
    wherein the pole piece defect identification model comprises a region sub-model and a defect identification sub-model; the region sub-model is used to extract a pole piece region from the original pole piece image to obtain a pole piece region image; and the defect identification sub-model is used to perform defect extraction on the pole piece region image to obtain the defect image.
  2. The method according to claim 1, characterized in that, after the identification result output by the pole piece defect identification model is obtained, the method further comprises:
    extracting a defect area in the defect image;
    judging, according to a defect attribute of the defect area, whether the pole piece corresponding to the defect image is a defective pole piece.
  3. A pole piece defect identification model training method, characterized by comprising:
    inputting a plurality of original pole piece images annotated with feature areas into a pre-training model for pre-training, to obtain a pre-trained region sub-model;
    performing region feature extraction on the plurality of original pole piece images by using the pre-trained region sub-model, to obtain pole piece region images;
    inputting images to be identified into a model to be trained for training, to obtain a trained pole piece defect identification sub-model, wherein the images to be identified are the pole piece region images after defect annotation.
  4. The method according to claim 3, characterized in that inputting the plurality of original pole piece images with feature areas to be trained into the pre-training model for pre-training, to obtain the pre-trained region sub-model, comprises:
    obtaining a plurality of target images, wherein the target images are obtained by annotating feature areas of the original pole piece images;
    inputting the plurality of original pole piece images and the plurality of target images into the pre-training model for pre-training, to obtain the pre-trained region sub-model;
    wherein the pre-training model is obtained by building a deep learning network in Python under the TensorFlow framework, and the region sub-model is used to extract the pole piece region from the original pole piece images.
  5. The method according to claim 4, characterized in that obtaining the plurality of target images comprises:
    generating a JSON file from the plurality of original pole piece images, wherein the JSON file comprises a plurality of annotated original pole piece images obtained by annotating feature areas of the plurality of original pole piece images;
    parsing the JSON file to obtain the plurality of annotated original pole piece images;
    binarizing the annotated original pole piece images to obtain binary images;
    processing the binary images with OpenCV to obtain the target images.
  6. The method according to claim 3, characterized in that performing region feature extraction on the plurality of original pole piece images by using the pre-trained region sub-model, to obtain the pole piece region images, comprises:
    inputting the plurality of original pole piece images into the region sub-model;
    performing region feature extraction on the original pole piece images through the region sub-model, to obtain the pole piece region images.
  7. A pole piece defect identification device, characterized by comprising:
    an identification module, configured to input an original pole piece image into a trained pole piece defect identification model and obtain an identification result output by the pole piece defect identification model, the identification result comprising a region image and a defect image;
    wherein the pole piece defect identification model comprises a region sub-model and a defect identification sub-model; the region sub-model is used to extract a pole piece region from the original pole piece image to obtain a pole piece region image; and the defect identification sub-model is used to perform defect extraction on the pole piece region image to obtain the defect image.
  8. A pole piece defect identification model training device, characterized by comprising:
    a pre-training module, configured to input a plurality of original pole piece images with feature areas to be trained into a pre-training model for pre-training, to obtain a pre-trained region sub-model;
    an extraction module, configured to perform region feature extraction on the plurality of original pole piece images by using the pre-trained region sub-model, to obtain pole piece region images;
    an annotation module, configured to perform model training on images to be identified, to obtain a trained pole piece defect identification sub-model, wherein the images to be identified are the pole piece region images after defect annotation.
  9. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, and when the electronic device runs and the machine-readable instructions are executed by the processor, the steps of the method according to any one of claims 1 to 6 are performed.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the method according to any one of claims 1 to 6 are performed.
PCT/CN2022/140037 2022-05-31 2022-12-19 极片缺陷识别及模型训练方法、装置及电子设备 WO2023231380A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210607956.2A CN114897868A (zh) 2022-05-31 2022-05-31 极片缺陷识别及模型训练方法、装置及电子设备
CN202210607956.2 2022-05-31

Publications (1)

Publication Number Publication Date
WO2023231380A1 true WO2023231380A1 (zh) 2023-12-07

Family

ID=82725113

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/140037 WO2023231380A1 (zh) 2022-05-31 2022-12-19 极片缺陷识别及模型训练方法、装置及电子设备

Country Status (2)

Country Link
CN (1) CN114897868A (zh)
WO (1) WO2023231380A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897868A (zh) * 2022-05-31 2022-08-12 广东利元亨智能装备股份有限公司 极片缺陷识别及模型训练方法、装置及电子设备
CN116168030B (zh) * 2023-04-25 2023-11-14 宁德时代新能源科技股份有限公司 极片的缺陷检测方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598721A (zh) * 2018-12-10 2019-04-09 广州市易鸿智能装备有限公司 电池极片的缺陷检测方法、装置、检测设备和存储介质
US20200364842A1 (en) * 2019-05-13 2020-11-19 Fujitsu Limited Surface defect identification method and apparatus
CN113870208A (zh) * 2021-09-22 2021-12-31 上海联麓半导体技术有限公司 半导体图像处理方法、装置、计算机设备和存储介质
CN114037645A (zh) * 2020-07-20 2022-02-11 耿晋 极片的涂布缺陷检测方法、装置、电子设备及可读介质
CN114897868A (zh) * 2022-05-31 2022-08-12 广东利元亨智能装备股份有限公司 极片缺陷识别及模型训练方法、装置及电子设备

Also Published As

Publication number Publication date
CN114897868A (zh) 2022-08-12

Similar Documents

Publication Publication Date Title
WO2023231380A1 (zh) 极片缺陷识别及模型训练方法、装置及电子设备
CN107016387B (zh) 一种识别标签的方法及装置
WO2020238054A1 (zh) Pdf文档中图表的定位方法、装置及计算机设备
US20180204360A1 (en) Automatic data extraction from a digital image
CN110751143A (zh) 一种电子发票信息的提取方法及电子设备
WO2019072181A1 (zh) 骨髓细胞标记方法和系统
US11341319B2 (en) Visual data mapping
KR102002024B1 (ko) 객체 라벨링 처리 방법 및 객체 관리 서버
CN113837151B (zh) 表格图像处理方法、装置、计算机设备及可读存储介质
CN110741376A (zh) 用于不同自然语言的自动文档分析
CN113239807B (zh) 训练票据识别模型和票据识别的方法和装置
CN111191012A (zh) 知识图谱产生装置、方法及其计算机程序产品
CN113627439A (zh) 文本结构化处理方法、处理装置、电子设备以及存储介质
CN114005126A (zh) 表格重构方法、装置、计算机设备及可读存储介质
CN111680750A (zh) 图像识别方法、装置和设备
CN112860905A (zh) 文本信息抽取方法、装置、设备及可读存储介质
CN110363206B (zh) 数据对象的聚类、数据处理及数据识别方法
CN107168635A (zh) 信息呈现方法和装置
CN114495146A (zh) 图像文本检测方法、装置、计算机设备及存储介质
CN113408323B (zh) 表格信息的提取方法、装置、设备及存储介质
CN112613367A (zh) 票据信息文本框获取方法、系统、设备及存储介质
CN111597936A (zh) 基于深度学习的人脸数据集标注方法、系统、终端及介质
CN115934928A (zh) 一种信息抽取方法、装置、设备及存储介质
Liu et al. Automatic comic page image understanding based on edge segment analysis
Zheng et al. Recognition of expiry data on food packages based on improved DBNet

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22944681

Country of ref document: EP

Kind code of ref document: A1