CN117934470A - Model training method, defect detection method, apparatus, device and storage medium

Info

Publication number: CN117934470A
Application number: CN202410330834.2A
Authority: CN (China)
Legal status: Pending
Prior art keywords: defect, determining, area, character, image
Other languages: Chinese (zh)
Inventors: 曹霞霞, 胡兹晨, 张岚嵩
Assignee (current and original): Contemporary Amperex Technology Co Ltd

Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The application relates to a model training method, a defect detection method, an apparatus, a device and a storage medium. The model training method comprises the following steps: obtaining defect feature information of a plurality of defects to be detected of a battery pack; performing clustering according to the defect feature information of each defect, and dividing the plurality of defects into at least one defect class; and performing defect detection model training on each defect class by adopting different training modes to obtain a defect detection model corresponding to each defect class, wherein different training modes require different amounts of training samples, the defect feature information includes a defect conspicuity level, and the higher the defect conspicuity level, the smaller the required amount of training samples. In this way, some of the defects can be trained in a mode requiring a large amount of training samples while other defects are trained in a mode requiring a small amount, instead of using the large-sample mode for all defects, so the overall required training sample amount is reduced, the training time is shortened, and the training efficiency is improved.

Description

Model training method, defect detection method, apparatus, device and storage medium
Technical Field
The present application relates to the field of battery inspection technologies, and in particular, to a model training method, a defect detection method, an apparatus, a device, and a storage medium.
Background
The battery pack in a new energy vehicle is usually mounted below the vehicle floor and is therefore prone to damage from external impacts. It is thus necessary to detect defects of the battery pack so as to improve its ability to withstand external impacts.
In the related art, a defect detection model obtained through training is generally used to detect various defects of a battery pack. However, the training process of the defect detection model in the related art is tedious and time-consuming, and has the problem of low training efficiency.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a model training method, a defect detection method, a device, an apparatus, and a storage medium, which can improve model training efficiency.
In a first aspect, an embodiment of the present application provides a model training method, including:
Obtaining defect characteristic information of various defects to be detected of the battery pack;
clustering is carried out according to defect characteristic information of each defect, and a plurality of defects are divided into at least one defect class;
performing defect detection model training on each defect class by adopting different training modes to obtain a defect detection model corresponding to each defect class; different training modes require different amounts of training samples; the defect feature information includes a defect conspicuity level, and the higher the defect conspicuity level, the smaller the required amount of training samples.
In the embodiment of the application, the defect class of each defect is determined from the defect feature information of the plurality of defects, and the corresponding defect detection model is obtained by training in the training mode of the corresponding defect class. In this way, some of the defects can be trained in a mode requiring a large amount of training samples while the remaining defects are trained in a mode requiring a small amount, instead of applying the large-sample mode to all defects; by distinguishing between defects and training the defect detection models with modes requiring different amounts of training samples, the overall required training sample amount is reduced, the training time is shortened, and the training efficiency is improved.
In one embodiment, obtaining defect characteristic information of a plurality of defects to be detected of a battery pack includes:
for each defect, obtaining a defect image and a reference image of the defect; the defect exists in the defect image, and no defect exists in the reference image;
And determining defect characteristic information of the defects according to the defect image and the reference image.
In the embodiment of the application, for each defect, defect feature information reflecting the degree of distinction between the defect and the background is obtained based on a defect image containing the defect and a reference image containing no defect, so that the defect's own characteristics are fully reflected. This facilitates the subsequent determination of a training mode matched with those characteristics: since the characteristics of the defects vary, diversified training modes are adopted accordingly, which reduces the amount of training samples required for some of the defects, shortens the training time, and improves the training efficiency.
In one embodiment, determining defect characteristic information of a defect from a defect image and a reference image includes:
Acquiring a gray level difference value between a defect image of the defect and a reference image;
and determining the gray level difference value as defect characteristic information of the defect.
In the embodiment of the application, the gray level difference value between the defect image and the reference image is used as the defect feature information of the corresponding defect. Quantifying the defect feature information in this way improves its fineness, improves the accuracy with which the plurality of defects are divided into defect classes based on that information, and correspondingly improves the accuracy of the training mode determined from the resulting defect classes; the defects are thus treated differently, the defect detection models are trained with modes requiring different amounts of training samples, the overall required training sample amount is reduced, the training time is shortened, and the training efficiency is improved.
In one embodiment, clustering is performed according to defect characteristic information of each defect, and the plurality of defects are divided into at least one defect class, including:
clustering the defect feature information of each defect to obtain a plurality of feature sets;
and correspondingly dividing the defects in each feature set into a defect class.
In the embodiment of the application, the defects are divided into at least one defect class by clustering the defect characteristic information, so that the accuracy of dividing the defect class is improved, and the accuracy of a training mode determined based on the defect class obtained by dividing is improved.
In one embodiment, the defect feature information includes a gray scale difference value; clustering the defect feature information of each defect to obtain a plurality of feature sets comprises the following steps:
clustering gray scale difference values greater than or equal to a first threshold value into a feature set;
Clustering gray scale difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold;
clustering gray scale difference values smaller than the first threshold and greater than or equal to the second threshold into a feature set.
In the embodiment of the application, the gray level difference value of each defect is compared with the first threshold value and the second threshold value, so that the defect characteristic information of each defect is clustered in a threshold value comparison mode, the clustering process is simplified, and the clustering efficiency and the model training efficiency are improved.
In one embodiment, performing training of the defect detection model on each defect class by using different training modes to obtain a defect detection model corresponding to each defect class, including:
for each defect class, obtaining a defect conspicuity level of the defect class;
determining a target training mode corresponding to the defect class according to the defect conspicuity level of the defect class;
training the defect detection model in the target training mode to obtain a defect detection model corresponding to the defect class.
In the embodiment of the application, the target training mode is determined based on the defect conspicuity level of the defect class, which improves the match between the defects in the defect class and the training mode, and correspondingly improves the reliability of defect detection by the trained defect detection model.
In one embodiment, determining the target training mode corresponding to the defect class according to the defect conspicuity level of the defect class includes:
under the condition that the defect conspicuity level of the defect class is high, determining that the target training mode corresponding to the defect class is an unsupervised training mode;
under the condition that the defect conspicuity level of the defect class is medium, determining that the target training mode corresponding to the defect class is a small sample training mode;
and under the condition that the defect conspicuity level of the defect class is low, determining that the target training mode corresponding to the defect class is a supervised training mode.
In the embodiment of the application, a corresponding target training mode is provided for each defect conspicuity level, achieving a fine-grained division by conspicuity level, and a more reliable defect detection model is obtained by adopting the corresponding target training mode, thereby improving the reliability of defect detection.
In a second aspect, an embodiment of the present application further provides a defect detection method, including:
acquiring a to-be-detected image of a to-be-detected battery pack;
detecting different defects in the image to be detected through a defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is obtained by training with any one of the model training methods described above.
In the embodiment of the application, by distinguishing between defects, the defect detection models are trained with training modes that require different amounts of training samples, which reduces the overall required training sample amount, shortens the training time, and improves the training efficiency.
In one embodiment, the image to be detected includes images acquired at different viewing angles; detecting different defects in the image to be detected through the defect detection model to obtain the defect detection result of the battery pack to be detected includes the following steps:
Determining a target image from images acquired at different viewing angles;
detecting different defects in the target image through a defect detection model to obtain detection results of the defects;
and determining the defect detection result of the battery pack to be detected according to the detection result of each defect.
In the embodiment of the application, different defects in the target image are detected based on the defect detection model to obtain the detection result of each defect, and the defect detection result of the battery pack to be detected is determined accordingly, so that multi-view, multi-defect detection of the battery pack to be detected is achieved, improving the comprehensiveness of the detection and, correspondingly, of the obtained defect detection result.
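As a non-limiting illustration of the multi-view, multi-defect detection described above, the following Python sketch loops over images acquired at different viewing angles and merges the per-defect results into an overall result; the data structures and the `detect`/`found` helper names are assumptions for illustration only.

```python
# Illustrative sketch: aggregate per-defect detection results over several viewing angles.
def inspect_battery_pack(images_by_view, defect_models):
    """images_by_view: dict view name -> image array.
    defect_models: dict defect name -> trained detection model (hypothetical interface)."""
    overall_result = {}
    for view, image in images_by_view.items():            # each view provides one target image
        for defect_name, model in defect_models.items():
            result = model.detect(image)                   # detection result for this defect
            # the battery pack is considered defective for this item if any view shows the defect
            overall_result[defect_name] = overall_result.get(defect_name, False) or result.found
    return overall_result
```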
In one embodiment, detecting different defects in the target image by using the defect detection model to obtain a detection result of each defect includes:
for each defect, acquiring a region of interest corresponding to the defect in the target image;
And determining the detection result of the defect according to the defect detection model and the region of interest.
In the embodiment of the application, for each defect, the detection result of the defect is determined according to the region of interest corresponding to the defect in the target image, which narrows the detection range, makes the detection more targeted, and improves the detection efficiency and the accuracy of the detection result.
In one embodiment, the target image is an image acquired under a global view angle, and the region of interest is a battery pack region; determining a detection result of the defect according to the defect detection model and the region of interest, wherein the detection result comprises:
inputting the battery pack area into a first detection model to obtain a plurality of defect areas in the battery pack area; the first detection model is the model, among the defect detection models, for detecting appearance defects of parts;
and determining the appearance defect detection result of the part of the battery pack to be detected according to the defect areas.
In the embodiment of the application, the target image is the image acquired under the global visual angle, the region of interest is the battery pack region, the detection of the appearance defects of the parts on the battery pack to be detected is realized based on the first detection model, and the accuracy of the detection result is improved while the detection efficiency is improved.
In one embodiment, determining a detection result of appearance defects of a part of the battery pack to be inspected according to the defect areas includes:
according to the confidence of each defect area, obtaining obvious defect areas and confusing defect areas in a plurality of defect areas;
And determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area.
In the embodiment of the application, the confidence coefficient of the defect area is utilized to classify a plurality of defect areas into the obvious defect area and the confusing defect area, so that the obvious defect area and the confusing defect area with different confidence coefficients are considered to comprehensively determine the appearance defect detection result of the part, thereby improving the accuracy of the result.
In one embodiment, obtaining the obvious defect areas and the confusing defect areas in the plurality of defect areas according to the confidence of each defect area includes:
determining a defect area whose confidence is greater than or equal to a confidence threshold as an obvious defect area;
and determining a defect area whose confidence is smaller than the confidence threshold as a confusing defect area.
In the embodiment of the application, obvious defect areas and confusing defect areas are distinguished by comparing the confidence with the confidence threshold, which simplifies the classification of the defect areas, saves classification time, and correspondingly improves the detection efficiency.
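As a minimal sketch of the confidence-based split described above, assuming each defect area carries a confidence score in [0, 1]; the threshold value is illustrative and not specified by the application:

```python
CONFIDENCE_THRESHOLD = 0.5  # illustrative value

def split_by_confidence(defect_areas, threshold=CONFIDENCE_THRESHOLD):
    """Classify detected defect areas into obvious and confusing defect areas."""
    obvious, confusing = [], []
    for area in defect_areas:
        (obvious if area["confidence"] >= threshold else confusing).append(area)
    return obvious, confusing
```

Obvious defect areas are kept directly, while confusing defect areas are passed to the second detection model for screening, as described in the next embodiment.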
In one embodiment, determining the detection result of the appearance defect of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area includes:
inputting the confusing defect area into a second detection model for screening to obtain a candidate confusing defect area; the second detection model is a model obtained by correcting the first detection model;
and determining the detection result of the appearance defect of the part of the battery pack to be detected according to the obvious defect area and the candidate confusing defect area.
In the embodiment of the application, the confusing defect area is screened by the second detection model and non-defect areas are filtered out, which improves the reliability that a candidate confusing defect area obtained after screening is indeed an area where a defect is located, thereby improving the reliability of the detection result of the appearance defect of the part.
In one embodiment, determining the detection result of the appearance defect of the part of the battery pack to be detected according to the obvious defect area and the candidate confusing defect area includes:
obtaining defect sizes of the obvious defect area and the candidate confusing defect area;
determining a target defect area in the obvious defect area and the candidate confusing defect area according to each defect size;
and determining the target defect area as the detection result of the appearance defect of the part of the battery pack to be detected.
In the embodiment of the application, the obvious defect area and the candidate confusing defect area are screened by verifying their two-dimensional defect sizes, which improves the reliability of the detection result of the appearance defect of the part.
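The application does not state the size criterion, so the following sketch simply assumes a minimum defect size (in pixels) as an illustrative check:

```python
MIN_DEFECT_SIZE_PX = 50  # assumed threshold, for illustration only

def select_target_defect_areas(areas_with_sizes, min_size=MIN_DEFECT_SIZE_PX):
    """Keep obvious / candidate confusing defect areas whose defect size passes the check."""
    return [a for a in areas_with_sizes if a["defect_size"] >= min_size]
```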
In one embodiment, obtaining the defect sizes of the obvious defect area and the candidate confusing defect area includes:
performing binarization processing on the obvious defect area and the candidate confusing defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image;
and taking the mask size of the obvious defect mask image as the defect size of the obvious defect area, and the mask size of the candidate defect mask image as the defect size of the candidate confusing defect area.
In the embodiment of the application, the defect size is determined from the defect mask image obtained by binarization; binarization yields a finer and more accurate boundary of the defect area, which correspondingly improves the fineness and accuracy of the obtained defect size.
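A possible realization of the binarization step, using OpenCV and assuming the defect area is available as a grayscale crop of the battery pack area (the fixed threshold 127 is an illustrative choice):

```python
import cv2
import numpy as np

def defect_size_from_mask(defect_area_gray: np.ndarray) -> int:
    """Binarize a defect area and use the mask size (foreground pixel count) as its defect size."""
    _, mask = cv2.threshold(defect_area_gray, 127, 255, cv2.THRESH_BINARY)
    return int(cv2.countNonZero(mask))
```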
In one embodiment, the target image is an image acquired under a local view angle, and the region of interest is a character region; determining a detection result of the defect according to the defect detection model and the region of interest, including:
inputting the character area into a third detection model; the third detection model is the model, among the defect detection models, for detecting character printing defects;
and determining a character printing defect detection result of the battery pack to be detected through a third detection model.
In the embodiment of the application, the target image is the image acquired under the local visual angle, the interested area is the character area, the detection of poor character printing on the battery pack to be detected is realized based on the third detection model, and the accuracy of the detection result is improved while the detection efficiency is improved.
In one embodiment, determining the character printing defect detection result of the battery pack to be detected through the third detection model includes:
Detecting character content and display state of the character area through a third detection model;
determining a content detection result according to the character content, and determining a display detection result according to the display state;
And determining the content detection result and the display detection result as character printing defect detection results of the battery pack to be detected.
In the embodiment of the application, the character area is not only checked for character content, but also the display state is detected, so that the detection of defects in multiple aspects of the appearance of the battery pack to be detected is realized, and the diversity and the comprehensiveness of the detection are improved.
In one embodiment, determining the content detection result according to the character content includes:
under the condition that the character content is matched with the reference character, determining that the content detection result is qualified;
And under the condition that the character content is not matched with the reference character, determining that the content detection result is unqualified.
In the embodiment of the application, the character content is matched with the reference character to determine the content detection result, the process is simple and easy to realize, and the detection efficiency is correspondingly improved.
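A minimal sketch of the content check, assuming the character content recognized by the third detection model is available as a string:

```python
def check_character_content(recognized: str, reference: str) -> str:
    """Return 'qualified' when the printed characters match the reference character, else 'unqualified'."""
    return "qualified" if recognized.strip() == reference.strip() else "unqualified"

# e.g. check_character_content(" ABC-123 ", "ABC-123") -> "qualified"
```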
In one embodiment, the third detection model includes a ghost detection unit and an overlap detection unit; detecting, by the third detection model, a display state of the character region, including:
inputting the character area into a ghost detection unit to determine the state of the character ghost;
Inputting the character area into an overlap detection unit to determine the character overlap state;
The character ghost state and the character overlap state are determined as the display state of the character region.
In the embodiment of the application, the character region is not only checked for the character ghost state, but also the character overlapping state is detected, so that the detection for the multi-aspect display state on the character region is realized, and the diversity and the comprehensiveness of the detection are improved.
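Purely as an illustration, the ghost detection unit and the overlap detection unit can be viewed as two callables whose outputs are combined into the display state; their internals are not specified here and the names are assumptions:

```python
def detect_display_state(character_area, ghost_unit, overlap_unit):
    """Run the character area through both units and combine the results into the display state."""
    ghost_state = ghost_unit(character_area)      # e.g. True if character ghosting is present
    overlap_state = overlap_unit(character_area)  # e.g. True if characters overlap
    return {"ghost": ghost_state, "overlap": overlap_state}
```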
In a third aspect, an embodiment of the present application further provides a model training apparatus, including:
a feature acquisition module, configured to acquire defect feature information of a plurality of defects to be detected of a battery pack;
a defect dividing module, configured to perform clustering according to the defect feature information of each defect and divide the plurality of defects into at least one defect class;
and a model training module, configured to perform defect detection model training on each defect class by adopting different training modes to obtain a defect detection model corresponding to each defect class; different training modes require different amounts of training samples; the defect feature information includes a defect conspicuity level, and the higher the defect conspicuity level, the smaller the required amount of training samples.
In a fourth aspect, an embodiment of the present application further provides a defect detection apparatus, including:
the image acquisition module is used for acquiring a to-be-detected image of the to-be-detected battery pack;
The defect detection module is used for detecting different defects of the image to be detected through the defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is obtained by training by adopting any model training method.
In a fifth aspect, an embodiment of the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the method provided by any one of the embodiments of the first aspect and the second aspect when the processor executes the computer program.
In a sixth aspect, embodiments of the present application further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method provided by any of the embodiments of the first and second aspects described above.
The foregoing description is only an overview of the technical solutions of the present application. The application may be implemented according to the contents of the specification, so that the above and other objects, features and advantages of the present application can be understood more clearly.
Drawings
FIG. 1 is an internal block diagram of a computer device in one embodiment;
FIG. 2 is a flow diagram of a model training method in one embodiment;
FIG. 3 is a flow chart of acquiring defect feature information in one embodiment;
FIG. 4 is a flowchart of another embodiment for obtaining defect feature information;
FIG. 5 is a flow diagram of partitioning defect classes in one embodiment;
FIG. 6 is a flow diagram of a feature set obtained in one embodiment;
FIG. 7 is a flow diagram of obtaining a defect detection model in one embodiment;
FIG. 8 is a flow chart of determining a target training mode in one embodiment;
FIG. 9 is a flow chart of a defect detection method according to one embodiment;
FIG. 10 is a flow chart of obtaining a defect detection result in one embodiment;
FIG. 11 is a flow chart illustrating a determination of a detection result of a defect in one embodiment;
FIG. 12 is a flow chart of determining a detection result of a defect in another embodiment;
FIG. 13 is a flowchart illustrating a process for determining a defect detection result of a part appearance in an embodiment;
FIG. 14 is a flow diagram of acquiring distinct defect regions and confusing defect regions in one embodiment;
FIG. 15 is a flowchart illustrating a process for determining the appearance defect detection result of a component according to another embodiment;
FIG. 16 is a flowchart illustrating a process for determining the appearance defect detection result of a component according to another embodiment;
FIG. 17 is a flow chart of determining defect sizes in one embodiment;
FIG. 18 is a flow chart of determining a character printing defect detection result in one embodiment;
FIG. 19 is a flow chart of determining a character printing defect detection result in another embodiment;
FIG. 20 is a flow chart of determining content detection results in one embodiment;
FIG. 21 is a flow chart illustrating a method for detecting a display state of a character area according to an embodiment;
FIG. 22 is a flow chart of a defect detection method according to another embodiment;
FIG. 23 is a flow chart of a defect detection method according to another embodiment;
FIG. 24 is a flow chart of a defect detection method according to another embodiment;
FIG. 25 is a block diagram of a model training apparatus in one embodiment;
FIG. 26 is a block diagram of a defect detection apparatus in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the term "comprising" and any variations thereof in the description of the application and the claims and the description of the figures above is intended to cover a non-exclusive inclusion.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. In the description of embodiments of the present application, the term "plurality" refers to two or more (including two) unless specifically defined otherwise.
As resource and environmental problems become increasingly prominent, new energy vehicles are gradually replacing traditional fuel vehicles and becoming the dominant means of transportation in people's daily life and work. As the energy source of a new energy vehicle, the battery pack (typically a lithium battery pack) plays a key role.
The battery pack is generally mounted below the vehicle floor, so it is easily damaged by external impacts while the vehicle is running, which shortens the service life of the battery pack and, in severe cases, causes safety accidents. Therefore, it is necessary to detect defects of the battery pack before it leaves the factory, so as to improve its ability to withstand external impacts.
In the related art, appearance defects of the battery pack are usually detected by manual visual inspection. Manual visual inspection is time-consuming and labor-intensive, its result is affected by the inspector's subjective judgment and cannot objectively reflect the actual defect condition, and both the false detection rate and the miss rate are high. Therefore, in the related art, artificial intelligence technology has been introduced into battery pack defect detection, and a trained defect detection model is used to detect various defects of the battery pack in place of manual visual inspection, which improves efficiency as well as the stability and accuracy of the detection result.
However, in the related art, one or more defect detection models are trained in a unified manner to detect multiple kinds of defects. To guarantee the detection effect for the different defects, this unified training requires a large amount of training samples, so the training process of the defect detection model is tedious and time-consuming and the training efficiency is low. Yet different defects have different characteristics, and therefore differ in how conspicuous they are. It is thus unnecessary to use a training mode with a large required sample amount for every defect to be detected; instead, the defects can be treated differently, with some defects trained in a mode requiring a large amount of training samples and other defects trained in a mode requiring a small amount.
On this basis, an embodiment of the present application provides a model training method in which, by distinguishing between defects, the defect detection models are trained with training modes requiring different amounts of training samples, so as to reduce the overall required training sample amount, thereby shortening training time and improving training efficiency.
The method is illustrated as applied to the computing device of fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a model training method.
It will be appreciated by those skilled in the art that the architecture shown in fig. 1 is merely a block diagram of some of the architecture relevant to the embodiments of the present application and is not intended to limit the computer device to which the embodiments of the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the embodiment of the present application provides a model training method, as shown in fig. 2, which includes the following steps:
s210, obtaining defect characteristic information of various defects to be detected of the battery pack.
The defects to be detected of the battery pack are defects that need to be inspected visually, and they are of various types. For example, they may include appearance defects of various parts on the battery pack, such as label scratches/breakage, bolt gasket deformation, case deformation, and the like. The defect feature information is used to characterize the image features of a defect.
Optionally, the computer device may read a plurality of defects to be detected of the battery pack stored in advance, or may accept a plurality of defects to be detected of the battery pack input by a user, and acquire a defect image of each defect, so as to determine an image feature of a corresponding defect according to the defect image of each defect, and obtain defect feature information of a plurality of defects to be detected of the battery pack.
S220, clustering is carried out according to defect characteristic information of each defect, and multiple defects are divided into at least one defect class.
Wherein, the defect characteristic information of the defects in the same defect class has higher information similarity.
Optionally, after obtaining the defect characteristic information of each defect, the computer device may obtain the information similarity between every two defect characteristic information to perform clustering processing, and divide the multiple defects according to the information similarity between every two defect characteristic information, so as to divide the multiple defects into at least one defect class.
Illustratively, the information similarity between pieces of defect feature information may be characterized by a quantified feature deviation value between them. The larger the feature deviation value, the lower the information similarity between the corresponding pieces of defect feature information; conversely, the smaller the feature deviation value, the higher the information similarity. The computer device may acquire the feature deviation value between every two pieces of defect feature information and divide the defects whose feature deviation value is smaller than a deviation threshold into the same defect class, thereby dividing the plurality of defects into at least one defect class.
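As a rough illustration of the deviation-threshold grouping just described (the scalar feature representation and the threshold value are assumptions):

```python
def group_by_deviation(feature_values, deviation_threshold=10.0):
    """feature_values: dict defect name -> scalar defect feature value (e.g. gray level difference)."""
    defect_classes = []  # list of sets of defect names
    for name, value in feature_values.items():
        for cls in defect_classes:
            # join an existing class if the deviation to any member is below the threshold
            if any(abs(value - feature_values[member]) < deviation_threshold for member in cls):
                cls.add(name)
                break
        else:
            defect_classes.append({name})
    return defect_classes
```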
S230, performing defect detection model training on each defect class by adopting different training modes to obtain a defect detection model corresponding to each defect class; different training modes require different amounts of training samples; the defect feature information includes a defect conspicuity level, and the higher the defect conspicuity level, the smaller the required amount of training samples.
The defect detection model is a detection model obtained by training with training samples and is used for detecting different defects in an input image to be detected of the battery pack to be detected, obtaining a detection result of whether the corresponding defect exists in the image to be detected, and determining the defect detection result of the battery pack to be detected. Illustratively, the defect detection model may be a deep learning model, such as a convolutional neural network (Convolutional Neural Networks, CNN) model.
Optionally, after dividing the defects to be detected of the battery pack into at least one defect class, the computer device may determine the training mode corresponding to each defect class according to a preset correspondence between defect classes and training modes, and perform defect detection model training in that training mode to obtain the defect detection model corresponding to each defect class. The defect feature information may include a defect conspicuity level; for defects with a lower defect conspicuity level, the model may be trained in a training mode requiring a larger amount of training samples, and for defects with a higher defect conspicuity level, the model may be trained in a training mode requiring a smaller amount of training samples. The defect detection model corresponding to a defect class is used for detecting the defects in that defect class.
For example, the defect class a includes defects A1-A5, the defect class B includes defects B1-B3, and a training mode 1 requiring a high training sample size corresponding to the defect class a is determined according to a preset corresponding relationship, and a training mode 2 requiring a low training sample size corresponding to the defect class B. The computer equipment performs the defect detection model training in the training mode 1 to obtain a defect detection model M1 corresponding to the defect class A, and performs the defect detection model training in the training mode 1 to obtain a defect detection model M2 corresponding to the defect class B.
Alternatively, one or more defect detection models may correspond to the same defect class, so as to detect all defects in the corresponding defect class, and each defect detection model may be used to detect at least one defect in the corresponding defect class. Illustratively, continuing the above example, defect class A corresponds to one defect detection model M1, and this defect detection model M1 is used to detect all defects in defect class A, namely defects A1-A5; defect class B corresponds to three defect detection models, which are used to detect defects B1-B3 in defect class B respectively.
In the embodiment of the application, the defect class of each defect is determined from the defect feature information of the plurality of defects, and the corresponding defect detection model is obtained by training in the training mode of the corresponding defect class. In this way, some of the defects can be trained in a mode requiring a large amount of training samples while the remaining defects are trained in a mode requiring a small amount, instead of applying the large-sample mode to all defects; by distinguishing between defects and training the defect detection models with modes requiring different amounts of training samples, the overall required training sample amount is reduced, the training time is shortened, and the training efficiency is improved.
The defect feature information may be obtained based on a defect image in which a defect exists and a reference image in which no defect exists. Based on this, in one embodiment, as shown in fig. 3, S210 described above, acquiring defect feature information of a plurality of defects to be detected of the battery pack, includes:
S310, for each defect, obtaining a defect image and a reference image of the defect; the defect exists in the defect image, and no defect exists in the reference image.
Wherein each defect corresponds to a group of defect images and reference images with the same background environment. Illustratively, for each defect, the defect image is an image in which only the corresponding defect is present, and the reference image is an image in which no defect is present.
Optionally, a plurality of defects to be detected of the battery pack may be preset and stored in a computer device, and the computer device reads the plurality of defects to be detected and acquires a corresponding defect image and a reference image in an image database for each defect. Wherein a plurality of defect images marked with defect types and reference images without any defects are stored in the image database.
S320, determining defect characteristic information of the defects according to the defect images and the reference images.
Wherein, since the reference image does not include a defect, the defect characteristic information may be used to reflect the degree of differentiation between the defect and the background. For example, the defect characteristic information may be represented by image similarity between the defect image and the reference image, and the larger the image similarity between the defect image and the reference image is, the smaller the degree of differentiation between the corresponding defect and the background is represented; conversely, the smaller the image similarity between the defect image and the reference image, the greater the degree of differentiation between the corresponding defect and the background is characterized.
Optionally, after obtaining the defect image and the reference image of each defect, the computer device may perform a comparison analysis on the defect image and the reference image of each defect, to obtain an image similarity between the defect image and the reference image of the corresponding defect, and use the image similarity as defect feature information of the corresponding defect.
In the embodiment of the application, for each defect, defect feature information reflecting the degree of distinction between the defect and the background is obtained based on a defect image containing the defect and a reference image containing no defect, so that the defect's own characteristics are fully reflected. This facilitates the subsequent determination of a training mode matched with those characteristics: since the characteristics of the defects vary, diversified training modes are adopted accordingly, which reduces the amount of training samples required for some of the defects, shortens the training time, and improves the training efficiency.
The defect feature information of a defect can be characterized by the gray level difference value between the defect image and the reference image. Based on this, in one embodiment, as shown in fig. 4, S320, determining the defect feature information of the defect according to the defect image and the reference image, includes:
S410, acquiring a gray level difference value between a defect image of the defect and a reference image.
Wherein the gray scale difference value may be determined based on a gray scale histogram of the image.
Optionally, for each defect, the computer device may acquire the gray histogram of the corresponding defect image and the gray histogram of the reference image, and determine the gray level difference value between the two images from these histograms. For example, the computer device may obtain, for the defect image and the reference image respectively, the mean gray value of the gray levels whose proportion in the histogram is greater than a preset proportion, and take the difference between the two mean gray values as the gray level difference value between the defect image and the reference image.
S420, determining the gray level difference value as defect characteristic information of the defect.
Optionally, after obtaining the gray scale difference value corresponding to each defect, the computer device uses the gray scale difference value corresponding to each defect as defect characteristic information of the defect.
In the embodiment of the application, the gray level difference value between the defect image and the reference image is used as the defect feature information of the corresponding defect. Quantifying the defect feature information in this way improves its fineness, improves the accuracy with which the plurality of defects are divided into defect classes based on that information, and correspondingly improves the accuracy of the training mode determined from the resulting defect classes; the defects are thus treated differently, the defect detection models are trained with modes requiring different amounts of training samples, the overall required training sample amount is reduced, the training time is shortened, and the training efficiency is improved.
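A possible sketch of the histogram-based gray level difference, assuming 8-bit grayscale images; the preset proportion used to pick the dominant gray levels is an illustrative value:

```python
import numpy as np

def gray_difference(defect_img: np.ndarray, reference_img: np.ndarray,
                    preset_proportion: float = 0.01) -> float:
    """Difference between the mean gray values of the dominant histogram bins of the two images."""
    def dominant_mean(img):
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        proportions = hist / img.size
        levels = np.flatnonzero(proportions > preset_proportion)  # gray levels above the proportion
        return float(np.average(levels, weights=hist[levels]))
    return abs(dominant_mean(defect_img) - dominant_mean(reference_img))
```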
In practical applications, clustering may be used to divide the plurality of defects. Based on this, in one embodiment, as shown in fig. 5, S220, performing clustering according to the defect feature information of each defect and dividing the plurality of defects into at least one defect class, includes:
S510, clustering the defect feature information of each defect to obtain a plurality of feature sets.
Optionally, after obtaining the defect feature information of each defect, the computer device may perform clustering on the defect feature information of each defect by using a clustering algorithm, so that defects with higher information similarity are divided into the same feature set and the plurality of defects are divided into a plurality of feature sets. Illustratively, the clustering algorithm may be implemented based on a machine learning model, or may be another mathematical algorithm, such as the K-means clustering algorithm.
S520, correspondingly dividing defects in each feature set into a defect class.
Optionally, after dividing the plurality of defects to be detected of the battery pack into a plurality of feature sets, the computer device correspondingly divides the defects in each feature set into a defect class. Illustratively, the computer device divides 10 defects to be detected of the battery pack (defects 1-10) into 2 feature sets, namely feature set 1 {defects 1-3, defects 8-10} and feature set 2 {defects 4-7}; the computer device then divides defects 1-3 and 8-10 into one defect class and defects 4-7 into another defect class.
In the embodiment of the application, the defects are divided into at least one defect class by clustering the defect characteristic information, so that the accuracy of dividing the defect class is improved, and the accuracy of a training mode determined based on the defect class obtained by dividing is improved.
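For instance, when the defect feature information is the gray level difference value, the clustering could be sketched with scikit-learn's K-means as follows (the choice of three clusters mirrors the high/medium/low grouping used later and is an assumption here):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_defects(gray_diffs: dict, n_clusters: int = 3):
    """gray_diffs: dict defect name -> gray level difference value."""
    names = list(gray_diffs)
    values = np.array([[gray_diffs[n]] for n in names])        # one 1-D feature per defect
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(values)
    feature_sets = {}
    for name, label in zip(names, labels):
        feature_sets.setdefault(int(label), []).append(name)    # each feature set becomes a defect class
    return list(feature_sets.values())
```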
In the case where the defect feature information includes a gray scale difference value, in one embodiment, as shown in fig. 6, the step S510 of clustering the defect feature information of each defect to obtain a plurality of feature sets includes:
S610, clustering gray scale difference values which are greater than or equal to a first threshold value into a feature set.
Optionally, in the case that the defect characteristic information includes a gray scale difference value, the computer device may perform clustering processing on the defect characteristic information of each defect by using a threshold comparison manner, so as to obtain a plurality of characteristic sets.
The first threshold is an upper threshold with a relatively large value. A gray level difference value greater than or equal to the first threshold indicates that the corresponding defect is highly conspicuous in the image.
Alternatively, the computer device may compare the grayscale difference value of each defect to a first threshold and cluster the grayscale difference values greater than or equal to the first threshold as one feature set.
S620, clustering gray difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold.
The second threshold is a lower threshold with a relatively small value. A gray level difference value smaller than the second threshold indicates that the corresponding defect has low conspicuity in the image.
Alternatively, the computer device may compare the grayscale difference value of each defect to a second threshold and cluster grayscale difference values less than the second threshold as one feature set.
S630, clustering gray scale difference values smaller than the first threshold and larger than or equal to the second threshold into a feature set.
A gray level difference value smaller than the first threshold and greater than or equal to the second threshold indicates that the corresponding defect has medium conspicuity in the image.
Alternatively, the computer device may compare the grayscale difference value of each defect to a first threshold and a second threshold and cluster the grayscale difference values less than the first threshold and greater than or equal to the second threshold into one feature set.
In the embodiment of the application, the gray level difference value of each defect is compared with the first threshold value and the second threshold value, so that the defect characteristic information of each defect is clustered in a threshold value comparison mode, the clustering process is simplified, and the clustering efficiency and the model training efficiency are improved.
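The threshold-comparison clustering can be sketched as follows; the two threshold values are illustrative and would be chosen for the actual imaging setup:

```python
FIRST_THRESHOLD = 60    # upper threshold (illustrative)
SECOND_THRESHOLD = 20   # lower threshold, smaller than the first (illustrative)

def cluster_by_thresholds(gray_diffs: dict):
    """Split defects into three feature sets by comparing gray level differences with two thresholds."""
    high, medium, low = [], [], []
    for name, diff in gray_diffs.items():
        if diff >= FIRST_THRESHOLD:
            high.append(name)        # highly conspicuous defects
        elif diff < SECOND_THRESHOLD:
            low.append(name)         # barely conspicuous defects
        else:
            medium.append(name)      # medium conspicuity
    return high, medium, low
```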
The defect conspicuity level of each defect class determines the training mode to be employed. Therefore, in one embodiment, as shown in fig. 7, S230, performing defect detection model training on each defect class by adopting different training modes to obtain a defect detection model corresponding to each defect class, includes:
S710, for each defect class, obtaining the defect conspicuity level of the defect class.
The defect conspicuity level of a defect class characterizes how conspicuous the defects in that class are in the image; the higher the defect conspicuity level, the more conspicuous the defects are in the image. Illustratively, the defect conspicuity level may be determined based on the image similarity between the defect image and the reference image, or based on the gray level difference value between the defect image and the reference image. The higher the image similarity and the smaller the gray level difference value, the lower the defect conspicuity level; conversely, the lower the image similarity and the larger the gray level difference value, the higher the defect conspicuity level.
Optionally, for each defect class, the computer device may obtain the gray level difference value between the defect image and the reference image of each defect in the defect class, compute the mean of the gray level difference values of all defects in the defect class, and determine the defect conspicuity level of the corresponding defect class from this mean. For example, the computer device may compare the mean of the gray level difference values with a mean threshold and determine the defect conspicuity level of the defect class from the comparison result; for example, in the case where the mean of the gray level difference values is greater than or equal to the mean threshold, the defect conspicuity level of the defect class is determined to be high, and in the case where the mean of the gray level difference values is smaller than the mean threshold, the defect conspicuity level of the defect class is determined to be low.
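A small sketch of this level determination, extended to three levels to match the high/medium/low scheme used below (the mean thresholds are assumptions):

```python
def conspicuity_level(gray_diffs_of_class, high_mean=60.0, low_mean=20.0):
    """Return 'high', 'medium' or 'low' for a defect class given its defects' gray level differences."""
    mean_diff = sum(gray_diffs_of_class) / len(gray_diffs_of_class)
    if mean_diff >= high_mean:
        return "high"
    if mean_diff < low_mean:
        return "low"
    return "medium"
```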
S720, determining a target training mode corresponding to the defect class according to the defect conspicuity level of the defect class.
Optionally, after obtaining the defect conspicuity level of each defect class, the computer device may determine the training mode of the corresponding defect class according to a preset correspondence between defect conspicuity levels and training modes, and use it as the target training mode corresponding to the defect class. Illustratively, the preset correspondence between defect conspicuity levels and training modes is: the higher the defect conspicuity level, the smaller the amount of training samples required by the corresponding training mode; the lower the defect conspicuity level, the larger the amount of training samples required by the corresponding training mode.
S730, training the defect detection model in a target training mode to obtain a defect detection model corresponding to the defect class.
Optionally, for each defect class, the computer device acquires a training sample of a corresponding defect in the defect class, and trains the defect detection model in a target training mode by adopting the training sample of the corresponding defect to obtain a defect detection model corresponding to the defect class.
In the embodiment of the application, the target training mode is determined based on the defect conspicuity level of the defect class, which improves the match between the defects in the defect class and the training mode, and correspondingly improves the reliability of defect detection by the trained defect detection model.
In one embodiment, as shown in fig. 8, S720, determining the target training mode corresponding to the defect class according to the defect conspicuity level of the defect class, includes:
S810, under the condition that the defect conspicuity level of the defect class is high, determining that the target training mode corresponding to the defect class is an unsupervised training mode.
A high defect conspicuity level of the defect class indicates that the defects in the defect class are clearly visible in the image. For defects on a battery pack, defects in a defect class with a high conspicuity level are generally missing parts, foreign matter, or installation errors. Examples include, but are not limited to:
Bolt: no marking line, missing installation.
Lifting lug hole: foreign matter.
Explosion-proof valve: installation error.
Connector: missing installation, incorrect installation.
Sleeve: missing.
Two-dimensional code: missing.
Optionally, under the condition that the defect conspicuity level of the defect class is high, the computer device adopts the unsupervised training mode, which requires the smallest amount of training samples, as the target training mode of the corresponding defect class. The unsupervised training mode needs no defect samples; a defect detection model is obtained by training only on normal, defect-free samples, so the required amount of training samples is the smallest.
S820, under the condition that the defect conspicuity level of the defect class is medium, determining that the target training mode corresponding to the defect class is a small sample training mode.
A medium defect conspicuity level of the defect class indicates that the defects in the defect class are moderately visible in the image. For defects on a battery pack, defects in a defect class with a medium conspicuity level are typically appearance defects (for example, deformation or breakage) of large structures or regions. Examples include, but are not limited to:
Case/cover/bracket surface: burrs;
PVC coating: color difference, damage, atomization;
Connector: cracking, breakage, deformation;
Explosion-proof valve: scratches, foreign matter.
Optionally, in the case that the defect clarity level of the defect class is a medium level, the computer device adopts the small sample training mode, which requires a small training sample amount, as the target training mode of the corresponding defect class. In the small sample training mode, an existing defect detection model is fine-tuned with a small number of defect samples annotated with the corresponding defects, together with normal samples, to obtain the defect detection model, so the required training sample amount is small.
S830, determining that the target training mode corresponding to the defect class is a supervised training mode in the case that the defect clarity level of the defect class is a low level.
A low defect clarity level of the defect class indicates that the defects in the defect class are not obvious in the image. For defects on a battery pack, defects in a defect class with a low defect clarity level are generally poor appearance (e.g., deformation or breakage) of small structures/regions. Examples include, but are not limited to:
Label, nameplate: poor silk-screen content, scratches or breakage;
Bolt: washer deformation;
Shell: deformation;
Lifting lug hole: poor appearance;
Case/cover/bracket surface: scratches, gouges, surface color differences, paint drops, dirt, cracks and damage;
Connector: foreign matter and scratches in the cavity;
Pin: missing, skewed.
Optionally, in the case that the defect clarity level of the defect class is a low level, the computer device adopts the supervised training mode, which requires the largest training sample amount, as the target training mode of the corresponding defect class. In the supervised training mode, an initial model is iterated continuously with a large number of defect samples annotated with the corresponding defects, together with normal samples, to optimize the model parameters and obtain the defect detection model, so the required training sample amount is the largest.
The training sample amounts required by the unsupervised training mode, the small sample training mode and the supervised training mode increase in turn, and the three modes are respectively suitable for defect classes with high, medium and low defect clarity levels.
In the embodiment of the application, corresponding target training modes are provided for different defect clarity levels, fine-grained division of the defect clarity levels is realized, and a more reliable defect detection model is obtained by adopting the corresponding target training mode, so that the reliability of defect detection is improved.
The application also provides a defect detection method which is applied to the computer equipment shown in the figure 1. The computer device may also be referred to as a vision machine for performing defect detection.
In one embodiment, as shown in FIG. 9, the method includes:
S910, obtaining a to-be-detected image of the to-be-detected battery pack.
The to-be-detected image of the to-be-detected battery pack is an image for detecting appearance defects on the to-be-detected battery pack. Illustratively, the image to be inspected is a visible light image, such as an RGB image.
Optionally, the computer device may acquire the to-be-inspected image of the to-be-inspected battery pack through the image acquisition device, and may also receive the to-be-inspected image of the to-be-inspected battery pack input by the user. The surface of the battery pack to be detected is provided with an identification code representing the identity of the battery pack to be detected, and the computer equipment has an automatic code reading function and can identify the identification code of the battery pack to be detected based on the image to be detected. The identification code may be a two-dimensional code or a bar code located on the surface of the battery pack to be inspected, for example.
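A minimal sketch of the automatic code-reading step is shown below; OpenCV's QR code detector is used here only as an assumed implementation, since the embodiment does not prescribe a specific library:

```python
import cv2

def read_identification_code(image_path: str) -> str:
    """Decode the two-dimensional code in the image to be inspected."""
    image = cv2.imread(image_path)
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(image)
    # An empty string means no two-dimensional code was decoded in the image.
    return data
```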
S920, detecting different defects of the image to be detected through a defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is obtained by training by adopting any model training method.
Optionally, a defect detection model for detecting defects in the image is carried in the computer device, the computer device can input the image to be detected of the battery pack to be detected into the defect detection model, and different defects of the image to be detected are detected through the defect detection model, so that whether the corresponding defects in the image to be detected are detected is recorded, and a defect detection result of the battery pack to be detected is determined.
According to the embodiment of the application, the defect detection model training is realized by distinguishing defects and adopting training modes with different required training sample amounts, and the overall required training sample amount is reduced, so that the training time is shortened and the training efficiency is improved.
The image to be detected comprises images acquired under different visual angles. Based on this, in one embodiment, as shown in fig. 10, the step S920 of detecting different defects of the image to be detected by the defect detection model to obtain a defect detection result of the battery pack to be detected includes:
S1010, determining a target image from images acquired at different viewing angles.
In general, multiple surfaces of the battery pack to be inspected need to be inspected, and the image to be inspected includes images collected under multiple different viewing angles to cover the multiple surfaces. Taking a cubic battery pack to be inspected as an example, inspection is generally performed on the four side faces and the upper surface, 5 faces in total. The surface of the battery pack to be inspected that is in contact with the bearing table is the lower surface.
The target image is an image acquired under a target visual angle. The target image may be an image acquired at a global view angle or an image acquired at a local view angle, for example.
Optionally, the computer device determines an image acquired at the target viewing angle from among the images acquired at the different viewing angles as a target image for subsequent detection.
S1020, detecting different defects in the target image through a defect detection model to obtain detection results of the defects.
Optionally, the computer device inputs the determined target image into a defect detection model, and detects different defects of the target image through the defect detection model to obtain a detection result of whether each defect exists in the target image. The defect detection model may be a single model that detects all defects required to be detected in the battery pack to be inspected, or may be multiple models that detect different defects respectively.
S1030, determining the defect detection result of the battery pack to be detected according to the detection result of each defect.
Optionally, after the computer device obtains the detection result of whether the corresponding defect exists in the target image, the detection results of all defects are summarized and used as the defect detection result of the battery pack to be detected.
In the embodiment of the application, different defects in the target image are detected based on the defect detection model, so that the detection result of each defect is obtained, and the defect detection result of the battery pack to be detected is determined, so that multi-directional multi-defect detection of the battery pack to be detected is realized, the detection comprehensiveness is improved, and the comprehensiveness of the obtained defect detection result is correspondingly improved.
In order to improve accuracy of the detection result of each defect, in one embodiment, as shown in fig. 11, S1020, the detecting the different defects in the target image by the defect detection model, to obtain the detection result of each defect, includes:
s1110, aiming at each defect, acquiring an interested region corresponding to the defect in the target image.
The region of interest (Region of Interest, ROI) is used to characterize the location on the battery pack to be inspected where defect detection is performed. Different defects may correspond to different regions of interest, or may correspond to the same region of interest.
Optionally, the computer device may determine, for each defect, the region of interest corresponding to the defect in the target image according to a correspondence between defects and regions of interest, so as to extract the region of interest from the target image. Illustratively, the region of interest corresponding to a part appearance defect is the battery pack region in the target image; the region of interest corresponding to poor character printing is the character region in the target image. The character region is a region with characters in the target image, such as a nameplate or a label on the battery pack to be inspected.
Optionally, the computer device may perform battery pack detection on the target image, determine and crop the battery pack region in the target image, and use the battery pack region as the region of interest corresponding to part appearance defects. Similarly, the computer device can perform character detection on the target image, determine and crop the character region in the target image, and use the character region as the region of interest corresponding to poor character printing.
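Below is a minimal sketch, with assumed names and an assumed defect-to-ROI mapping, of extracting a region of interest by cropping the detected region from the target image:

```python
import numpy as np

# Assumed mapping from defect name to ROI type; the actual correspondence is
# configured in advance for each project.
DEFECT_TO_ROI = {
    "bolt_washer_deformation": "battery_pack_region",
    "label_misprint": "character_region",
}

def crop_roi(target_image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop a region of interest given as (x, y, w, h) from the target image."""
    x, y, w, h = box
    return target_image[y:y + h, x:x + w]
```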
S1120, determining a detection result of the defect according to the defect detection model and the region of interest.
Optionally, after obtaining the region of interest in the target image, the computer device inputs the obtained region of interest into a defect detection model, and detects different defects in the region of interest through the defect detection model to obtain the detection result of each defect.
In the embodiment of the application, aiming at each defect, the detection result of the defect is determined according to the region of interest corresponding to the defect in the target image, so that the detection range is reduced, the detection pertinence is improved, and the detection efficiency and the accuracy of the detection result are improved.
In the case where the target image is an image acquired under the global view angle and the region of interest is a battery pack region, in one embodiment, as shown in fig. 12, S1120, determining the detection result of the defect according to the defect detection model and the region of interest includes:
S1210, inputting the battery pack area into a first detection model to obtain a plurality of defect areas in the battery pack area; the first detection model represents the model, among the defect detection models, used for detecting part appearance defects.
Part appearance defects include various types, for example, bolt washer deformation, case/cover/bracket surface burrs, foreign matter in the lifting lug hole, and the like. The first detection model may be a single model that detects all part appearance defects, or may be multiple models that detect different part appearance defects respectively.
Optionally, the computer device inputs the battery pack area into a first detection model, and detects different appearance defects of the parts in the battery pack area through the first detection model to obtain a plurality of defect areas with appearance defects of the parts in the battery pack area.
S1220, determining the appearance defect detection result of the part of the battery pack to be detected according to the defect areas.
Optionally, after obtaining the plurality of defect areas, the computer device may use the appearance defects of the parts in each defect area as the appearance defect detection result of the parts of the battery pack to be detected, or further process the plurality of defect areas to obtain the appearance defect detection result of the parts of the battery pack to be detected. For example, the computer device may screen the detected plurality of defect areas, and take the appearance defect of the component in the defect area satisfying the screening condition as the appearance defect detection result of the component of the battery pack to be detected.
In the embodiment of the application, the target image is the image acquired under the global visual angle, the region of interest is the battery pack region, the detection of the appearance defects of the parts on the battery pack to be detected is realized based on the first detection model, and the accuracy of the detection result is improved while the detection efficiency is improved.
In order to improve accuracy of the result, in one embodiment, as shown in fig. 13, S1220, determining a detection result of an appearance defect of a part of the battery pack to be inspected according to the defect areas, includes:
s1310, according to the confidence of each defect area, obtaining obvious defect areas and confusing defect areas in the defect areas.
The first detection model detects and obtains the defect areas in the battery pack area and outputs the confidence coefficient of each defect area. The higher the confidence of the defect area is, the greater the probability of representing that the defect area has the appearance defect of the corresponding part is; conversely, the lower the confidence of a defective area, the lower the probability of characterizing the presence of a corresponding part appearance defect in that defective area. The obvious defect area represents a defect area with larger appearance defect probability of the part, and the confusion defect area represents a defect area with smaller appearance defect probability of the part.
Optionally, after obtaining the plurality of defect regions and the confidence levels thereof in the battery pack region, the computer device may classify the plurality of defect regions according to the confidence levels of the defect regions to classify the plurality of defect regions into distinct defect regions and confusing defect regions. The computer device may determine the region type corresponding to the confidence coefficient of each defect region according to the preset correspondence between the confidence coefficient and the region type, where the region type includes an obvious defect region and a confusing defect region, so as to divide the plurality of defect regions into the region type to which the defect region belongs, and obtain the obvious defect region and the confusing defect region in the plurality of defect regions.
S1320, determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusion defect area.
Optionally, after obtaining the obvious defect areas and the confusing defect areas among the plurality of defect areas, the computer device can screen the obvious defect areas with reference to the confusing defect areas, and use the part appearance defects in the screened obvious defect areas as the part appearance defect detection result of the battery pack to be inspected. The computer device can also screen the confusing defect areas, and use the part appearance defects in the obvious defect areas and in the screened confusing defect areas as the part appearance defect detection result of the battery pack to be inspected.
In the embodiment of the application, the confidence coefficient of the defect area is utilized to classify a plurality of defect areas into the obvious defect area and the confusing defect area, so that the obvious defect area and the confusing defect area with different confidence coefficients are considered to comprehensively determine the appearance defect detection result of the part, thereby improving the accuracy of the result.
In practical application, the obvious defect areas and the confusing defect areas in the defect areas can be distinguished according to the magnitude relation between the confidence coefficient and the confidence coefficient threshold value. Based on this, in one embodiment, as shown in fig. 14, S1310, obtaining the distinct defect area and the confusing defect area in the plurality of defect areas according to the confidence of each defect area includes:
s1410, determining the defect area with the confidence degree larger than or equal to the confidence degree threshold value as the obvious defect area.
Alternatively, the computer device may compare the confidence of each defective region with a preset confidence threshold, and determine defective regions having a confidence greater than or equal to the confidence threshold as distinct defective regions.
S1420, determining the defect area with the confidence less than the confidence threshold as a confusing defect area.
Alternatively, the computer device may compare the confidence of each defective region with a preset confidence threshold, and determine defective regions having a confidence less than the confidence threshold as confusing defective regions.
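A minimal sketch of this confidence-based split is given below; the threshold value is illustrative only:

```python
def split_by_confidence(defect_areas, confidences, confidence_threshold=0.5):
    """Split detected defect areas into obvious and confusing groups."""
    obvious, confusing = [], []
    for area, conf in zip(defect_areas, confidences):
        (obvious if conf >= confidence_threshold else confusing).append(area)
    return obvious, confusing
```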
In the embodiment of the application, the obvious defect area and the confusing defect area are distinguished by adopting a mode of comparing the confidence coefficient with the confidence coefficient threshold value, the classification process of the defect area is simplified, the classification time consumption is saved, and the detection efficiency is correspondingly improved.
An obtained confusing defect area may actually contain a defect or may not (i.e., it may be a non-defect area), so the confusing defect areas can be screened to filter out the non-defect areas. Based on this, in one embodiment, as shown in fig. 15, S1320, determining the part appearance defect detection result of the battery pack to be inspected according to the obvious defect areas and the confusing defect areas, includes:
s1510, inputting the confusion defect area into a second detection model for screening to obtain a candidate confusion defect area; the second detection model is a model obtained by correcting the first detection model.
The second detection model is a model obtained by training the first detection model by using the confusion defect sample to correct parameters in the first detection model, and is used for screening the confusion defect area and filtering the non-defect area.
Optionally, the computer device may input each confusing defect region into the second detection model for screening, and filter out non-defect regions to obtain candidate confusing defect regions. Illustratively, the computer device inputs the confusing defect region 1-10 into the second detection model for screening, filters out the non-defect region to obtain the confusing defect region 1-5, and takes the confusing defect region 1-5 as the candidate confusing defect region 1-5.
S1520, determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the candidate confusion defect area.
Optionally, after the obvious defect area and the candidate confusion defect area are obtained, the computer equipment can directly use the appearance defects of the parts in the obvious defect area and the candidate confusion defect area as the detection result of the appearance defects of the parts of the battery pack to be detected, and can also screen the obvious defect area and the candidate confusion defect area so as to use the appearance defects of the parts in the defect area obtained after screening as the detection result of the appearance defects of the parts of the battery pack to be detected.
In the embodiment of the application, the confusion defect area is screened by using the second detection model, the non-defect area is filtered, and the reliability of the candidate confusion defect area obtained after screening as the area where the defect is located is improved, so that the reliability of the detection result of the appearance defect of the part is improved.
And the defect size verification can be carried out on the obvious defect area and the candidate confusing defect area to improve the reliability of the appearance defect detection result of the part. Therefore, in one embodiment, as shown in fig. 16, the determining the detection result of the appearance defect of the component of the battery pack to be inspected according to the obvious defect area and the candidate confusing defect area in S1520 includes:
s1610, obtaining the defect sizes of the obvious defect area and the candidate confusing defect area.
Wherein the defect size may be at least one of a length, a width, or an area of the defect area.
Optionally, after obtaining the obvious defect region and the candidate confusion defect region, the computer device may extract respective region boundaries of the obvious defect region and the candidate confusion defect region in the battery pack region, and obtain the defect size according to the respective region boundaries.
S1620, determining a target defect area in the obvious defect area and the candidate confusing defect area according to each defect size.
Optionally, after obtaining each defect size, the computer device may screen the distinct defect area and the candidate confusing defect area according to each defect size, so as to obtain a target defect area of which the defect size meets the defect requirement in the distinct defect area and the candidate confusing defect area. For example, the computer apparatus may compare each defect size with a preset reference size range, and determine a defect region of the obvious defect region and the candidate confusing defect region, the defect size of which satisfies the reference size range, as the target defect region.
S1630, determining the target defect area as a detection result of the appearance defect of the part of the battery pack to be detected.
Optionally, after the target defect area is obtained, the computer device may directly use the appearance defect of the part in the target defect area as the appearance defect detection result of the part of the battery pack to be detected.
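The following is a minimal sketch, with an assumed reference size range, of the size screening in S1620 that yields the target defect areas:

```python
def select_target_defect_areas(areas, sizes, min_size=4.0, max_size=5000.0):
    """Keep defect areas whose size lies within the reference size range.

    areas: obvious and candidate confusing defect areas (e.g. boxes);
    sizes: matching defect sizes (e.g. area in pixels); the bounds are
    illustrative values only.
    """
    return [a for a, s in zip(areas, sizes) if min_size <= s <= max_size]
```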
In the embodiment of the application, the obvious defect area and the candidate confusion defect area are screened by verifying the two-dimensional defect sizes of the obvious defect area and the candidate confusion defect area, so that the reliability of the detection result of the appearance defect of the part is improved.
To improve accuracy of defect sizes, in one embodiment, as shown in fig. 17, the step S1610 of obtaining defect sizes of the obvious defect area and the candidate confusing defect area includes:
s1710, performing binarization processing on the obvious defect area and the candidate confusion defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image.
Optionally, the computer device may perform binarization processing on the distinct defect area and the candidate confusion defect area according to the gray value of each pixel point in the battery pack area, to obtain a Mask image (Mask) of the distinct defect area, that is, a distinct defect Mask image, and obtain a Mask image of the candidate confusion defect area, that is, a candidate defect Mask image. Illustratively, the apparent defect region and the candidate confusing defect region (hereinafter referred to as "defect region") in the battery pack region are darker, and the gray value is correspondingly smaller. The binarization processing process may be that the computer device selects the defect area by using the target frames, sets 1 for the gray value of the pixel point with the gray value greater than or equal to the gray threshold value in each target frame, and sets 0 for the gray value of the pixel point with the gray value less than the gray threshold value, so as to obtain the mask image of the defect area selected by each target frame.
S1720, taking the mask size of the obvious defect mask image as the defect size of the obvious defect area, and taking the mask size of the candidate defect mask image as the defect size of the candidate confusing defect area.
The mask size of the mask image is the size of the hollowed-out area in the mask image, and can be at least one of length, width or area.
Optionally, after obtaining the obvious defect mask image and the candidate defect mask image, the computer device may extract the mask size of the obvious defect mask image as the defect size of the obvious defect area, and similarly extract the mask size of the candidate defect mask image as the defect size of the candidate confusing defect area.
In the embodiment of the application, the defect size is determined by using the defect mask image obtained by binarization processing, and the binarization processing can obtain the finer and accurate region boundary of the defect region, thereby correspondingly improving the fineness and accuracy of the obtained defect size.
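Below is a minimal sketch of the binarization and mask-size computation, assuming OpenCV/NumPy and an illustrative gray threshold; darker pixels are treated as the defect ("hollowed-out") part of the mask:

```python
import cv2
import numpy as np

def defect_mask_size(pack_region: np.ndarray, box: tuple, gray_threshold=128):
    """Return the mask area (in pixels) of one defect region.

    pack_region: grayscale battery pack area (uint8);
    box: (x, y, w, h) of an obvious or candidate confusing defect region.
    """
    x, y, w, h = box
    crop = pack_region[y:y + h, x:x + w]
    # Pixels above the gray threshold become 255, darker pixels become 0.
    _, mask = cv2.threshold(crop, gray_threshold, 255, cv2.THRESH_BINARY)
    # The darker defect pixels end up as 0; their count is used as the mask size.
    return int(np.count_nonzero(mask == 0))
```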
In the case where the target image is an image acquired under a local view angle and the region of interest is a character region, in one embodiment, as shown in fig. 18, S1120, determining a detection result of the defect according to the defect detection model and the region of interest includes:
S1810, inputting the character area into a third detection model; the third detection model represents the model, among the detection models, used for detecting poor character printing.
Character printing defects include various types, for example, unclear label/nameplate silk-screen content, ghosting, overlap, and the like. The third detection model may be one or more models to detect all character printing defects.
S1820, determining the character printing defect detection result of the battery pack to be inspected through the third detection model.
Optionally, the computer device inputs the character area into the third detection model, detects different character printing defects in the character area through the third detection model to obtain a detection result of whether each character printing defect exists in the character area, and determines the character printing defect detection result of the battery pack to be inspected.
In the embodiment of the application, the target image is the image acquired under the local visual angle, the interested area is the character area, the detection of poor character printing on the battery pack to be detected is realized based on the third detection model, and the accuracy of the detection result is improved while the detection efficiency is improved.
The third detection model may be used to detect the character content and the display state of the character area. Based on this, in one embodiment, as shown in fig. 19, determining, by the third detection model, the character printing defect detection result of the battery pack to be inspected includes:
S1910, detecting character content and display states of the character areas through a third detection model.
The character content is used for representing characters included in the character area, and the display state is used for representing the display effect of the characters in the character area. For example, the display state may include clear and blurred.
Optionally, the computer device inputs the character region into a third detection model, recognizes the characters in the character region through the third detection model, obtains the character content, and detects the display state of the obtained characters.
S1920, determining a content detection result from the character content, and determining a display detection result from the display state.
Optionally, after obtaining the character content of the character area, the computer device may directly use the character content as a content detection result, or may further process or determine the character content to obtain the content detection result. Similarly, after the display state in the character area is obtained, the computer equipment can directly take the display state as a display detection result, and can further process or judge the display state to obtain the display detection result.
S1930, determining the content detection result and the display detection result as the character printing defect detection result of the battery pack to be detected.
Optionally, the computer device may aggregate the content detection result and the display detection result, or may comprehensively determine the character printing defect detection result of the battery pack to be inspected according to the content detection result and the display detection result. For example, the computer device may determine that the character printing defect detection result of the battery pack to be inspected is qualified if both the content detection result and the display detection result are qualified, and determine that the character printing defect detection result is unqualified if either the content detection result or the display detection result is unqualified.
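As a minimal sketch of the pass/fail rule described above (names assumed):

```python
def character_printing_result(content_qualified: bool, display_qualified: bool) -> str:
    """Combine the content and display detection results into one verdict."""
    return "qualified" if content_qualified and display_qualified else "unqualified"
```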
In the embodiment of the application, the character area is not only checked for character content, but also the display state is detected, so that the detection of defects in multiple aspects of the appearance of the battery pack to be detected is realized, and the diversity and the comprehensiveness of the detection are improved.
The content detection result is used for representing whether the character content of the character area is matched with a pre-stored reference character or not. Therefore, in one embodiment, as shown in fig. 20, determining the content detection result according to the character content in S1920 includes:
And S2010, under the condition that the character content is matched with the reference character, determining that the content detection result is qualified.
That the character content matches the reference character characterizes that the character content and the reference character are identical in both content and order.
Optionally, after obtaining the character content of the character area, the computer device may obtain a pre-stored reference character, and match the character content with the reference character in terms of content and sequence, so as to determine that the character content matches with the reference character when determining that the character content is identical with the content and sequence of the reference character, and correspondingly determine that the content detection result is qualified.
And S2020, determining that the content detection result is unqualified under the condition that the character content is not matched with the reference character.
That the character content does not match the reference character characterizes that the character content and the reference character differ in content and/or order.
Optionally, the computer device matches the character content against the reference character in terms of content and order, determines that the character content does not match the reference character when their content or order differs, and accordingly determines that the content detection result is unqualified.
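A minimal sketch of the content check, where the recognized character content must equal the pre-stored reference characters in both content and order (names assumed):

```python
def content_detection_result(character_content: str, reference_character: str) -> str:
    """Qualified only if content and order are identical to the reference."""
    return "qualified" if character_content == reference_character else "unqualified"
```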
In the embodiment of the application, the character content is matched with the reference character to determine the content detection result, the process is simple and easy to realize, and the detection efficiency is correspondingly improved.
The third detection model includes a ghost detection unit and an overlap detection unit, and the display state of the character area includes whether ghost exists and whether overlap exists, respectively. Based on this, in one of the embodiments, as shown in fig. 21, detecting the display state in the character area by the third detection model in S1910 described above includes:
S2110, inputting the character region into the ghost detection unit to determine the character ghost state.
The ghost detection unit is a sub-model for ghost detection in the third detection model. Character ghost states include the presence of ghost and the absence of ghost.
Alternatively, the computer apparatus may input the character region into the ghost detection unit to determine whether a character in the character region has a ghost, thereby obtaining a character ghost state of the character region.
S2120, inputting the character region into the overlap detection unit to determine the character overlap state.
The overlap detection unit is a sub-model for overlap detection in the third detection model. The character overlap state includes the presence of overlap and the absence of overlap.
Alternatively, the computer apparatus may input the character region into the overlap detection unit to determine whether there is an overlap of characters in the character region, thereby obtaining a character overlap state of the character region.
S2130, the character ghost state and the character overlap state are determined as the display state of the character area.
Optionally, after obtaining the character ghost state and the character overlapping state of the character area, the computer device may summarize the character ghost state and the character overlapping state, and use the resultant state as the display state of the character area, or comprehensively determine the display state of the character area according to the character ghost state and the character overlapping state. For example, the computer device may determine that the display state of the character area is acceptable if the character ghost state is no ghost and the character overlap state is no overlap; and if the character ghost state is ghost or the character overlap state is overlap, determining that the display state of the character region is unqualified.
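A minimal sketch of combining the two sub-detections into the display state, following the rule described above (names assumed):

```python
def display_state(has_ghost: bool, has_overlap: bool) -> str:
    """Qualified only if the characters show neither ghosting nor overlap."""
    return "unqualified" if has_ghost or has_overlap else "qualified"
```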
In the embodiment of the application, the character region is not only checked for the character ghost state, but also the character overlapping state is detected, so that the detection for the multi-aspect display state on the character region is realized, and the diversity and the comprehensiveness of the detection are improved.
In one embodiment, as shown in fig. 22, the present application further provides a defect detection method, which includes the following steps:
s2201, acquiring a to-be-detected image of a to-be-detected battery pack; the image to be detected comprises images collected under different visual angles;
S2202, determining a target image from images acquired at different viewing angles;
s2203, aiming at each defect, acquiring an interested region corresponding to the defect in the target image;
S2204, inputting a battery pack area into a first detection model to obtain a plurality of defect areas in the battery pack area under the condition that a target image is an image acquired under a global visual angle and the area of interest is the battery pack area; the first detection model represents a model for detecting the appearance defects of the parts in the defect detection model;
s2205, determining the defect area with the confidence coefficient larger than or equal to the confidence coefficient threshold value as an obvious defect area; determining a defect region with the confidence coefficient smaller than a confidence coefficient threshold value as a confusing defect region;
s2206, inputting the confusion defect area into a second detection model for screening to obtain a candidate confusion defect area; the second detection model is a model after the first detection model is corrected;
S2207, performing binarization processing on the obvious defect area and the candidate confusion defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image;
S2208, taking the mask size of the obvious defect mask image as the defect size of the obvious defect area, and taking the mask size of the candidate defect mask image as the defect size of the candidate confusing defect area;
s2209, determining a target defect area in the obvious defect area and the candidate confusing defect area according to the sizes of the defects;
S2210, determining the target defect area as a detection result of the appearance defect of the part of the battery pack to be detected.
As shown in fig. 23, the method further includes:
S2301, inputting a character area into a third detection model under the condition that the target image is an image acquired under a local visual angle and the region of interest is the character area; the third detection model represents a model for detecting character printing failure in the detection model and comprises a ghost detection unit and an overlapping detection unit;
S2302, inputting the character area into a ghost detection unit to determine the state of the character ghost;
s2303, inputting the character area into the overlap detection unit to determine the character overlap state;
s2304, determining the character ghost state and the character overlapping state as the display state of the character area;
s2305, detecting character content of the character area through a third detection model;
S2306, determining that the content detection result is qualified under the condition that the character content is matched with the reference character; under the condition that the character content is not matched with the reference character, determining that the content detection result is unqualified;
s2307, determining the content detection result and the display detection result as character printing defect detection results of the battery pack to be detected;
s2308, determining the detection result of the appearance defect of the part and the detection result of the character printing defect as the defect detection result of the battery pack to be detected.
As shown in fig. 24, the method further includes a process of training to obtain a defect detection model, specifically including the following steps:
S2401, aiming at each defect, acquiring a defect image and a reference image of the defect; defects exist in the defect image, and defects do not exist in the reference image;
s2402, acquiring a gray level difference value between a defect image of the defect and a reference image;
s2403, clustering gray level difference values which are larger than or equal to a first threshold value into a feature set;
S2404, clustering gray difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold;
s2405, clustering gray difference values smaller than a first threshold and larger than or equal to a second threshold into a feature set;
S2406, correspondingly dividing defects in each feature set into a defect class;
S2407, aiming at each defect class, obtaining the defect clarity level of the defect class;
S2408, determining that the target training mode corresponding to the defect class is an unsupervised training mode in the case that the defect clarity level of the defect class is a high level;
S2409, determining that the target training mode corresponding to the defect class is a small sample training mode in the case that the defect clarity level of the defect class is a medium level;
S2410, determining that the target training mode corresponding to the defect class is a supervised training mode in the case that the defect clarity level of the defect class is a low level;
S2411, training a defect detection model in a target training mode to obtain a defect detection model corresponding to the defect class.
The specific process in the above steps can be referred to the relevant steps in the foregoing embodiments, and will not be repeated here.
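As a minimal sketch of the gray-difference clustering in S2403-S2405 (the two threshold values are assumptions; the second threshold is smaller than the first):

```python
def cluster_gray_differences(gray_diffs, first_threshold=60.0, second_threshold=20.0):
    """Split gray level difference values into three feature sets.

    second_threshold must be smaller than first_threshold; both values here
    are illustrative only.
    """
    high_set = [d for d in gray_diffs if d >= first_threshold]
    low_set = [d for d in gray_diffs if d < second_threshold]
    middle_set = [d for d in gray_diffs if second_threshold <= d < first_threshold]
    return high_set, middle_set, low_set
```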
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
In one embodiment, as shown in fig. 25, there is provided a model training apparatus comprising: a feature acquisition module 2501, a defect classification module 2502, and a model training module 2503;
the feature acquisition module 2501 is configured to acquire defect feature information of a plurality of defects to be detected of the battery pack;
the defect classification module 2502 is configured to perform clustering according to defect feature information of each defect, and classify a plurality of defects into at least one defect class;
Model training module 2503 is configured to perform defect detection model training on each defect class by using different training modes, so as to obtain a defect detection model corresponding to each defect class; different training modes require different training sample amounts; the defect feature information includes a defect clarity level, and the higher the defect clarity level, the smaller the required training sample amount.
In one embodiment, the feature acquisition module 2501 comprises:
An image acquisition sub-module for acquiring a defect image and a reference image of the defect for each defect; defects exist in the defect image, and defects do not exist in the reference image;
and the characteristic determining sub-module is used for determining defect characteristic information of the defects according to the defect images and the reference images.
In one embodiment, the feature determination submodule includes:
a difference determining subunit, configured to obtain a gray level difference value between the defect image of the defect and the reference image;
And the information determination subunit is used for determining the gray level difference value as defect characteristic information of the defect.
In one embodiment, defect partitioning module 2502 includes:
The clustering sub-module is used for carrying out clustering processing on defect characteristic information of each defect to obtain a plurality of characteristic sets;
and the dividing sub-module is used for correspondingly dividing the defects in each feature set into a defect class.
In one embodiment, the defect feature information includes a gray scale difference value; the clustering submodule comprises:
a first aggregation unit, configured to cluster gray level difference values greater than or equal to a first threshold value into a feature set;
a second aggregation unit, configured to cluster gray level difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold;
And the third aggregation unit is used for clustering gray difference values which are smaller than the first threshold value and larger than or equal to the second threshold value into a feature set.
In one embodiment, model training module 2503 comprises:
the degree submodule is used for obtaining the defect clarity level of each defect class;
The mode submodule is used for determining the target training mode corresponding to the defect class according to the defect clarity level of the defect class;
And the training sub-module is used for training the defect detection model in a target training mode to obtain a defect detection model corresponding to the defect class.
In one embodiment, the mode submodule includes:
the first mode unit is used for determining that the target training mode corresponding to the defect class is an unsupervised training mode in the case that the defect clarity level of the defect class is a high level;
The second mode unit is used for determining that the target training mode corresponding to the defect class is a small sample training mode in the case that the defect clarity level of the defect class is a medium level;
And the third mode unit is used for determining that the target training mode corresponding to the defect class is a supervised training mode in the case that the defect clarity level of the defect class is a low level.
In one embodiment, as shown in fig. 26, there is provided a defect detecting apparatus including: an image acquisition module 2601 and a defect detection module 2602;
The image acquisition module 2601 is used for acquiring a to-be-detected image of the to-be-detected battery pack;
the defect detection module 2602 is configured to detect different defects of the image to be detected through a defect detection model, so as to obtain a defect detection result of the battery pack to be detected; the defect detection model is obtained by training by adopting any model training method.
In one embodiment, the image to be inspected comprises images acquired at different viewing angles; the defect detection module 2602 includes:
The target submodule is used for determining a target image from images acquired under different visual angles;
the detection sub-module is used for detecting different defects in the target image through the defect detection model to obtain detection results of the defects;
and the result submodule is used for determining the defect detection result of the battery pack to be detected according to the detection result of each defect.
In one embodiment, the detection submodule includes:
the region unit is used for acquiring a region of interest corresponding to the defect in the target image aiming at each defect;
and the result unit is used for determining the detection result of the defect according to the defect detection model and the region of interest.
In one embodiment, the target image is an image acquired under a global view angle, and the region of interest is a battery pack region; the result unit includes:
The first detection subunit is used for inputting the battery pack area into the first detection model to obtain a plurality of defect areas in the battery pack area; the first detection model represents a model for detecting the appearance defects of the parts in the defect detection model;
And the first result subunit is used for determining the appearance defect detection result of the part of the battery pack to be detected according to the defect areas.
In one embodiment, the first result subunit comprises:
The confidence coefficient micro unit is used for acquiring obvious defect areas and confusing defect areas in the plurality of defect areas according to the confidence coefficient of each defect area;
And the result micro unit is used for determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area.
In one embodiment, the confidence micro-unit comprises:
a first micro unit for determining a defective area with a confidence level greater than or equal to a confidence level threshold as a distinct defective area;
And a second micro-unit for determining a defective area with a confidence level less than a confidence level threshold as a confusing defective area.
In one embodiment, the results microcell comprises:
The screening micro unit is used for inputting the confusion defect area into the second detection model for screening to obtain a candidate confusion defect area; the second detection model is a model after the first detection model is corrected;
And the detection micro unit is used for determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the candidate confusion defect area.
In one embodiment, the detection micro-cell includes:
A size micro unit for obtaining defect sizes of the obvious defect area and the candidate confusing defect area;
A target micro unit for determining a target defect area in the obvious defect area and the candidate confusing defect area according to each defect size;
And the defect micro unit is used for determining the target defect area as a detection result of the appearance defect of the part of the battery pack to be detected.
In one embodiment, a size microcell includes:
the binarization micro unit is used for performing binarization processing on the obvious defect area and the candidate confusion defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image;
determining a micro unit, wherein the micro unit is used for taking the mask size of the obvious defect mask image as the defect size of the obvious defect area and taking the mask size of the candidate defect mask image as the defect size of the candidate confusing defect area.
In one embodiment, the target image is an image acquired under a local view angle, and the region of interest is a character region; the result unit includes:
a third detection subunit for inputting the character region into a third detection model; the third detection model represents a model for detecting poor printing of characters in the detection model;
And the third result subunit is used for determining the character misprinting detection result of the battery pack to be detected through the third detection model.
In one embodiment, the third result subunit comprises:
A first character micro unit for detecting character contents and display states of the character areas through a third detection model;
a second character micro unit for determining a content detection result according to the character content and determining a display detection result according to the display state;
and the third character micro unit is used for determining the content detection result and the display detection result as the character printing defect detection result of the battery pack to be detected.
In one embodiment, the second character micro-unit includes:
A qualified micro unit for determining that the content detection result is qualified when the character content is matched with the reference character;
And the disqualified micro unit is used for determining that the content detection result is disqualified under the condition that the character content is not matched with the reference character.
In one embodiment, the third detection model includes a ghost detection unit and an overlap detection unit; the first character micro-unit includes:
a first state micro unit for inputting the character area into the ghost detection unit to determine the character ghost state;
a second state micro unit for inputting the character area into the overlap detection unit to determine the character overlap state;
And a third state micro unit for determining the character ghost state and the character overlapping state as the display state of the character area.
The respective modules in the above-described model training apparatus and defect detection apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
obtaining defect characteristic information of various defects to be detected of the battery pack; clustering according to the defect characteristic information of each defect, and dividing the plurality of defects into at least one defect class; performing defect detection model training on each defect class by adopting different training modes to obtain a defect detection model corresponding to each defect class; different training modes require different training sample amounts; the defect characteristic information includes a defect clarity level, and the higher the defect clarity level, the smaller the required training sample amount.
In one embodiment, the processor when executing the computer program further performs the steps of:
Obtaining a defect image and a reference image of the defect aiming at each defect; defects exist in the defect image, and defects do not exist in the reference image; and determining defect characteristic information of the defects according to the defect image and the reference image.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a gray level difference value between a defect image of the defect and a reference image; and determining the gray level difference value as defect characteristic information of the defect.
In one embodiment, the processor when executing the computer program further performs the steps of:
Clustering defect characteristic information of each defect to obtain a plurality of characteristic sets; and correspondingly dividing the defects in each feature set into a defect class.
In one embodiment, the defect feature information includes a gray scale difference value; the processor when executing the computer program also implements the steps of:
clustering gray scale difference values greater than or equal to a first threshold value into a feature set; clustering gray scale difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold; the gray difference values smaller than the first threshold and larger than or equal to the second threshold are clustered into a feature set.
In one embodiment, the processor when executing the computer program further performs the steps of:
Aiming at each defect class, obtaining the defect clarity level of the defect class; determining the target training mode corresponding to the defect class according to the defect clarity level of the defect class; training the defect detection model in the target training mode to obtain the defect detection model corresponding to the defect class.
In one embodiment, the processor when executing the computer program further performs the steps of:
In the case that the defect clarity level of the defect class is a high level, determining that the target training mode corresponding to the defect class is an unsupervised training mode; in the case that the defect clarity level of the defect class is a medium level, determining that the target training mode corresponding to the defect class is a small sample training mode; and in the case that the defect clarity level of the defect class is a low level, determining that the target training mode corresponding to the defect class is a supervised training mode.
In one embodiment, the processor when executing the computer program further performs the steps of:
Acquiring a to-be-detected image of a to-be-detected battery pack; detecting different defects of the image to be detected through a defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is obtained by training by adopting any model training method.
In one embodiment, the image to be inspected comprises images acquired at different viewing angles; the processor when executing the computer program also implements the steps of:
Determining a target image from images acquired at different viewing angles; detecting different defects in the target image through a defect detection model to obtain detection results of the defects; and determining the defect detection result of the battery pack to be detected according to the detection result of each defect.
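As a hedged illustration of how images acquired at different viewing angles might be routed to the corresponding detectors and the per-defect results merged, a possible sketch is given below; the dictionary structure and the qualified/unqualified convention are assumptions for illustration.

def detect_battery_pack(images_by_view: dict, detectors: dict) -> dict:
    # detectors maps a defect name to a (view, detect_fn) pair, e.g. ("global", part_appearance_fn).
    per_defect = {}
    for defect_name, (view, detect_fn) in detectors.items():
        target_image = images_by_view[view]  # image acquired at the view relevant to this defect
        per_defect[defect_name] = detect_fn(target_image)
    overall = "qualified" if all(r == "qualified" for r in per_defect.values()) else "unqualified"
    return {"per_defect": per_defect, "overall": overall}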
In one embodiment, the processor when executing the computer program further performs the steps of:
For each defect, acquiring a region of interest corresponding to the defect in the target image; and determining the detection result of the defect according to the defect detection model and the region of interest.
In one embodiment, the target image is an image acquired under a global view angle, and the region of interest is a battery pack region; the processor when executing the computer program also implements the steps of:
Inputting the battery pack area into a first detection model to obtain a plurality of defect areas in the battery pack area; the first detection model represents a model for detecting the appearance defects of the parts in the defect detection model; and determining the appearance defect detection result of the part of the battery pack to be detected according to the defect areas.
In one embodiment, the processor when executing the computer program further performs the steps of:
according to the confidence of each defect area, obtaining obvious defect areas and confusing defect areas in a plurality of defect areas; and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining a defect area with the confidence coefficient greater than or equal to the confidence coefficient threshold value as an obvious defect area; and determining the defect area with the confidence coefficient smaller than the confidence coefficient threshold value as a confusing defect area.
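The confidence-based split described above could look like the following sketch; the region data structure and the 0.5 default threshold are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class DefectRegion:
    bbox: tuple          # (x, y, width, height)
    confidence: float

def split_by_confidence(regions, threshold: float = 0.5):
    obvious = [r for r in regions if r.confidence >= threshold]    # obvious defect areas
    confusing = [r for r in regions if r.confidence < threshold]   # confusing defect areas
    return obvious, confusing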
In one embodiment, the processor when executing the computer program further performs the steps of:
Inputting the confusing defect area into a second detection model for screening to obtain a candidate confusing defect area; the second detection model is a corrected version of the first detection model; and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the candidate confusing defect area.
In one embodiment, the processor when executing the computer program further performs the steps of:
Obtaining the defect sizes of the obvious defect area and the candidate confusing defect area; determining a target defect area from among the obvious defect area and the candidate confusing defect area according to each defect size; and determining the target defect area as the detection result of the appearance defect of the part of the battery pack to be detected.
In one embodiment, the processor when executing the computer program further performs the steps of:
Performing binarization processing on the obvious defect area and the candidate confusing defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image; the mask size of the obvious defect mask image is taken as the defect size of the obvious defect area, and the mask size of the candidate defect mask image is taken as the defect size of the candidate confusing defect area.
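A possible sketch of this binarization step is shown below, assuming the defect area is given as a bounding box inside the battery pack image and using Otsu thresholding as one example binarization; the mask pixel count is then taken as the defect size. These choices are assumptions, not details from this application.

import cv2
import numpy as np

def defect_size_from_mask(pack_region: np.ndarray, bbox: tuple) -> int:
    # Binarize the defect area inside the battery pack image; the mask pixel count is the defect size.
    x, y, w, h = bbox
    crop = pack_region[y:y + h, x:x + w]
    if crop.ndim == 3:
        crop = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return int(np.count_nonzero(mask))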
In one embodiment, the target image is an image acquired under a local view angle, and the region of interest is a character region; the processor when executing the computer program also implements the steps of:
Inputting the character area into a third detection model; the third detection model represents a model, in the defect detection model, for detecting character printing defects; and determining the character printing defect detection result of the battery pack to be detected through the third detection model.
In one embodiment, the processor when executing the computer program further performs the steps of:
detecting character content and display state of the character area through a third detection model; determining a content detection result according to the character content, and determining a display detection result according to the display state; and determining the content detection result and the display detection result as character printing defect detection results of the battery pack to be detected.
In one embodiment, the processor when executing the computer program further performs the steps of:
Under the condition that the character content is matched with the reference character, determining that the content detection result is qualified; and under the condition that the character content is not matched with the reference character, determining that the content detection result is unqualified.
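The content check described above amounts to comparing the recognized characters against the reference characters; a minimal sketch, assuming an exact string match, is:

def content_detection_result(recognized_characters: str, reference_characters: str) -> str:
    # Exact string comparison; a production system might normalize whitespace or OCR artifacts.
    return "qualified" if recognized_characters.strip() == reference_characters.strip() else "unqualified"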
In one embodiment, the third detection model includes a ghost detection unit and an overlap detection unit; the processor when executing the computer program also implements the steps of:
Inputting the character area into a ghost detection unit to determine the state of the character ghost; inputting the character area into an overlap detection unit to determine the character overlap state; the character ghost state and the character overlap state are determined as the display state of the character region.
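As a sketch of how the outputs of the ghost detection unit and the overlap detection unit might be combined into the display state of the character region, under the assumption that each unit returns a boolean:

from dataclasses import dataclass

@dataclass
class DisplayState:
    has_ghosting: bool   # output of the ghost detection unit
    has_overlap: bool    # output of the overlap detection unit

def detect_display_state(character_region, ghost_unit, overlap_unit) -> DisplayState:
    return DisplayState(has_ghosting=ghost_unit(character_region),
                        has_overlap=overlap_unit(character_region))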
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
obtaining defect characteristic information of a plurality of defects to be detected of the battery pack; performing clustering according to the defect characteristic information of each defect, and dividing the plurality of defects into at least one defect class; performing defect detection model training on each defect class by adopting a different training mode, to obtain a defect detection model corresponding to each defect class; different training modes require different training sample sizes; the defect characteristic information includes a defect obviousness level, and the higher the defect obviousness level, the smaller the training sample size required.
In one embodiment, the computer program when executed by the processor further performs the steps of:
For each defect, obtaining a defect image and a reference image of the defect; the defect is present in the defect image and absent from the reference image; and determining defect characteristic information of the defect according to the defect image and the reference image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a gray level difference value between a defect image of the defect and a reference image; and determining the gray level difference value as defect characteristic information of the defect.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Clustering defect characteristic information of each defect to obtain a plurality of characteristic sets; and correspondingly dividing the defects in each feature set into a defect class.
In one embodiment, the defect feature information includes a gray scale difference value; the computer program when executed by the processor also performs the steps of:
clustering gray scale difference values greater than or equal to a first threshold value into a feature set; clustering gray scale difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold; the gray difference values smaller than the first threshold and larger than or equal to the second threshold are clustered into a feature set.
In one embodiment, the computer program when executed by the processor further performs the steps of:
For each defect class, obtaining the defect obviousness level of the defect class; determining a target training mode corresponding to the defect class according to the defect obviousness level of the defect class; and training the defect detection model in the target training mode to obtain the defect detection model corresponding to the defect class.
In one embodiment, the computer program when executed by the processor further performs the steps of:
In a case where the defect obviousness level of the defect class is high, determining that the target training mode corresponding to the defect class is an unsupervised training mode; in a case where the defect obviousness level of the defect class is medium, determining that the target training mode corresponding to the defect class is a small-sample training mode; and in a case where the defect obviousness level of the defect class is low, determining that the target training mode corresponding to the defect class is a supervised training mode.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Acquiring a to-be-detected image of a to-be-detected battery pack; detecting different defects of the image to be detected through a defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is obtained by training by adopting any model training method.
In one embodiment, the image to be inspected comprises images acquired at different viewing angles; the computer program when executed by the processor also performs the steps of:
Determining a target image from images acquired at different viewing angles; detecting different defects in the target image through a defect detection model to obtain detection results of the defects; and determining the defect detection result of the battery pack to be detected according to the detection result of each defect.
In one embodiment, the computer program when executed by the processor further performs the steps of:
For each defect, acquiring a region of interest corresponding to the defect in the target image; and determining the detection result of the defect according to the defect detection model and the region of interest.
In one embodiment, the target image is an image acquired under a global view angle, and the region of interest is a battery pack region; the computer program when executed by the processor also performs the steps of:
Inputting the battery pack area into a first detection model to obtain a plurality of defect areas in the battery pack area; the first detection model represents a model for detecting the appearance defects of the parts in the defect detection model; and determining the appearance defect detection result of the part of the battery pack to be detected according to the defect areas.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to the confidence of each defect area, obtaining obvious defect areas and confusing defect areas in a plurality of defect areas; and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a defect area with the confidence coefficient greater than or equal to the confidence coefficient threshold value as an obvious defect area; and determining the defect area with the confidence coefficient smaller than the confidence coefficient threshold value as a confusing defect area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Inputting the confusing defect area into a second detection model for screening to obtain a candidate confusing defect area; the second detection model is a corrected version of the first detection model; and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the candidate confusing defect area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Obtaining the defect sizes of the obvious defect area and the candidate confusing defect area; determining a target defect area from among the obvious defect area and the candidate confusing defect area according to each defect size; and determining the target defect area as the detection result of the appearance defect of the part of the battery pack to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Performing binarization processing on the obvious defect area and the candidate confusing defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image; the mask size of the obvious defect mask image is taken as the defect size of the obvious defect area, and the mask size of the candidate defect mask image is taken as the defect size of the candidate confusing defect area.
In one embodiment, the target image is an image acquired under a local view angle, and the region of interest is a character region; the computer program when executed by the processor also performs the steps of:
Inputting the character area into a third detection model; the third detection model represents a model, in the defect detection model, for detecting character printing defects; and determining the character printing defect detection result of the battery pack to be detected through the third detection model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting character content and display state of the character area through a third detection model; determining a content detection result according to the character content, and determining a display detection result according to the display state; and determining the content detection result and the display detection result as character printing defect detection results of the battery pack to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Under the condition that the character content is matched with the reference character, determining that the content detection result is qualified; and under the condition that the character content is not matched with the reference character, determining that the content detection result is unqualified.
In one embodiment, the third detection model includes a ghost detection unit and an overlap detection unit; the computer program when executed by the processor also performs the steps of:
Inputting the character area into a ghost detection unit to determine the state of the character ghost; inputting the character area into an overlap detection unit to determine the character overlap state; the character ghost state and the character overlap state are determined as the display state of the character region.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
obtaining defect characteristic information of a plurality of defects to be detected of the battery pack; performing clustering according to the defect characteristic information of each defect, and dividing the plurality of defects into at least one defect class; performing defect detection model training on each defect class by adopting a different training mode, to obtain a defect detection model corresponding to each defect class; different training modes require different training sample sizes; the defect characteristic information includes a defect obviousness level, and the higher the defect obviousness level, the smaller the training sample size required.
In one embodiment, the computer program when executed by the processor further performs the steps of:
For each defect, obtaining a defect image and a reference image of the defect; the defect is present in the defect image and absent from the reference image; and determining defect characteristic information of the defect according to the defect image and the reference image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a gray level difference value between a defect image of the defect and a reference image; and determining the gray level difference value as defect characteristic information of the defect.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Clustering defect characteristic information of each defect to obtain a plurality of characteristic sets; and correspondingly dividing the defects in each feature set into a defect class.
In one embodiment, the defect feature information includes a gray scale difference value; the computer program when executed by the processor also performs the steps of:
clustering gray scale difference values greater than or equal to a first threshold value into a feature set; clustering gray scale difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold; the gray difference values smaller than the first threshold and larger than or equal to the second threshold are clustered into a feature set.
In one embodiment, the computer program when executed by the processor further performs the steps of:
For each defect class, obtaining the defect obviousness level of the defect class; determining a target training mode corresponding to the defect class according to the defect obviousness level of the defect class; and training the defect detection model in the target training mode to obtain the defect detection model corresponding to the defect class.
In one embodiment, the computer program when executed by the processor further performs the steps of:
In a case where the defect obviousness level of the defect class is high, determining that the target training mode corresponding to the defect class is an unsupervised training mode; in a case where the defect obviousness level of the defect class is medium, determining that the target training mode corresponding to the defect class is a small-sample training mode; and in a case where the defect obviousness level of the defect class is low, determining that the target training mode corresponding to the defect class is a supervised training mode.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Acquiring a to-be-detected image of a to-be-detected battery pack; detecting different defects of the image to be detected through a defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is obtained by training by adopting any model training method.
In one embodiment, the image to be inspected comprises images acquired at different viewing angles; the computer program when executed by the processor also performs the steps of:
Determining a target image from images acquired at different viewing angles; detecting different defects in the target image through a defect detection model to obtain detection results of the defects; and determining the defect detection result of the battery pack to be detected according to the detection result of each defect.
In one embodiment, the computer program when executed by the processor further performs the steps of:
For each defect, acquiring a region of interest corresponding to the defect in the target image; and determining the detection result of the defect according to the defect detection model and the region of interest.
In one embodiment, the target image is an image acquired under a global view angle, and the region of interest is a battery pack region; the computer program when executed by the processor also performs the steps of:
Inputting the battery pack area into a first detection model to obtain a plurality of defect areas in the battery pack area; the first detection model represents a model for detecting the appearance defects of the parts in the defect detection model; and determining the appearance defect detection result of the part of the battery pack to be detected according to the defect areas.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to the confidence of each defect area, obtaining obvious defect areas and confusing defect areas in a plurality of defect areas; and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a defect area with the confidence coefficient greater than or equal to the confidence coefficient threshold value as an obvious defect area; and determining the defect area with the confidence coefficient smaller than the confidence coefficient threshold value as a confusing defect area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Inputting the confusing defect area into a second detection model for screening to obtain a candidate confusing defect area; the second detection model is a corrected version of the first detection model; and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the candidate confusing defect area.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Obtaining the defect sizes of the obvious defect area and the candidate confusing defect area; determining a target defect area from among the obvious defect area and the candidate confusing defect area according to each defect size; and determining the target defect area as the detection result of the appearance defect of the part of the battery pack to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Performing binarization processing on the obvious defect area and the candidate confusing defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image; the mask size of the obvious defect mask image is taken as the defect size of the obvious defect area, and the mask size of the candidate defect mask image is taken as the defect size of the candidate confusing defect area.
In one embodiment, the target image is an image acquired under a local view angle, and the region of interest is a character region; the computer program when executed by the processor also performs the steps of:
Inputting the character area into a third detection model; the third detection model represents a model, in the defect detection model, for detecting character printing defects; and determining the character printing defect detection result of the battery pack to be detected through the third detection model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting character content and display state of the character area through a third detection model; determining a content detection result according to the character content, and determining a display detection result according to the display state; and determining the content detection result and the display detection result as character printing defect detection results of the battery pack to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Under the condition that the character content is matched with the reference character, determining that the content detection result is qualified; and under the condition that the character content is not matched with the reference character, determining that the content detection result is unqualified.
In one embodiment, the third detection model includes a ghost detection unit and an overlap detection unit; the computer program when executed by the processor also performs the steps of:
Inputting the character area into a ghost detection unit to determine the state of the character ghost; inputting the character area into an overlap detection unit to determine the character overlap state; the character ghost state and the character overlap state are determined as the display state of the character region.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this description.
The foregoing examples illustrate only a few embodiments of the application and are described in relative detail, but they are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the concept of the application, and all of these fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (24)

1. A method of model training, the method comprising:
obtaining defect characteristic information of a plurality of defects to be detected of the battery pack;
performing clustering according to the defect characteristic information of each of the defects, and dividing the plurality of defects into at least one defect class;
performing defect detection model training on each defect class by adopting a different training mode, to obtain a defect detection model corresponding to each defect class; wherein different training modes require different training sample sizes; the defect characteristic information comprises a defect obviousness level, and the higher the defect obviousness level, the smaller the training sample size required.
2. The method according to claim 1, wherein the acquiring defect characteristic information of a plurality of defects to be detected of the battery pack includes:
for each defect, acquiring a defect image and a reference image of the defect; the defect image has the defect, and the reference image does not have the defect;
And determining defect characteristic information of the defects according to the defect image and the reference image.
3. The method of claim 2, wherein determining defect characterization information for the defect from the defect image and the reference image comprises:
acquiring a gray level difference value between a defect image of the defect and a reference image;
and determining the gray level difference value as defect characteristic information of the defect.
4. The method according to any one of claims 1-3, wherein the performing clustering according to the defect characteristic information of each of the defects, and dividing the plurality of defects into at least one defect class, comprises:
clustering defect characteristic information of each defect to obtain a plurality of characteristic sets;
And correspondingly dividing the defects in each characteristic set into a defect class.
5. The method of claim 4, wherein the defect-characteristic information includes a gray-scale difference value; the clustering processing is performed on the defect characteristic information of each defect to obtain a plurality of characteristic sets, including:
clustering gray scale difference values greater than or equal to a first threshold value into a feature set;
clustering gray scale difference values smaller than a second threshold value into a feature set; the second threshold is less than the first threshold;
and clustering gray difference values smaller than the first threshold and larger than or equal to the second threshold into a feature set.
6. The method according to any one of claims 1 to 3, wherein the performing defect detection model training on each defect class by adopting a different training mode to obtain a defect detection model corresponding to each defect class comprises:
for each defect class, obtaining a defect obviousness level of the defect class;
determining a target training mode corresponding to the defect class according to the defect obviousness level of the defect class;
and training a defect detection model in the target training mode to obtain the defect detection model corresponding to the defect class.
7. The method of claim 6, wherein the determining the target training mode corresponding to the defect class according to the defect obviousness level of the defect class comprises:
in a case where the defect obviousness level of the defect class is high, determining that the target training mode corresponding to the defect class is an unsupervised training mode;
in a case where the defect obviousness level of the defect class is medium, determining that the target training mode corresponding to the defect class is a small-sample training mode;
and in a case where the defect obviousness level of the defect class is low, determining that the target training mode corresponding to the defect class is a supervised training mode.
8. A method of defect detection, the method comprising:
acquiring a to-be-detected image of a to-be-detected battery pack;
Detecting different defects of the image to be detected through a defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is trained by the model training method according to any one of claims 1 to 7.
9. The method of claim 8, wherein the image to be detected comprises images acquired at different viewing angles; and the detecting different defects of the image to be detected through the defect detection model to obtain the defect detection result of the battery pack to be detected comprises:
Determining a target image from the images acquired at the different viewing angles;
detecting different defects in the target image through the defect detection model to obtain detection results of the defects;
and determining a defect detection result of the battery pack to be detected according to the detection result of each defect.
10. The method according to claim 9, wherein the detecting different defects in the target image by the defect detection model to obtain detection results of the defects includes:
for each defect, acquiring a region of interest corresponding to the defect in the target image;
and determining the detection result of the defect according to the defect detection model and the region of interest.
11. The method of claim 10, wherein the target image is an image acquired at a global perspective and the region of interest is a battery pack region; the determining the detection result of the defect according to the defect detection model and the region of interest comprises the following steps:
inputting the battery pack area into a first detection model to obtain a plurality of defect areas in the battery pack area; the first detection model represents a model for detecting appearance defects of the parts in the defect detection model;
and determining the appearance defect detection result of the part of the battery pack to be detected according to the defect areas.
12. The method of claim 11, wherein the determining the appearance defect detection result of the part of the battery pack to be detected according to the plurality of defect areas comprises:
acquiring obvious defect areas and confusing defect areas in the plurality of defect areas according to the confidence degree of each defect area;
and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area.
13. The method of claim 12, wherein the acquiring obvious defect areas and confusing defect areas in the plurality of defect areas according to the confidence degree of each defect area comprises:
determining a defect area with the confidence degree greater than or equal to a confidence degree threshold as the obvious defect area;
and determining a defect area with the confidence degree less than the confidence degree threshold as the confusing defect area.
14. The method of claim 12, wherein the determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the confusing defect area comprises:
inputting the confusing defect area into a second detection model for screening to obtain a candidate confusing defect area; the second detection model is a model obtained by correcting the first detection model;
and determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the candidate confusing defect area.
15. The method of claim 14, wherein the determining the appearance defect detection result of the part of the battery pack to be detected according to the obvious defect area and the candidate confusing defect area comprises:
obtaining defect sizes of the obvious defect area and the candidate confusing defect area;
determining a target defect area from among the obvious defect area and the candidate confusing defect area according to each of the defect sizes;
and determining the target defect area as the detection result of the appearance defect of the part of the battery pack to be detected.
16. The method of claim 15, wherein the obtaining the defect sizes of the obvious defect area and the candidate confusing defect area comprises:
performing binarization processing on the obvious defect area and the candidate confusing defect area in the battery pack area to obtain an obvious defect mask image and a candidate defect mask image;
and taking the mask size of the obvious defect mask image as the defect size of the obvious defect area, and taking the mask size of the candidate defect mask image as the defect size of the candidate confusing defect area.
17. The method of claim 10, wherein the target image is an image acquired at a local view angle and the region of interest is a character region; the determining the detection result of the defect according to the defect detection model and the region of interest comprises:
inputting the character area into a third detection model; the third detection model represents a model, in the defect detection model, for detecting character printing defects;
and determining a character printing defect detection result of the battery pack to be detected through the third detection model.
18. The method of claim 17, wherein the determining, by the third detection model, the character printing defect detection result of the battery pack to be detected comprises:
detecting character content and display state of the character area through the third detection model;
Determining a content detection result according to the character content, and determining a display detection result according to the display state;
and determining the content detection result and the display detection result as character printing defect detection results of the battery pack to be detected.
19. The method of claim 18, wherein said determining content detection results from said character content comprises:
under the condition that the character content is matched with a reference character, determining that the content detection result is qualified;
and under the condition that the character content is not matched with the reference character, determining that the content detection result is unqualified.
20. The method according to claim 18, wherein the third detection model includes a ghost detection unit and an overlap detection unit; detecting, by the third detection model, a display state of the character region, including:
inputting the character area into the ghost detection unit to determine a character ghost state;
inputting the character region into the overlap detection unit to determine a character overlap state;
and determining the character ghost state and the character overlapping state as the display state of the character area.
21. A model training apparatus, the apparatus comprising:
the characteristic acquisition module is used for acquiring defect characteristic information of a plurality of defects to be detected of the battery pack;
the defect dividing module is used for carrying out clustering processing according to the defect characteristic information of each defect and dividing the plurality of defects into at least one defect class;
the model training module is used for performing defect detection model training on each defect class by adopting a different training mode, to obtain a defect detection model corresponding to each defect class; wherein different training modes require different training sample sizes; the defect characteristic information comprises a defect obviousness level, and the higher the defect obviousness level, the smaller the training sample size required.
22. A defect detection apparatus, the apparatus comprising:
the image acquisition module is used for acquiring a to-be-detected image of the to-be-detected battery pack;
The defect detection module is used for detecting different defects of the image to be detected through a defect detection model to obtain a defect detection result of the battery pack to be detected; the defect detection model is trained by the model training method according to any one of claims 1 to 7.
23. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 20 when the computer program is executed.
24. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 20.
CN202410330834.2A 2024-03-22 2024-03-22 Model training method, defect detection device, model training equipment and storage medium Pending CN117934470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410330834.2A CN117934470A (en) 2024-03-22 2024-03-22 Model training method, defect detection device, model training equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117934470A true CN117934470A (en) 2024-04-26

Family

ID=90761238

Country Status (1)

Country Link
CN (1) CN117934470A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination