CN113705676A - Classification model updating method and device, electronic equipment and storage medium - Google Patents

Classification model updating method and device, electronic equipment and storage medium

Info

Publication number
CN113705676A
CN113705676A (application CN202110997980.7A)
Authority
CN
China
Prior art keywords
category, target, classification model, registered, class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110997980.7A
Other languages
Chinese (zh)
Inventor
秦永强
刘金露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ALNNOVATION (BEIJING) TECHNOLOGY Co.,Ltd.
Original Assignee
Qingdao Chuangxin Qizhi Technology Group Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Chuangxin Qizhi Technology Group Co., Ltd.
Priority to CN202110997980.7A
Publication of CN113705676A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/243: Classification techniques relating to the number of classes
    • G06F 18/2431: Multiple classes
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

The application provides a classification model updating method and device, an electronic device and a computer-readable storage medium. The method includes: acquiring a new category registration request, wherein the new category registration request contains a label and a sample image of a target category; judging whether the label of the target category is the same as the label of any registered category; if not, determining, according to the sample image of the target category and the sample images of the registered categories, whether a feature extractor of a classification model can distinguish the target category from the registered categories; and if so, adding the target category to the classification tasks of a classifier in the classification model. With this scheme, the classification model is updated directly by adding a classification task to the classifier of the classification model, the training process of the feature extractor is omitted, and a large amount of sample data does not need to be collected, which greatly simplifies the updating process of the classification model and saves time and labor costs.

Description

Classification model updating method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for updating a classification model, an electronic device, and a computer-readable storage medium.
Background
In industrial production, surface-defect quality inspection of products such as screens, parts, and cloth is an essential step. In the related art, after a surface image of a product is acquired, the surface image may be processed by a defect classification model to determine whether the surface image contains a defect and, if so, the type of the defect. However, a defect classification model usually has to be trained with a large amount of sample data. In actual production, sample data is difficult to collect and limited in quantity, and the collected data may be labeled incorrectly, so training a highly accurate defect classification model is very difficult. Moreover, during the application of the defect classification model, new types of defects may appear on the product surface, in which case sample data has to be collected again and the model has to be retrained. Retraining consumes a significant amount of time and labor.
Disclosure of Invention
An object of the embodiment of the present application is to provide a method and an apparatus for updating a classification model, an electronic device, and a computer-readable storage medium, which are used for quickly updating a classification model.
In one aspect, the present application provides a method for updating a classification model, including:
acquiring a new category registration request; wherein the new category registration request contains a label and a sample image of a target category;
judging whether the label of the target category is the same as the label of any registered category;
if not, determining whether a feature extractor of a classification model can distinguish the target class from the registered class according to the sample image of the target class and the sample image of the registered class;
if so, adding the target class to a classification task of a classifier in the classification model.
In an embodiment, the method further comprises:
if the label of the target category is the same as that of any registered category, taking the registered category with the same label as the target category as a specified category;
and determining whether the classification model needs to be updated according to the sample image of the target class and the sample image of the specified class.
In an embodiment, the determining whether the classification model needs to be updated according to the sample image of the target class and the sample image of the specified class includes:
extracting, by the feature extractor, image features of the sample image of the target category and the sample image of the specified category;
judging whether the similarity of the image features between the target category and the specified category reaches a preset similarity threshold value;
if so, it is determined that the classification model does not need to be updated.
In an embodiment, the method further comprises:
if not, acquiring a new label of the target category;
training the classification model based on the sample image and the new label of the target class and the sample image and the label of the registered class; wherein the classifier of the classification model is augmented with the classification task of the target class prior to training.
In an embodiment, the determining whether a feature extractor of a classification model can distinguish the target class from the registered class according to the sample image of the target class and the sample image of the registered class includes:
extracting, by the feature extractor, image features of the sample image of the target category and the sample image of each registered category;
judging whether the similarity of the image features between the target category and each registered category reaches a preset similarity threshold value or not;
if the similarity of the image features between the target category and all the registered categories does not reach the similarity threshold, determining that the feature extractor can distinguish the target category from the registered categories;
and if the similarity of the image features between the target category and any registered category reaches the similarity threshold, determining that the feature extractor cannot distinguish the target category from the registered category.
In an embodiment, the method further comprises:
training the classification model based on the sample image and the label of the target class and the sample image and the label of the registered class if the feature extractor cannot distinguish the target class from the registered class; wherein the classifier of the classification model is augmented with the classification task of the target class prior to training.
In an embodiment, the method further comprises:
classifying the plurality of test images through the classification model, and outputting a classification result of each test image;
screening out the test images with wrong classification results as misclassified images in response to the screening instructions for the plurality of test images;
the classification model is updated according to a plurality of misclassified images.
In an embodiment, the updating the classification model according to the plurality of misclassified images includes:
determining a number of new classes of sample images from the plurality of misclassified images;
and acquiring the label of each new category, and initiating a new category registration request according to the label of each new category and the sample image.
In another aspect, the present application further provides an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above-described method of updating the classification model.
Further, the present application also provides a computer-readable storage medium storing a computer program, which is executable by a processor to perform the above-mentioned method for updating a classification model.
With the above scheme, after a new category registration request is obtained, it is judged whether the label of the target category in the request is the same as the label of any registered category. When the target category is determined to be different from the registered categories, it is further determined whether the feature extractor of the classification model can distinguish the target category from the registered categories. If it can, the classification model is updated directly by adding a classification task to the classifier of the classification model. The training process of the feature extractor is therefore omitted, and a large amount of sample data does not need to be collected, which greatly simplifies the updating of the classification model and saves time and labor costs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic view of an application scenario of a method for updating a classification model according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for updating a classification model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a feature distribution provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for determining a model update task according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a method for determining a classification capability of a model according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a model debugging method according to an embodiment of the present application;
fig. 8 is a block diagram of an apparatus for updating a classification model according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic view of an application scenario of an update method of a classification model according to an embodiment of the present application. As shown in fig. 1, the application scenario includes a client 20 and a server 30. The client 20 may be a server, a server cluster or a cloud computing center; it interfaces with a user terminal and is used for forwarding a new category registration request issued by the user terminal, and for generating and sending the new category registration request. The server 30 may be a server, a server cluster, or a cloud computing center, and can update the classification model in response to a new category registration request.
As shown in fig. 2, the present embodiment provides an electronic device 1 including: at least one processor 11 and a memory 12; one processor 11 is taken as an example in fig. 2. The processor 11 and the memory 12 are connected by a bus 10. The memory 12 stores instructions executable by the processor 11, and the instructions are executed by the processor 11 so that the electronic device 1 can execute all or part of the flow of the methods in the embodiments described below. In an embodiment, the electronic device 1 may be the server 30 described above, configured to perform the method for updating the classification model.
The memory 12 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The present application also provides a computer readable storage medium storing a computer program executable by the processor 11 to perform the method for updating a classification model provided herein.
Referring to fig. 3, a flowchart of a method for updating a classification model according to an embodiment of the present application is shown, and as shown in fig. 3, the method may include the following steps 310 to 340.
Step 310: acquiring a new category registration request; wherein the new category registration request includes a label and a sample image of the target category.
The new category registration request is used to register a new category so that the classification model can be updated to classify it. For example, if the classification model is used for classifying defects on the surface of a product and a new type of defect appears in the production process, a new category registration request can be generated for that defect type. The new category registration request may include a label and sample images of the new defect type.
The server may obtain the new category registration request from a client interfacing with the user terminal, and parse the label and the sample image of the target category from the request. Here, the target category is the new category to be registered; the label may be text information indicating the specific meaning of the target category, for example "crack"; the sample image is an image containing an object corresponding to the target category. Illustratively, if the classification model is used for classifying defects on a screen surface and the target category is speckles on the screen surface, the sample image is an image of a screen with speckles.
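For illustration only, such a request could be represented by a simple data structure like the sketch below; the class and field names are hypothetical assumptions of this description and are not prescribed by the application.

```python
# Minimal sketch of a new category registration request.
# The class and field names are illustrative assumptions, not part of this application.
from dataclasses import dataclass
from typing import List

@dataclass
class NewCategoryRegistrationRequest:
    label: str                # text label of the target category, e.g. "speckle"
    sample_images: List[str]  # paths (or encoded contents) of sample images of the target category

request = NewCategoryRegistrationRequest(
    label="speckle",
    sample_images=["screen_speckle_001.png", "screen_speckle_002.png"],
)
```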
Step 320: it is determined whether the label of the target category is the same as the label of any of the registered categories.
The registered category is a category that has already been registered, and the classification model can classify the registered category. The server may record labels of all registered categories, for example, the classification model may classify cracks, spots, and scratches on the screen surface, and then the server may record labels "cracks", "spots", and "scratches" of the registered categories.
The server may compare the label of the target category with the labels of the registered categories and determine whether any registered category has the same label as the target category. On one hand, if no registered category has the same label as the target category, the following step 330 may be performed. On the other hand, if some registered category has the same label as the target category, it may be further checked whether the target category is already registered, in other words, whether the classification model can already classify the target category, as described in detail below.
Step 330: if not, determining whether the feature extractor of the classification model can distinguish the target class from the registered class according to the sample image of the target class and the sample image of the registered class.
The classification model comprises a feature extractor and a classifier. The feature extractor is used for extracting image features from an image, where the image features may be feature maps or feature vectors; the classifier is used for determining the class of the object in the image according to the image features.
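As a minimal sketch of this two-part structure, assuming a PyTorch implementation with a ResNet-18 backbone as the feature extractor and a linear layer as the classifier (both are illustrative choices, not requirements of this application):

```python
# Sketch of a classification model split into a feature extractor and a classifier.
# The ResNet-18 backbone and the linear head are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

class ClassificationModel(nn.Module):
    def __init__(self, num_classes: int, feature_dim: int = 512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # remove the original head
        self.feature_extractor = backbone         # image -> feature vector
        self.classifier = nn.Linear(feature_dim, num_classes)  # feature vector -> class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.feature_extractor(x)      # extract image features
        return self.classifier(features)          # classify based on the features
```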
When the label of the target class does not match the label of any registered class, the server can determine whether the feature extractor of the classification model can distinguish the target class from the registered classes according to the sample images of the target class and the sample images of all registered classes.
Specifically, the server can use the feature extractor to extract the image features of the sample images of the target class and of the sample images of all registered classes, and determine whether the feature extractor can distinguish the target class from the registered classes according to the distribution of these image features.
Fig. 4 is a schematic diagram of feature distributions according to an embodiment of the present application. As shown in fig. 4, points of the same color represent image features extracted from images of the same category. The image features of the categories on the left can largely be separated after clustering, which shows that the feature extractor that produced them can effectively distinguish images of different categories; the image features of the categories on the right cannot be clearly separated after clustering, which means that the feature extractor that produced them cannot distinguish images of different categories.
On one hand, if the feature extractor can distinguish the target class from the other registered classes, step 340 may be performed. On the other hand, if the feature extractor cannot distinguish the target class from the other registered classes, the entire classification model may be trained, as detailed in the related description below.
Step 340: if so, the target class is added to the classification task of the classifier in the classification model.
When the feature extractor can distinguish the target class from the other registered classes, the feature extractor does not need to be retrained, and the server can add the target class to the classification tasks of the classifier in the classification model. The server can add an output to the classifier so that the classifier can predict the target class, thereby adding a classification task corresponding to the target class. At this point, registration of the target class is complete.
For example, before updating, the classifier may classify the registered categories "spot", "crack", "scratch", and "no defect", with one output corresponding to each of these four categories. When it is determined that the feature extractor can distinguish the target category "dirty", an output corresponding to "dirty" is added directly to the classifier without training the feature extractor, so that the classification model can classify "spot", "crack", "scratch", "no defect", and "dirty".
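A minimal sketch of adding such an output, assuming the classifier is a single linear layer (only one possible implementation); the weights of the existing classes are kept and the feature extractor is left untouched:

```python
# Sketch: add one output to a linear classifier head for a newly registered class,
# keeping the weights of the existing classes and leaving the feature extractor frozen.
# Assumes the classifier is a single nn.Linear layer, which is only an example.
import torch
import torch.nn as nn

def add_class_output(classifier: nn.Linear) -> nn.Linear:
    old_out, in_features = classifier.out_features, classifier.in_features
    new_classifier = nn.Linear(in_features, old_out + 1)
    with torch.no_grad():
        new_classifier.weight[:old_out] = classifier.weight  # copy existing class weights
        new_classifier.bias[:old_out] = classifier.bias
    return new_classifier

# model.classifier = add_class_output(model.classifier)  # e.g. "dirty" becomes the new output
```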
Through the above measures, when the new category is determined to be different from the registered categories and the feature extractor of the classification model can distinguish the new category from the registered categories, the classification model is updated directly by adding a classification task to the classifier, the training process of the feature extractor is omitted, and the updating of the classification model is greatly simplified.
In an embodiment, after the server performs step 320, if it is determined that the label of the target category is the same as the label of some registered category, the registered category having the same label as the target category may be taken as the specified category. For example, if the server determines that the target category has the same label "crack" as one registered category, it may take the registered category labeled "crack" as the specified category.
The server side can determine whether the classification model needs to be updated according to the sample image of the target class and the sample image of the specified class. The server side can determine whether the target class and the specified class belong to the same class according to the sample image of the target class and the sample image of the specified class. On one hand, if the two are in the same category, the classification model does not need to be updated; on the other hand, if the two belong to different classes, the classification model needs to be updated.
In an embodiment, referring to fig. 5, which is a flowchart of a method for determining a model update task according to an embodiment of the present application, steps 510 to 530 shown in fig. 5 determine whether the classification model needs to be updated when there is a specified class having the same label as the target class.
Step 510: and extracting the image characteristics of the sample image of the target category and the sample image of the specified category by the characteristic extractor.
The server can input the sample image of the target category into the feature extractor of the classification model, and the image features of the sample image are extracted through the feature extractor, so that the image features corresponding to the target category are obtained. The server can input the sample image of the specified category into the feature extractor of the classification model, and the image features of the sample image are extracted through the feature extractor, so that the image features corresponding to the specified category are obtained.
If a single sample image of the target category or the specified category is input into the feature extractor, the image features extracted from that single sample image may be used as the image features of the corresponding category. If a plurality of sample images of the target category or the specified category are input into the feature extractor, the image features corresponding to the plurality of sample images may be averaged, and the resulting image features may be used as the image features of the corresponding category.
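A minimal sketch of this averaging, assuming the feature extractor returns one feature vector per image:

```python
# Sketch: obtain a single class-level feature by averaging the features of several
# sample images; with a single image, its feature is used directly.
import torch

@torch.no_grad()
def class_feature(feature_extractor, images: torch.Tensor) -> torch.Tensor:
    # images: a batch of sample images of one category, shape (N, C, H, W)
    feats = feature_extractor(images)   # (N, D) feature vectors
    return feats.mean(dim=0)            # (D,) averaged feature of the category
```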
Step 520: and judging whether the similarity of the image characteristics between the target category and the specified category reaches a preset similarity threshold value.
After obtaining the image features corresponding to the target category and those corresponding to the specified category, the server may calculate the similarity between the image features of the target category and those of the specified category. Here, the similarity may be expressed by a Euclidean distance, a cosine similarity, or the like.
The server can then determine whether the calculated similarity reaches the similarity threshold. The similarity threshold may be an empirical value used to evaluate whether two image features correspond to the same category. Illustratively, if the similarity is expressed as a Euclidean distance, it may be determined whether the distance is less than the similarity threshold, and if so, the similarity threshold is considered to be reached. If the similarity is expressed as a cosine similarity, it may be determined whether the similarity is greater than the similarity threshold, and if so, the similarity threshold is considered to be reached.
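A minimal sketch of this check; the metric and the threshold values below are placeholders, since the application only requires some empirical threshold:

```python
# Sketch: decide whether two class features "reach" a similarity threshold.
# With Euclidean distance, smaller means more similar (upper bound); with cosine
# similarity, larger means more similar (lower bound). Threshold values are placeholders.
import torch
import torch.nn.functional as F

def reaches_threshold(feat_a: torch.Tensor, feat_b: torch.Tensor, metric: str = "cosine",
                      cosine_threshold: float = 0.9, euclidean_threshold: float = 1.0) -> bool:
    if metric == "cosine":
        sim = F.cosine_similarity(feat_a.unsqueeze(0), feat_b.unsqueeze(0)).item()
        return sim > cosine_threshold
    dist = torch.dist(feat_a, feat_b, p=2).item()
    return dist < euclidean_threshold
```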
Step 530: if so, it is determined that the classification model need not be updated.
When the similarity of the image features between the target class and the specified class reaches the similarity threshold, it may be determined that the target class and the specified class belong to the same class; in other words, the target class is already registered and the classification model can already classify it. In this case, the classification model does not need to be updated.
In an embodiment, after performing step 520, when the similarity of the image features between the target category and the specified category does not reach the similarity threshold, the server may determine that the target category and the specified category belong to different categories. For example, the target class and the specified class may both carry the label "crack", but when the similarity of their image features does not reach the similarity threshold, the cracks in the sample images of the target class and the cracks in the sample images of the specified class are morphologically different; in this case, the specified class and the target class may be considered not to be the same type of crack but to belong to different classes.
In this case, the server may obtain a new label for the target category. When determining that the target category is different from the specified category, the server may output first prompt information to the user terminal, where the first prompt information indicates that the label is duplicated and prompts the user to input a new label. After the user inputs a new label and the user terminal reports it, the server obtains the new label of the target category.
The server can then train the classification model based on the sample images and the new label of the target class, and the sample images and the labels of the registered classes. The classification task of the target class is added to the classifier of the classification model before training.
Before training the classification model, the server may add an output to the classifier, thereby adding a classification task corresponding to the target class. After the classifier is adjusted, the server checks whether the number of sample images of the target class reaches a preset sample number threshold. If the threshold is reached, training can be performed directly with the sample images. If the threshold is not reached, the server may process the sample images with data augmentation such as horizontal/vertical flipping, rotation, scaling, cropping, translation, contrast adjustment, noise addition, and color jittering, thereby increasing the number of sample images.
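A minimal sketch of such augmentation using torchvision; the specific transforms and parameters are examples of the means listed above, not prescribed values:

```python
# Sketch: data augmentation to enlarge a small set of sample images (torchvision).
# The chosen transforms and their parameters are illustrative only.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),   # scaling + cropping
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),   # translation
    transforms.ColorJitter(brightness=0.2, contrast=0.2),       # contrast / color jitter
])

# augmented = [augment(img) for _ in range(k)]  # applied repeatedly to each PIL sample image
```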
After obtaining a sufficient number of sample images of the target category, the server may attach a numerical label to the sample images of the target category and to the sample images of each registered category; the numerical labels are used for training. Illustratively, if there are 7 categories to be classified, the numerical label may be a vector containing 7 elements, each element representing the confidence of the corresponding category. The server can determine the numerical label of the sample images of each category according to the text label of that category and attach the numerical label to the sample images.
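A minimal sketch of building such a numerical label for the 7-category example; the category names and their order are assumptions for illustration (the last two are placeholders):

```python
# Sketch: building a numerical label as a one-hot confidence vector over 7 categories.
# The category names and their order are illustrative assumptions.
import torch

categories = ["spot", "crack", "scratch", "no defect", "dirty", "category_6", "category_7"]
category_to_index = {name: i for i, name in enumerate(categories)}

def numerical_label(name: str) -> torch.Tensor:
    vec = torch.zeros(len(categories))   # one element per category
    vec[category_to_index[name]] = 1.0   # confidence 1 for the matching category
    return vec

print(numerical_label("crack"))          # tensor([0., 1., 0., 0., 0., 0., 0.])
```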
The server can then train the classification model with the adjusted classifier using the sample images of all categories carrying numerical labels, thereby updating the classification model.
Initially, the classification model can be obtained by self-supervised pre-training on a source data set, so that a feature extractor with good performance is obtained even when sample images are scarce.
In an embodiment, referring to fig. 6, which is a flowchart of a method for determining the classification capability of a model according to an embodiment of the present application, when the label of the target class is determined to be different from the labels of all registered classes, whether the feature extractor of the classification model can distinguish the target class from the registered classes is determined through the following steps 331 to 334.
Step 331: the sample image of the target category and the image features of the sample image of each registered category are extracted by a feature extractor.
The server can input the sample image of the target category into the feature extractor of the classification model, and the image features of the sample image are extracted through the feature extractor, so that the image features corresponding to the target category are obtained. And the server respectively inputs the sample images of each registered category into a feature extractor of the classification model, and extracts the sample images of each registered category through the feature extractor so as to obtain the image features corresponding to each registered category.
If a single sample image of a target category or a registered category is input to the feature extractor, the image features extracted from the single sample image may be used as the image features of the target category or the registered category. If a plurality of sample images of the target category or the registered category are input into the feature extractor, the image features corresponding to the plurality of sample images can be averaged, and the image features obtained by processing can be used as the image features of the target category or the registered category.
Step 332: and judging whether the similarity of the image characteristics between the target category and each registered category reaches a preset similarity threshold value.
After obtaining the image features corresponding to the target category and the image features corresponding to each registered category, the server may calculate a similarity of the image features between the target category and each registered category. Here, the similarity is expressed by an euclidean distance, a cosine similarity, or the like.
For the target category and each registered category, the server may determine whether the calculated similarity reaches a similarity threshold.
Step 333: and if the similarity of the image features between the target category and all the registered categories does not reach the similarity threshold value, determining that the feature extractor can distinguish the target category from the registered categories.
When the similarity of the image features between the target category and all the registered categories does not reach the similarity threshold, it can be determined that the image features of the target category and the image features of all the registered categories are greatly different, and in this case, the feature extractor can distinguish the target category from the registered categories.
Step 334: and if the similarity of the image features between the target category and any registered category reaches a similarity threshold, determining that the feature extractor cannot distinguish the target category from the registered categories.
When the similarity of the image features between the target category and at least one registered category reaches a similarity threshold, it may be determined that the image features of the target category are relatively similar to the image features of the registered category, in which case the feature extractor may not distinguish between the target category and the registered category.
By the above measures, it can be determined whether the feature extractor can distinguish the target class from the registered class, thereby determining whether the feature extractor needs to be trained when updating the classification model.
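A minimal sketch of steps 331 to 334 combined, assuming cosine similarity and a placeholder threshold:

```python
# Sketch of steps 331-334: the feature extractor can distinguish the target class
# only if its class feature fails to reach the similarity threshold against every
# registered class. Cosine similarity and the threshold value are illustrative choices.
import torch
import torch.nn.functional as F

def can_distinguish(target_feat: torch.Tensor,
                    registered_feats: dict,       # class name -> class feature vector
                    threshold: float = 0.9) -> bool:
    for name, reg_feat in registered_feats.items():
        sim = F.cosine_similarity(target_feat.unsqueeze(0), reg_feat.unsqueeze(0)).item()
        if sim >= threshold:          # too similar to an existing class (step 334)
            return False
    return True                       # distinct from all registered classes (step 333)
```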
In an embodiment, when it is determined that the feature extractor cannot distinguish the target class from the registered classes, the server may train the classification model based on the sample images and label of the target class and the sample images and labels of the registered classes. The classification task of the target class is added to the classifier of the classification model before training.
Before training the classification model, the server may add an output to the classifier, thereby adding a classification task corresponding to the target class. After the classifier is adjusted, the server checks whether the number of sample images of the target class reaches a preset sample number threshold. If the threshold is reached, training can be performed directly with the sample images. If the threshold is not reached, the server may process the sample images with data augmentation such as horizontal/vertical flipping, rotation, scaling, cropping, translation, contrast adjustment, noise addition, and color jittering, thereby increasing the number of sample images.
After obtaining a sufficient number of sample images of the target category, the server may attach a numerical label to the sample images of the target category and to the sample images of each registered category; the numerical labels are used for training.
The server can then train the classification model with the adjusted classifier using the sample images of all categories carrying numerical labels, thereby updating the classification model.
Initially, the classification model can be obtained by self-supervised pre-training on a source data set, so that a feature extractor with good performance is obtained even when sample images are scarce.
In an embodiment, referring to fig. 7, which is a flowchart of a model debugging method according to an embodiment of the present application, after the classification model is obtained and before it is applied, its performance may be verified through the following steps 710 to 730.
Step 710: and classifying the plurality of test images through the classification model, and outputting the classification result of each test image.
Here, a test image is an image used for classification during debugging. For example, if the classification model is used for classifying defects on the surface of a product, images of the product surface collected during production can serve as test images.
The server can input a plurality of test images into the trained classification model to obtain the classification result of each test image. After obtaining the classification results, the server may output each test image and its classification result to the user terminal so that the user can check whether the result is correct. By reviewing the classification results of the test images, the user can identify the test images with wrong classification results and send a screening instruction to the server through the user terminal. The screening instruction indicates the test images whose classification results are wrong.
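A minimal sketch of this classification step on a batch of test images, assuming the model of the earlier sketch and a list of class names:

```python
# Sketch: classify a batch of test images and collect per-image results for review.
# The preprocessing and the class-name list are assumptions for illustration.
import torch

@torch.no_grad()
def classify_test_images(model, test_batch: torch.Tensor, class_names: list) -> list:
    model.eval()
    logits = model(test_batch)                       # (N, num_classes) class scores
    preds = logits.argmax(dim=1)                     # predicted class index per image
    return [class_names[i] for i in preds.tolist()]  # results shown to the user for checking
```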
Step 720: and responding to the screening instructions aiming at the plurality of test images, and screening out the test images with wrong classification results as misclassified images.
The server can receive the screening instruction and, in response to it, screen out the test images with wrong classification results as misclassified images. The misclassified images may contain several new classes that have not yet been registered.
Step 730: the classification model is updated based on the plurality of misclassified images.
After determining the plurality of misclassified images, the server side can update the classification model according to the misclassified images, so that the classification capability of the classification model is improved.
In an embodiment, when the server updates the classification model according to a plurality of misclassified images, sample images of a number of new classes can be determined from the misclassified images.
The server can cluster the plurality of misclassified images to obtain a plurality of clusters, and determine that the images of each cluster belong to a new category. For each new category, the server may use several images near the cluster center as sample images of that category.
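A minimal sketch of this clustering on the extracted features of the misclassified images, assuming k-means with a placeholder number of clusters (the application does not fix a particular clustering algorithm):

```python
# Sketch: cluster misclassified images by their extracted features and take the
# images closest to each cluster centre as sample images of a candidate new category.
import numpy as np
from sklearn.cluster import KMeans

def propose_new_categories(features: np.ndarray, n_clusters: int = 3, per_cluster: int = 5):
    # features: (N, D) array of feature vectors of the misclassified images
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    proposals = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[idx] - km.cluster_centers_[c], axis=1)
        proposals.append(idx[np.argsort(dists)[:per_cluster]])  # indices of centre-most images
    return proposals
```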
After determining the sample images of the new categories, the server may output second prompt information to the user terminal, where the second prompt information carries the sample images and prompts the user to input labels. After checking the sample images at the user terminal, the user can return labels for them. The server can then obtain the label of each new category and initiate a new category registration request according to the label and the sample images of that category. For example, if the server determines that there are three new categories, after obtaining the labels corresponding to the three new categories it may take each new category as a target category and initiate three corresponding new category registration requests.
In the present application, after the software module executing the classification model updating method obtains the new class registration request, the software module may continue to execute the updating process from step 310 to step 340.
Through the above measures, after a number of new categories are identified during debugging, registration can be initiated for them, so that the classification capability of the classification model is improved and the classification model can effectively classify the various categories that may appear.
Fig. 8 is a block diagram of an apparatus for updating a classification model according to an embodiment of the present application. As shown in fig. 8, the apparatus may include:
the obtaining module 810, configured to acquire a new category registration request, wherein the new category registration request contains a label and a sample image of a target category;
the judging module 820, configured to judge whether the label of the target category is the same as the label of any registered category;
the determination module 830, configured to determine, if not, whether a feature extractor of a classification model can distinguish the target class from the registered class according to the sample image of the target class and the sample image of the registered class;
the update module 840, configured to add, if so, the target class to a classification task of a classifier in the classification model.
The implementation process of the functions and actions of each module in the device is specifically detailed in the implementation process of the corresponding step in the update method of the classification model, and is not described herein again.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (10)

1. A method for updating a classification model, comprising:
acquiring a new category registration request; wherein the new category registration request contains a label and a sample image of a target category;
judging whether the label of the target category is the same as the label of any registered category;
if not, determining whether a feature extractor of a classification model can distinguish the target class from the registered class according to the sample image of the target class and the sample image of the registered class;
if so, adding the target class to a classification task of a classifier in the classification model.
2. The method of claim 1, further comprising:
if the label of the target category is the same as that of any registered category, taking the registered category with the same label as the target category as a specified category;
and determining whether the classification model needs to be updated according to the sample image of the target class and the sample image of the specified class.
3. The method of claim 2, wherein determining whether the classification model needs to be updated based on the sample images of the target class and the sample images of the specified class comprises:
extracting, by the feature extractor, image features of the sample image of the target category and the sample image of the specified category;
judging whether the similarity of the image features between the target category and the specified category reaches a preset similarity threshold value;
if so, it is determined that the classification model does not need to be updated.
4. The method of claim 3, further comprising:
if not, acquiring a new label of the target category;
training the classification model based on the sample image and the new label of the target class and the sample image and the label of the registered class; wherein the classifier of the classification model is augmented with the classification task of the target class prior to training.
5. The method of claim 1, wherein determining whether a feature extractor of a classification model can distinguish the target class from the registered class based on the sample images of the target class and the sample images of the registered class comprises:
extracting, by the feature extractor, image features of the sample image of the target category and the sample image of each registered category;
judging whether the similarity of the image features between the target category and each registered category reaches a preset similarity threshold value or not;
if the similarity of the image features between the target category and all the registered categories does not reach the similarity threshold, determining that the feature extractor can distinguish the target category from the registered categories;
and if the similarity of the image features between the target category and any registered category reaches the similarity threshold, determining that the feature extractor cannot distinguish the target category from the registered category.
6. The method according to claim 1 or 5, characterized in that the method further comprises:
training the classification model based on the sample image and the label of the target class and the sample image and the label of the registered class if the feature extractor cannot distinguish the target class from the registered class; wherein the classifier of the classification model is augmented with the classification task of the target class prior to training.
7. The method of claim 1, further comprising:
classifying the plurality of test images through the classification model, and outputting a classification result of each test image;
screening out the test images with wrong classification results as misclassified images in response to the screening instructions for the plurality of test images;
the classification model is updated according to a plurality of misclassified images.
8. The method of claim 7, wherein updating the classification model based on a plurality of misclassified images comprises:
determining a number of new classes of sample images from the plurality of misclassified images;
and acquiring the label of each new category, and initiating a new category registration request according to the label of each new category and the sample image.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of updating a classification model of any one of claims 1-8.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the method of updating a classification model according to any one of claims 1 to 8.
CN202110997980.7A, priority date 2021-08-27, filing date 2021-08-27: Classification model updating method and device, electronic equipment and storage medium (CN113705676A, Pending)

Priority Applications (1)

Application CN202110997980.7A, priority date 2021-08-27, filing date 2021-08-27: Classification model updating method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application CN202110997980.7A, priority date 2021-08-27, filing date 2021-08-27: Classification model updating method and device, electronic equipment and storage medium

Publications (1)

Publication number: CN113705676A; publication date: 2021-11-26

Family

ID=78656186

Family Applications (1)

Application CN202110997980.7A, priority date 2021-08-27, filing date 2021-08-27: Classification model updating method and device, electronic equipment and storage medium (CN113705676A, Pending)

Country Status (1)

CN: CN113705676A


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220308

Address after: Room 1511, 15 / F, block B, 3 Haidian Street, Haidian District, Beijing

Applicant after: ALNNOVATION (BEIJING) TECHNOLOGY Co.,Ltd.

Address before: Room 501, block a, Haier International Plaza, 939 Zhenwu Road, Jimo Economic Development Zone, Qingdao, Shandong 266200

Applicant before: Qingdao Chuangxin Qizhi Technology Group Co.,Ltd.