CN117671399A - Image classification and training method, device, equipment and medium for image classification model - Google Patents


Info

Publication number: CN117671399A
Application number: CN202311667698.8A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 尤晶晶
Applicant and current assignee: Agricultural Bank of China
Legal status: Pending


Abstract

The invention discloses an image classification method and a training method, device, equipment and medium for an image classification model, and relates to the technical field of image processing. The method comprises the following steps: determining an original concept representation of the image to be classified for each concept according to a concept mask of at least one concept of the image to be classified through an image classification model; determining importance weights of the original concept representations through the image classification model; determining each target concept representation of the image to be classified according to each original concept representation and its importance weight through the image classification model; and determining the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model. According to the technical scheme provided by the embodiment of the invention, each target concept representation of the image to be classified is obtained according to the importance weight of each original concept representation, and the category of the image to be classified is determined according to each target concept representation, so that the classification accuracy is improved.

Description

Image classification and training method, device, equipment and medium for image classification model
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image classification method, a training method for an image classification model, and a corresponding apparatus, device, and medium.
Background
Image classification is needed in scenes such as identity verification, document classification and archiving, so that subsequent processing can be carried out according to the classification result. At present, meta-learning algorithms are widely applied to small-sample tasks such as image recognition, text classification and face detection.
However, the prior art still suffers from low classification accuracy when performing image classification.
Disclosure of Invention
The invention provides an image classification method, a training method for an image classification model, and a corresponding device, equipment and medium, so as to improve the accuracy of image classification.
In a first aspect, the present invention provides an image classification method, including:
determining original concept representation of the image to be classified for each concept according to a concept mask of at least one concept of the image to be classified through an image classification model;
determining importance weights of the original concept characterizations through an image classification model;
determining each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation through the image classification model;
And determining the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model.
In a second aspect, the present invention further provides a training method for an image classification model, including:
determining an original concept representation of the image to be trained for each concept according to a concept mask of at least one concept of the image to be trained through an initial model;
determining importance weights of the original concept characterizations through an initial model;
determining each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation through an initial model;
determining a prediction category of the image to be trained according to each target concept representation of the image to be trained and a concept prototype of at least one image category through an initial model;
training the initial model according to the predicted category and the real category of the image to be trained to obtain an image classification model; the image classification model is used for determining the target category of the image to be classified.
In a third aspect, the present invention also provides an image classification apparatus, including:
the original concept representation determining module is used for determining original concept representations of the images to be classified aiming at the concepts according to the concept mask of at least one concept of the images to be classified through the image classification model;
The importance weight determining module is used for determining the importance weight of each original concept representation through the image classification model;
the target concept representation determining module is used for determining each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation through the image classification model;
the object category determining module is used for determining the object category of the image to be classified according to each object concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model.
In a fourth aspect, the present invention further provides a training device for an image classification model, including:
the original concept representation determining module is used for determining original concept representations of the image to be trained for each concept according to the concept mask of at least one concept of the image to be trained through the initial model;
the importance weight determining module is used for determining the importance weight of each original concept representation through the initial model;
the target concept representation determining module is used for determining each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation through the initial model;
The prediction category determining module is used for determining the prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category through the initial model;
the model training module is used for training the initial model according to the prediction category and the real category of the image to be trained to obtain an image classification model; the image classification model is used for determining the target category of the image to be classified.
In a fifth aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image classification method provided by any one of the embodiments of the invention or the training method of the image classification model provided by any one of the embodiments of the invention.
In a sixth aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores computer instructions, where the computer instructions are configured to cause a processor to implement, when executed, the image classification method provided by any one embodiment of the present invention, or the training method for the image classification model provided by any one embodiment of the present invention.
According to the embodiment of the invention, through an image classification model, an original concept representation of the image to be classified for each concept is determined according to a concept mask of at least one concept of the image to be classified; importance weights of the original concept representations are determined through the image classification model; each target concept representation of the image to be classified is determined according to each original concept representation and its importance weight through the image classification model; and the target category of the image to be classified is determined according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model. According to this technical scheme, the importance weight of each original concept representation is determined, each target concept representation is determined according to each original concept representation and its importance weight, and the target category of the image to be classified is then determined according to each target concept representation, so that the accuracy of image classification is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an image classification method according to a first embodiment of the present invention;
fig. 2A is a flowchart of an image classification method according to a second embodiment of the present invention;
fig. 2B is a schematic structural diagram of a concept sense module according to a second embodiment of the present invention;
FIG. 3A is a flow chart of a method of classifying images according to a third embodiment of the present invention;
fig. 3B is a schematic structural diagram of a conv-4 network according to a third embodiment of the present invention;
FIG. 4 is a flow chart of a method for classifying images according to a fourth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an image classification device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a training device for an image classification model according to a sixth embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device implementing an image classification method or a training method of an image classification model according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first" and "second" and the like in the description and the claims of the present invention and the above drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the technical scheme of the embodiment of the invention, the acquisition, storage, application and the like of the related images to be classified all conform to the provisions of relevant laws and regulations, and do not violate public order and good morals.
Example 1
Fig. 1 is a flowchart of an image classification method according to an embodiment of the present invention, where the method may be performed by an image classification device, and the image classification device may be implemented in hardware and/or software, and specifically configured in an electronic device, such as a server.
Referring to the image classification method shown in fig. 1, the method includes:
s101, determining original concept representation of the image to be classified aiming at each concept according to a concept mask of at least one concept of the image to be classified through an image classification model.
In this embodiment, the image classification model may be a deep learning model for determining the category to which an image to be classified belongs. The image to be classified may be an image whose category is to be determined. A concept may be a feature in the image to be classified. The concept mask may be expressed in the form of a 0-1 binary vector, that is, the value of each element in the concept mask is either 0 or 1; the concept mask has the same size and the same dimension as the image to be classified, and represents whether the pixel value of each pixel of the image to be classified in each channel is related to the corresponding concept, where an element value of 1 indicates relevance and an element value of 0 indicates irrelevance. The original concept representation may be expressed in the form of a vector and is used to represent the feature information, in the image to be classified, of the concept corresponding to the concept mask. Specifically, through the image classification model, a certain algorithm is adopted to determine the original concept representation of the image to be classified for each concept according to the concept mask of at least one concept of the image to be classified.
For example, the image classification model may be used to classify bird images into finer-grained bird-species images, such as sparrow images, magpie images, swallow images and crow images, and the concepts may be understood as bird parts such as wings, beaks and paws. The element values in the vector representation of the concept mask corresponding to the wings may be used to characterize the relevance of the pixel values of pixels in different channels of the bird image to the concept of wings. The original concept representation of the wings may be used to characterize the feature information of the wings in the bird image.
In an alternative embodiment, the size of the images that can be processed by the image classification model may be a first size, and the dimension of the images that can be processed may be a first dimension; the size of the concept mask may also be the first size, and its dimension the first dimension. Before determining, through the image classification model, the original concept representation of the image to be classified for each concept according to the concept mask of at least one concept of the image to be classified, the method further includes: detecting, through a preprocessing module in the image classification model, whether the size of the image to be classified is the first size and whether the dimension of the image to be classified is the first dimension; if the size of the image to be classified is not the first size, scaling the image to be classified and each concept mask of the image to be classified; and if the dimension of the image to be classified is not the first dimension, reducing the dimension of each concept mask of the image to be classified, so as to obtain an image to be classified with the first dimension and concept masks with the first dimension. The first size and the first dimension are parameters of the image classification model.
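The patent leaves the concrete algorithm of S101 open ("a certain algorithm is adopted"). A minimal sketch of one plausible realization is masked average pooling of a convolutional feature map; the function name, array shapes and the use of average pooling here are illustrative assumptions, not the patent's prescribed method:

```python
import numpy as np

def original_concept_representation(feature_map, concept_mask):
    """Masked average pooling: one representation vector per concept.

    feature_map:  (C, H, W) features extracted from the image to be classified.
    concept_mask: (H, W) 0-1 binary mask, 1 where a pixel relates to the concept.
    Returns a length-C original concept representation.
    """
    mask = concept_mask.astype(feature_map.dtype)
    denom = mask.sum()
    if denom == 0:  # concept absent from this image
        return np.zeros(feature_map.shape[0])
    # average only over the pixels the mask marks as relevant
    return (feature_map * mask).sum(axis=(1, 2)) / denom

# toy example: 2-channel 2x2 feature map, mask selecting the left column
fm = np.arange(8, dtype=float).reshape(2, 2, 2)
mask = np.array([[1, 0], [1, 0]])
t_j = original_concept_representation(fm, mask)
```

Because the mask is binary and shared across channels, each channel of the representation is simply the mean feature response over the concept's pixels.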
S102, determining importance weights of the original concept characterizations through an image classification model.
In this embodiment, importance weights may be used to characterize how important each original concept representation is to image classification. Specifically, through an image classification model, a certain algorithm is adopted to determine the importance weight of each original concept representation.
S103, determining each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation through the image classification model.
In this embodiment, the target concept representation may be the original concept representation after importance weighting. Specifically, through the image classification model, for each original concept representation, the importance weight of the original concept representation is multiplied by the vector representation of the original concept representation, so as to obtain the target concept representation corresponding to the original concept representation. Illustratively, the target concept representation may be determined by the following formula:
t̂^(j) = α^(j) · t^(j);
wherein t̂^(j) represents the target concept representation of concept j; α^(j) represents the importance weight of the original concept representation of concept j; and t^(j) represents the original concept representation of concept j.
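As a quick numeric illustration of this weighting step (all values below are made up):

```python
import numpy as np

alpha_j = 0.8                     # importance weight for concept j, in (0, 1)
t_j = np.array([1.0, 5.0, -2.0])  # original concept representation of concept j
t_hat_j = alpha_j * t_j           # target concept representation: element-wise scaling
```

A weight near 1 preserves the concept's representation almost unchanged, while a weight near 0 suppresses a concept that matters little for classification.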
S104, determining the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model.
In this embodiment, the concept prototype may be represented in vector form and is used to characterize the feature information of a concept in an image category. An image category may be a category that the image classification model is capable of classifying. Each image category has the same concepts, but the concept prototypes of the same concept in different image categories are different. The image classification model is preset with association information between concept masks and concept prototypes, so as to associate the concept mask and the concept prototype of the same concept. The target category may be the category to which the image to be classified belongs.
For example, the image to be classified is a bird image, and its concept masks include a concept mask of wings, a concept mask of beaks and a concept mask of paws. The image categories include a sparrow category, a magpie category, a swallow category and a crow category; each image category includes the three concepts of wings, beaks and paws, i.e., each image category includes a concept prototype of wings, a concept prototype of beaks and a concept prototype of paws. The concept prototype of the wings in the sparrow category is used to characterize the feature information of sparrow wings; the concept prototype of the wings in the magpie category is used to characterize the feature information of magpie wings. The feature information of sparrow wings differs from that of magpie wings, that is, the concept prototype of the wings in the sparrow category differs from the concept prototype of the wings in the magpie category.
Specifically, through an image classification model, a certain algorithm is adopted, and the target category of the image to be classified is determined according to each target concept representation of the image to be classified and the concept prototype of at least one image category.
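The patent again leaves the matching algorithm open. One common choice, sketched here under the assumption of a summed squared Euclidean distance between target concept representations and class concept prototypes (the metric, function names and toy values are all illustrative), is nearest-prototype classification:

```python
import numpy as np

def classify(target_reps, class_prototypes):
    """Return the image category whose concept prototypes are closest
    to the image's target concept representations.

    target_reps:      dict concept -> target representation vector
    class_prototypes: dict category -> {concept -> prototype vector}
    """
    def distance(protos):
        # sum squared Euclidean distance over all shared concepts
        return sum(np.sum((target_reps[c] - protos[c]) ** 2) for c in target_reps)
    return min(class_prototypes, key=lambda cls: distance(class_prototypes[cls]))

reps = {"wing": np.array([1.0, 0.0]), "beak": np.array([0.0, 1.0])}
protos = {
    "sparrow": {"wing": np.array([0.9, 0.1]), "beak": np.array([0.1, 0.9])},
    "magpie":  {"wing": np.array([0.0, 1.0]), "beak": np.array([1.0, 0.0])},
}
target_class = classify(reps, protos)
```

A distance of this kind can also be converted into the per-category confidence of the third embodiment, e.g. by a softmax over negative distances, with the highest-confidence category taken as the target category.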
According to the embodiment of the invention, through an image classification model, an original concept representation of the image to be classified for each concept is determined according to a concept mask of at least one concept of the image to be classified; importance weights of the original concept representations are determined through the image classification model; each target concept representation of the image to be classified is determined according to each original concept representation and its importance weight through the image classification model; and the target category of the image to be classified is determined according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model. According to the technical scheme provided by the embodiment of the invention, the importance weight of each original concept representation is determined, each target concept representation is determined according to each original concept representation and its importance weight, and then the target category of the image to be classified is determined according to each target concept representation, so that the accuracy of image classification is improved.
Example two
Fig. 2A is a flowchart of an image classification method according to a second embodiment of the present invention, where the determining operation of importance weights of original concept characterizations is optimized and improved based on the technical solution of the foregoing embodiment.
Further, the operation of determining the importance weight of each original concept representation is refined into: compressing each original concept representation through a concept perception module in the image classification model to obtain a representation value of each original concept representation; and performing nonlinear spatial projection on the representation value of each original concept representation through the concept perception module to obtain the importance weight of each original concept representation, so as to perfect the determination operation of the importance weights of the original concept representations.
For details not described in this embodiment, reference may be made to the description of the foregoing embodiment.
Referring to fig. 2A, the image classification method includes:
s201, determining original concept representation of the image to be classified for each concept according to a concept mask of at least one concept of the image to be classified through an image classification model.
S202, compressing each original concept representation through a concept perception module in the image classification model to obtain a representation value of each original concept representation.
In this embodiment, the representation value may be in the form of a numerical value for representing the corresponding original concept representation. Alternatively, the original concept representation may be represented in the form of a one-dimensional vector; the concept perception module comprises a pooling layer. Specifically, through a pooling layer in the concept perception module, the average value of each element value in the original concept representation is determined, and the average value of each element value is determined as a representation value. Illustratively, the characterization value of the original conceptual characterization can be obtained by the following formula:
t′^(j) = (1/d) · Σ_{m=1}^{d} t^(j)(m);
wherein t′^(j) represents the representation value of the original concept representation of concept j; t^(j) represents the original concept representation of concept j; d represents the vector length of the original concept representation; and t^(j)(m) represents the value of the m-th element in the original concept representation of concept j.
S203, performing nonlinear space projection on the characterization values of the original concept characterizations through a concept perception module to obtain importance weights of the original concept characterizations.
In this embodiment, the nonlinear spatial projection may refer to a process of mapping data from one space to another space, and this mapping process is nonlinear. Specifically, through a concept perception module, a certain algorithm is adopted to carry out nonlinear space projection on the characterization value of each original concept characterization, so as to obtain the importance weight of each original concept characterization.
Optionally, the concept perception module includes a first linear layer, a Mish activation layer, a second linear layer and a sigmoid activation layer; the pooling layer, the first linear layer, the Mish activation layer, the second linear layer and the sigmoid activation layer are sequentially connected. Performing nonlinear spatial projection on the representation value of each original concept representation through the concept perception module to obtain the importance weight of each original concept representation includes: processing the representation value of each original concept representation through the first linear layer in the concept perception module to obtain a first linear processing result of each original concept representation; processing the first linear processing result of each original concept representation through the Mish activation layer in the concept perception module to obtain a first activation processing result of each original concept representation; processing the first activation processing result of each original concept representation through the second linear layer in the concept perception module to obtain a second linear processing result of each original concept representation; and processing the second linear processing result of each original concept representation through the sigmoid activation layer in the concept perception module to obtain the importance weight of each original concept representation.
Specifically, through the first linear layer in the concept perception module, the representation value of each original concept representation is linearly transformed to obtain a first linear processing result of each original concept representation; through the Mish activation layer in the concept perception module, the first linear processing result of each original concept representation is processed with the Mish activation function to obtain a first activation processing result of each original concept representation; through the second linear layer in the concept perception module, the first activation processing result of each original concept representation is linearly transformed to obtain a second linear processing result of each original concept representation; and through the sigmoid activation layer in the concept perception module, the second linear processing result of each original concept representation is processed with the sigmoid activation function to obtain the importance weight of each original concept representation.
Illustratively, the importance weights of the original concept characterizations can be obtained by the following formula:
α^(j) = σ{W_2[δ(W_1(t′^(j)))]};
wherein W_1 represents the linear transformation performed by the first linear layer; δ represents the Mish activation function; W_2 represents the linear transformation performed by the second linear layer; and σ represents the sigmoid activation function.
Alternatively, fig. 2B is a schematic structural diagram of a concept perception module. As shown in fig. 2B, the module includes a pooling layer, a first linear layer, a Mish activation layer, a second linear layer and a sigmoid activation layer. The input of the pooling layer is each original concept representation; the output of the pooling layer, namely the representation value of each original concept representation, is the input of the first linear layer; the output of the first linear layer is the input of the Mish activation layer; the output of the Mish activation layer is the input of the second linear layer; the output of the second linear layer is the input of the sigmoid activation layer; and the output of the sigmoid activation layer is the importance weight of each original concept representation.
In this embodiment, the first linear layer and the Mish activation layer may be used to perform a first nonlinear spatial transformation; the second linear layer and the sigmoid activation layer may be used to perform a second nonlinear spatial transformation. By adopting the technical scheme of this embodiment, the nonlinearity of the spatial projection can be ensured, so that it has stronger concept expression capability, the situation of non-independent activation is avoided, and the accuracy of the importance weights is improved.
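The pooling → linear → Mish → linear → sigmoid pipeline of fig. 2B can be sketched as below. The patent only names the layers, so the weight shapes `W1`, `W2`, the hidden width of 8, and the omission of bias terms are all assumptions of this sketch:

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x)
    return x * np.tanh(np.log1p(np.exp(x)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def importance_weight(t_j, W1, W2):
    """Concept-perception pipeline for one original concept representation:
    mean pooling -> first linear layer -> Mish -> second linear layer -> sigmoid."""
    t_prime = t_j.mean()            # pooling layer: scalar representation value
    h = mish(W1 * t_prime)          # first linear layer + Mish activation
    return sigmoid(float(W2 @ h))   # second linear layer + sigmoid: weight in (0, 1)

rng = np.random.default_rng(0)
t_j = np.array([1.0, 5.0, -2.0, 4.0])   # toy original concept representation
W1 = rng.standard_normal(8)             # projects the scalar to 8 hidden units
W2 = rng.standard_normal(8)             # projects the hidden units back to a scalar
alpha_j = importance_weight(t_j, W1, W2)
```

The sigmoid at the end guarantees each concept's weight lies in (0, 1), and because each weight is computed independently from its own representation value, the concepts are activated independently of one another.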
S204, determining each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation through the image classification model.
S205, determining the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model.
According to the embodiment of the invention, the concept perception module in the image classification model compresses each original concept representation to obtain the representation value of each original concept representation; and carrying out nonlinear space projection on the characterization values of the original concept characterizations through a concept perception module to obtain importance weights of the original concept characterizations. According to the technical scheme provided by the embodiment of the invention, the importance weight of each original concept representation is obtained by determining the representation value of each original concept representation and carrying out nonlinear space projection on the representation value of each original concept representation, so that the perceptibility of the representation value of each original concept representation is improved, and the accuracy of the importance weight is improved.
Example III
Fig. 3A is a flowchart of an image classification method according to a third embodiment of the present invention. The method of this embodiment refines, on the basis of the technical solution of the foregoing embodiment, the operation of determining the target category of the image to be classified.
Further, "determining, by the image classification model, the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category" is refined into: for each image category, determining, according to the target concept representations of the image to be classified and the concept prototypes of the image category, the confidence that the image to be classified belongs to the image category; and taking the image category with the highest confidence as the target category of the image to be classified, so as to perfect the operation of determining the target category of the image to be classified.
For details not described in this embodiment, reference may be made to the description of the foregoing embodiments.
Referring to fig. 3A, the image classification method includes:
s301, determining original concept representation of the image to be classified aiming at each concept according to a concept mask of at least one concept of the image to be classified through an image classification model.
Optionally, determining, by the image classification model, an original concept representation of the image to be classified for each concept according to a concept mask of at least one concept of the image to be classified, including: determining concept images of the concept masks according to the concept masks and the images to be classified by a concept embedding module aiming at each concept mask; and extracting features of the concept image of the concept mask through the concept embedding module, and determining the original concept representation of the concept mask according to the feature extraction result of the concept image.
In this embodiment, the concept image may be a salient representation of a concept in the image to be classified, that is, an image obtained by retaining the pixel values related to the concept mask in the image to be classified and zeroing out the pixel values unrelated to the concept mask.
Specifically, the concept embedding module may include a concept image determiner, at least one concept learner, and an original concept representation determiner, wherein each concept learner corresponds to a different concept; an association relationship between concept masks and concept learners is preset in the image classification model, so that the concept mask and the concept learner of the same concept are associated with each other. For each concept mask, the concept image determiner in the concept embedding module performs a Hadamard product operation on the concept mask and the image to be classified to obtain the concept image of the concept mask, that is, the concept image of the concept corresponding to the concept mask; the concept learner associated with the concept mask in the concept embedding module performs feature extraction on the concept image to obtain the feature extraction result of the concept image; and the original concept representation determiner in the concept embedding module splices the feature extraction result of the concept image into a one-dimensional vector to obtain the original concept representation corresponding to the concept mask, that is, the original concept representation of the concept corresponding to the concept mask. Illustratively, the original concept representation may be determined by the following formula:
z_q^(j) = P(f^(j)(x_q ⊙ c^(j)))

wherein f^(j) represents the feature extraction operation of the concept learner corresponding to concept j; x_q represents the image to be classified; c^(j) represents the concept mask of concept j; ⊙ represents the Hadamard product operation; P represents the concatenation operation in the original concept representation determiner; and z_q^(j) represents the original concept representation of concept j.
It can be understood that by adopting the technical scheme, the pixel values irrelevant to the concept corresponding to the concept mask in the image to be classified are removed by determining the concept image, so that the feature extraction is performed on the concept image, and the correlation between the feature extraction result and the concept is improved; and the feature extraction results are spliced to obtain the original concept representation, so that the calculated data volume can be further reduced and the image classification efficiency can be improved under the condition of improving the accuracy of the original concept representation.
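The masking and splicing steps above can be sketched as follows. This is a hedged illustration: the identity "feature extractor" stands in for the concept learner (which is a convolutional network in practice), and the toy image and mask are assumptions for illustration only.

```python
import numpy as np

def concept_representation(image, mask, extract=lambda x: x):
    # Hadamard product: retain only the pixels related to the concept mask.
    concept_image = image * mask
    # Feature extraction by the concept learner (identity here, for illustration).
    features = extract(concept_image)
    # Splice the feature extraction result into a one-dimensional vector.
    return features.reshape(-1)

image = np.arange(16.0).reshape(4, 4)   # toy image to be classified
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                    # the concept occupies a 2x2 region
rep = concept_representation(image, mask)
```

All pixels outside the masked region contribute nothing to the representation, which is exactly the correlation-improving effect described above.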
Alternatively, the concept learner is a Conv-4 network composed of four convolution blocks. Fig. 3B is a schematic structural diagram of the Conv-4 network, which includes four convolution blocks, namely a first block, a second block, a third block and a fourth block. The first block, the second block and the fourth block each include a convolution layer, a BN (Batch Normalization) layer, a ReLU (Rectified Linear Unit) activation layer and a pooling layer; the third block includes a convolution layer, a BN layer and a ReLU activation layer. The convolution layer is used for extracting features in the image; the BN layer is used for normalizing the extracted features; the ReLU activation layer is used for performing a nonlinear transformation on the normalization result of the BN layer; and the pooling layer is used for adjusting the image size. Each convolution block may specifically perform a linear convolution operation, a normalization operation and a ReLU nonlinear activation operation. Each convolution layer adopts convolution kernels with a size of 3×3 and a stride of 1, and the number of convolution kernels is 64. The pooling layers in the first block and the second block are 2×2 max pooling layers; the pooling layer in the fourth block is an adaptive pooling layer, which is used for outputting a 64-channel feature map with a spatial size of 7×7.
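A back-of-the-envelope sketch of the spatial sizes through this Conv-4 network follows, assuming an 84×84 input (a common resolution in few-shot settings) and padding 1 so that the 3×3 stride-1 convolutions preserve spatial size; the input resolution and padding are assumptions for illustration, not values stated in the patent.

```python
def conv4_output_shape(size):
    # Block 1: size-preserving 3x3 conv + BN + ReLU + 2x2 max pool.
    size = size // 2
    # Block 2: size-preserving 3x3 conv + BN + ReLU + 2x2 max pool.
    size = size // 2
    # Block 3: conv + BN + ReLU only; spatial size unchanged.
    # Block 4: conv + BN + ReLU + adaptive pooling to a fixed 7x7 map.
    size = 7
    return (64, size, size)  # 64 convolution kernels per layer

shape = conv4_output_shape(84)  # (channels, height, width)
```

Flattening this 64×7×7 map by the original concept representation determiner yields a 3136-dimensional original concept representation per concept.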
S302, determining importance weights of the original concept characterizations through an image classification model.
S303, determining each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation through the image classification model.
S304, determining the confidence coefficient of the image to be classified belonging to the image category according to the target concept representation of the image to be classified and each concept prototype of the image category by the image classification model.
Specifically, the image classification model further includes a nearest neighbor classifier. For each image category, the nearest neighbor classifier in the image classification model determines the Euclidean distance between each target concept representation and the concept prototype of the same concept in the image category; accumulates the Euclidean distances between the target concept representations and the corresponding concept prototypes in the image category to obtain a first sum of the image category; accumulates the first sums of the image categories to obtain a second sum; and determines, according to the first sum and the second sum, the confidence that the image to be classified belongs to the image category.
By way of example, the confidence of the image to be classified may be determined by the following formula:

p_θ(y = k | x_q) = exp(−Σ_{j=1}^{N} d(ẑ_q^(j), P_k^(j))) / Σ_{k'=1}^{R} exp(−Σ_{j=1}^{N} d(ẑ_q^(j), P_{k'}^(j)))

wherein d(ẑ_q^(j), P_k^(j)) represents the Euclidean distance between the target concept representation ẑ_q^(j) of concept j of the image to be classified and the concept prototype P_k^(j) of concept j in image category k; N represents the total number of concepts; R represents the total number of image categories; p_θ(y = k | x_q) represents the confidence that the image to be classified x_q belongs to image category k; and exp represents the exponential function.
S305, taking the image category with the highest confidence as the target category of the image to be classified through the image classification model.
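The confidence computation and category selection of S304–S305 can be sketched as follows. This is an assumed softmax-over-negated-distances form consistent with the description above; the dimensions and random prototypes are illustrative assumptions.

```python
import numpy as np

def classify(target_reps, prototypes):
    """target_reps: (N, D) target concept representations of the query image.
    prototypes: (R, N, D) concept prototypes for R image categories."""
    # First sums: accumulated Euclidean distance over the N concepts,
    # one value per image category.
    dists = np.linalg.norm(prototypes - target_reps[None], axis=2).sum(axis=1)
    scores = np.exp(-dists)
    conf = scores / scores.sum()        # confidence per image category
    # Target category: the image category with the highest confidence.
    return conf, int(np.argmax(conf))

rng = np.random.default_rng(1)
protos = rng.normal(size=(3, 4, 8))               # 3 categories, 4 concepts
query = protos[2] + 0.05 * rng.normal(size=(4, 8))  # near category 2's prototypes
conf, target = classify(query, protos)
```

A query whose target concept representations lie close to one category's concept prototypes receives nearly all of the confidence mass for that category.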
In a specific implementation, the technical solution of the embodiment of the present invention is adopted to classify images of the CUB (Caltech-UCSD Birds) dataset, and the prior-art methods MatchingNet (matching network), RelationNet (relation network), MetaOptNet (meta-optimization network) and ProtoNet (prototype network) are adopted to classify images of the same dataset. Compared with MatchingNet, RelationNet, MetaOptNet and ProtoNet, the accuracy is improved by 10.8%, 8.1%, 7.1% and 10.6%, respectively.
In the embodiment of the present invention, for each image category, the confidence that the image to be classified belongs to the image category is determined according to the target concept representations of the image to be classified and the concept prototypes of the image category; the image category with the highest confidence is taken as the target category of the image to be classified, thereby improving the accuracy of determining the target category of the image to be classified.
Example IV
Fig. 4 is a flowchart of a training method for an image classification model according to a fourth embodiment of the present invention. The method may be performed by a training device for an image classification model, and the training device may be implemented in the form of hardware and/or software and specifically configured in an electronic device, such as a server.
Referring to fig. 4, the training method for the image classification model includes:
S401, determining, by an initial model, an original concept representation of the image to be trained for each concept according to a concept mask of at least one concept of the image to be trained.
S402, determining importance weights of the original concept representation through an initial model.
S403, determining each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation through the initial model.
S404, determining the prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category through the initial model.
S405, training an initial model according to the predicted category and the real category of the image to be trained to obtain an image classification model; the image classification model is used for determining the target category of the image to be classified.
In this embodiment, the initial model may include a concept embedding module, a concept perception module, and a nearest neighbor classifier. The concept embedding module includes a concept image determiner, a concept learner of at least one concept, and an original concept representation determiner; the output of the concept learner is the input of the original concept representation determiner. The concept perception module includes a pooling layer, a first linear layer, a Mish activation layer, a second linear layer, a sigmoid activation layer, and a target concept representation determiner. The output of the original concept representation determiner is the input of the pooling layer in the concept perception module; the output of the pooling layer in the concept perception module is the input of the first linear layer; the output of the first linear layer is the input of the Mish activation layer; the output of the Mish activation layer is the input of the second linear layer; the output of the second linear layer is the input of the sigmoid activation layer; the output of the sigmoid activation layer and the output of the original concept representation determiner are the inputs of the target concept representation determiner; the output of the target concept representation determiner is the input of the nearest neighbor classifier; and the nearest neighbor classifier outputs the processing result of the initial model.
In a model training stage, acquiring a sample image, a concept mask of at least one concept of the sample image, and an image category of the sample image; the sample image is different from the image to be trained; the number of sample images is usually small, for example, there are 5 image categories, and each image category includes one sample image, i.e. 5 sample images; or 5 image categories, each image category comprises 5 sample images, namely 25 sample images. For each image category, determining, by a concept embedding module of the initial model, an original concept representation of the sample image for each concept under the image category according to a concept mask of at least one concept of the sample image under the image category; determining importance weights of the original concept characterizations through a concept perception module of the initial model; determining each target concept representation of the sample image under the image category according to the importance weight of each original concept representation and each original concept representation by a target concept representation determiner of a concept perception module in the initial model; determining concept prototypes of concepts under the image category according to the target concept characterization of the sample image and the sample image quantity under the image category; illustratively, the conceptual prototype may be determined by the following formula:
P_k^(j) = (1 / |S_k|) Σ_{x_i ∈ S_k} ẑ_i^(j)

wherein P_k^(j) represents the concept prototype of concept j in image category k; |S_k| represents the total number of sample images of image category k; S_k represents the sample image set of image category k; x_i represents sample image i; and ẑ_i^(j) represents the target concept representation of concept j of sample image i.
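The prototype computation above can be sketched as a per-concept average over a category's sample images. A minimal sketch follows; the toy tensor shapes are assumptions for illustration (e.g. 2 sample images, 2 concepts, 3-dimensional representations).

```python
import numpy as np

def concept_prototypes(sample_reps):
    """sample_reps: (M, N, D) target concept representations of the M sample
    images of one image category, N concepts each.
    Returns (N, D): one concept prototype per concept for this category."""
    # Average over the sample image set S_k, concept by concept.
    return sample_reps.mean(axis=0)

# Two sample images whose representations are all-1s and all-3s respectively,
# so each prototype entry should be their mean, 2.0.
reps = np.stack([np.full((2, 3), 1.0), np.full((2, 3), 3.0)])
protos = concept_prototypes(reps)
```

In the 5-way 5-shot setting mentioned above, this averaging would be run once per category over its 5 sample images.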
An image to be trained, a concept mask of at least one concept of the image to be trained, and the real category of the image to be trained are acquired. Specifically, the concept embedding module of the initial model determines, according to the concept mask of at least one concept of the image to be trained, an original concept representation of the image to be trained for each concept; the concept perception module of the initial model determines the importance weight of each original concept representation; the target concept representation determiner of the concept perception module in the initial model determines each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation; the nearest neighbor classifier in the initial model determines the prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category; a loss function is constructed according to the prediction category of the image to be trained and the real category of the image to be trained, and the loss function is adopted to update the parameters to be trained in the initial model, wherein the parameters to be trained include the parameters in the concept learner, and the parameters of the first linear layer and the second linear layer in the concept perception module.
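The loss construction step above can be sketched as follows. The patent only states that a loss function is constructed from the predicted and real categories; the cross-entropy form below is an assumption for illustration, chosen because the classifier outputs per-category confidences.

```python
import numpy as np

def cross_entropy_loss(confidences, true_category):
    """confidences: per-category confidences from the nearest neighbor
    classifier (summing to 1). true_category: index of the real category
    of the image to be trained. Returns the negative log-confidence."""
    return -np.log(confidences[true_category])

# Confidences over 3 image categories; the real category is index 1.
conf = np.array([0.1, 0.7, 0.2])
loss = cross_entropy_loss(conf, 1)
```

Minimizing this loss pushes the confidence of the real category toward 1, which updates the concept learner and the two linear layers through the distance-based classifier.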
Optionally, before acquiring the image to be trained, the concept mask of at least one concept of the image to be trained, and the label category of the image to be trained, the method further includes: acquiring a sample image, a concept mask of at least one concept of the sample image, and the label category of the sample image, wherein the sample image is different from the image to be trained.
optionally, determining, by the initial model, importance weights of each original concept representation includes: compressing each original concept representation through a concept perception module in the initial model to obtain a representation value of each original concept representation; and carrying out nonlinear space projection on the characterization values of the original concept characterizations through a concept perception module to obtain importance weights of the original concept characterizations.
Optionally, performing nonlinear spatial projection on the characterization value of each original concept characterization by using a concept perception module to obtain an importance weight of each original concept characterization, including: processing the characterization value of each original concept characterization through a first linear layer in the concept perception module to obtain a first linear processing result of each original concept characterization; processing the first linear processing results of the original concept representation through a Mish activation layer in the concept perception module to obtain first activation processing results of the original concept representation; processing the first activation result of each original concept representation through a second linear layer in the concept perception module to obtain a second linear processing result of each original concept representation; and processing a second linear processing result of each original concept representation through a sigmoid activation layer in the concept perception module to obtain importance weights of each original concept representation.
Optionally, determining the prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category includes: aiming at each image category, determining the confidence coefficient of the image to be trained belonging to the image category according to the target concept representation of the image to be trained and each concept prototype of the image category; and taking the image type with the highest confidence as the prediction type of the image to be trained.
Optionally, determining, by the initial model, an original concept representation of the image to be trained for each concept according to a concept mask of at least one concept of the image to be trained, including: determining concept images of the concept masks according to the concept masks and the images to be trained by a concept embedding module in the initial model aiming at each concept mask; and extracting features of the concept image of the concept mask through the concept embedding module, and determining the original concept representation of the concept mask according to the feature extraction result of the concept image.
According to the technical solution of this embodiment, the initial model determines, according to the concept mask of at least one concept of the image to be trained, an original concept representation of the image to be trained for each concept; determines the importance weight of each original concept representation; determines each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation; and determines the prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category. The initial model is trained according to the prediction category and the real category of the image to be trained to obtain the image classification model, and the image classification model is used for determining the target category of the image to be classified. The image classification model obtained by training through the above technical solution can determine the importance weight of each original concept representation, determine the target concept representations according to the importance weights, and then determine the target category of the image to be classified according to the target concept representations.
Example five
Fig. 5 is a schematic structural diagram of an image classification device according to a fifth embodiment of the present invention. The embodiment of the invention can be applied to the situation of classifying images, the device can execute an image classification method, the image classification device can be realized in the form of hardware and/or software, and the device can be configured in electronic equipment, such as a server.
Referring to the image classification apparatus shown in fig. 5, it includes an original concept representation determining module 501, an importance weight determining module 502, a target concept representation determining module 503, and a target category determining module 504, wherein,
the original concept representation determining module 501 is configured to determine, according to a concept mask of at least one concept of an image to be classified, an original concept representation of the image to be classified for each concept by using the image classification model;
the importance weight determining module 502 is configured to determine an importance weight of each original concept representation through the image classification model;
the target concept representation determining module 503 is configured to determine, by using the image classification model, each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation;
the object class determining module 504 is configured to determine, by using the image classification model, an object class of the image to be classified according to each object concept representation of the image to be classified and the concept prototype of at least one image class.
In the embodiment of the present invention, the original concept representation determining module determines, by the image classification model, the original concept representations of the image to be classified for each concept according to the concept mask of at least one concept of the image to be classified; the importance weight determining module determines, by the image classification model, the importance weight of each original concept representation; the target concept representation determining module determines, by the image classification model, each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation; and the target category determining module determines, by the image classification model, the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category. According to the technical solution of the embodiment of the present invention, the importance weights of the original concept representations are determined, the target concept representations are determined according to the importance weights, and the target category of the image to be classified is then determined according to the target concept representations, thereby improving the accuracy of image classification.
Optionally, the importance weight determining module 502 includes:
the characterization value determining unit is used for compressing each original concept characterization through the concept perception module in the image classification model to obtain a characterization value of each original concept characterization;
the importance weight determining unit is used for carrying out nonlinear space projection on the characterization values of the original concept characterizations through the concept perception module to obtain the importance weights of the original concept characterizations.
Optionally, the importance weight determining unit is specifically configured to:
processing the characterization value of each original concept characterization through a first linear layer in the concept perception module to obtain a first linear processing result of each original concept characterization;
processing the first linear processing results of the original concept representation through a Mish activation layer in the concept perception module to obtain first activation processing results of the original concept representation;
processing the first activation result of each original concept representation through a second linear layer in the concept perception module to obtain a second linear processing result of each original concept representation;
and processing a second linear processing result of each original concept representation through a sigmoid activation layer in the concept perception module to obtain importance weights of the original concept representations.
Optionally, the target category determining module 504 is specifically configured to:
aiming at each image category, determining the confidence coefficient of the image to be classified belonging to the image category according to the target concept representation of the image to be classified and each concept prototype of the image category;
and taking the image category with the highest confidence as the target category of the image to be classified.
Optionally, the original concept characterization determination module 501 includes:
a concept image determining unit for determining a concept image of the concept mask according to the concept mask and the image to be classified for each concept mask through the concept embedding module;
the original concept representation determining unit is used for extracting features of the concept images of the concept masks through the concept embedding module, and determining the original concept representations of the concept masks according to the feature extraction results of the concept images.
The image classification device provided by the embodiment of the invention can execute the image classification method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the image classification method.
Example six
Fig. 6 is a schematic structural diagram of a training device for an image classification model according to a sixth embodiment of the present invention. The embodiment of the present invention can be applied to the situation of training an image classification model; the device can execute a training method for an image classification model, the training device can be implemented in the form of hardware and/or software, and the device can be configured in an electronic device, such as a server.
Referring to the training device for an image classification model shown in fig. 6, it includes an original concept representation determining module 601, an importance weight determining module 602, a target concept representation determining module 603, a prediction category determining module 604, and a model training module 605, wherein,
the original concept representation determining module 601 is configured to determine, by an initial model, an original concept representation of the image to be trained for each concept according to a concept mask of at least one concept of the image to be trained;
an importance weight determining module 602, configured to determine an importance weight of each original concept representation through an initial model;
the target concept representation determining module 603 is configured to determine, through an initial model, each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation;
a prediction category determining module 604, configured to determine, by using the initial model, a prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category;
the model training module 605 is configured to train the initial model according to the predicted category and the real category of the image to be trained, so as to obtain an image classification model; the image classification model is used for determining the target category of the image to be classified.
In the embodiment of the present invention, the original concept representation determining module determines, by the initial model, the original concept representations of the image to be trained for each concept according to the concept mask of at least one concept of the image to be trained; the importance weight determining module determines, by the initial model, the importance weight of each original concept representation; the target concept representation determining module determines, by the initial model, each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation; the prediction category determining module determines, by the initial model, the prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category; and the model training module trains the initial model according to the prediction category and the real category of the image to be trained to obtain the image classification model, which is used for determining the target category of the image to be classified. The image classification model obtained by training through the technical solution of the embodiment of the present invention can determine the importance weight of each original concept representation, determine the target concept representations according to the importance weights, and then determine the target category of the image to be classified according to the target concept representations.
Example seven
Fig. 7 shows a schematic diagram of an electronic device 700 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes at least one processor 701 and a memory communicatively connected to the at least one processor 701, such as a Read Only Memory (ROM) 702 and a Random Access Memory (RAM) 703. The memory stores a computer program executable by the at least one processor, and the processor 701 may perform various suitable actions and processes according to the computer program stored in the ROM 702 or loaded from the storage unit 708 into the RAM 703. The RAM 703 may also store various programs and data required for the operation of the electronic device 700. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The processor 701 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, or microcontroller. The processor 701 performs the various methods and processes described above, such as the image classification method or the training method of the image classification model.
In some embodiments, the image classification method or the training method of the image classification model may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into RAM 703 and executed by processor 701, one or more steps of the image classification method or training method of the image classification model described above may be performed. Alternatively, in other embodiments, the processor 701 may be configured to perform the image classification method or the training method of the image classification model in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable image classification apparatus, such that the computer programs, when executed by the processor, cause the functions/operations specified in the flowchart and/or block diagram to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility of traditional physical host and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, which is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of classifying images, the method comprising:
determining an original concept representation of the image to be classified for each concept according to a concept mask of at least one concept of the image to be classified through an image classification model;
determining importance weights of the original concept representations through the image classification model;
determining each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation through the image classification model;
and determining the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model.
2. The method of claim 1, wherein said determining, by said image classification model, importance weights for each of said original concept representations comprises:
compressing each original concept representation through a concept perception module in the image classification model to obtain a representation value of each original concept representation;
and performing nonlinear spatial projection on the representation values of the original concept representations through the concept perception module to obtain importance weights of the original concept representations.
3. The method according to claim 2, wherein said performing, by the concept perception module, nonlinear spatial projection on the representation values of each of the original concept representations to obtain importance weights of each of the original concept representations comprises:
processing the representation value of each original concept representation by a first linear layer in the concept perception module to obtain a first linear processing result of each original concept representation;
processing the first linear processing result of each original concept representation through a Mish activation layer in the concept perception module to obtain a first activation processing result of each original concept representation;
processing the first activation processing result of each original concept representation through a second linear layer in the concept perception module to obtain a second linear processing result of each original concept representation;
and processing a second linear processing result of each original concept representation through a sigmoid activation layer in the concept perception module to obtain importance weights of each original concept representation.
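The linear → Mish → linear → sigmoid chain of claims 2-3 can be sketched as follows. The squeeze step (a mean over the feature axis) and the hidden-layer width are illustrative assumptions, since the claims do not fix how the representation values are compressed or sized:

```python
import numpy as np

def mish(x):
    # Mish activation: x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def importance_weights(original_reprs, w1, b1, w2, b2):
    """Concept perception module: squeeze each representation to a scalar
    value, then project it through linear -> Mish -> linear -> sigmoid."""
    values = original_reprs.mean(axis=1)       # squeeze: one value per concept (assumed)
    hidden = mish(values[:, None] @ w1 + b1)   # first linear layer + Mish activation
    logits = hidden @ w2 + b2                  # second linear layer
    return sigmoid(logits[:, 0])               # one importance weight in (0, 1) per concept

rng = np.random.default_rng(0)
reprs = rng.normal(size=(4, 8))                # 4 concepts, 8-dim representations
w1, b1 = rng.normal(size=(1, 6)), np.zeros(6)  # hypothetical hidden width of 6
w2, b2 = rng.normal(size=(6, 1)), np.zeros(1)
weights = importance_weights(reprs, w1, b1, w2, b2)
```

The sigmoid at the end keeps every importance weight strictly between 0 and 1, so a weight can attenuate but never flip the sign of its concept representation.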
4. The method of claim 1, wherein said determining the target class of the image to be classified based on each target concept representation of the image to be classified and a concept prototype of at least one image class comprises:
for each image category, determining the confidence that the image to be classified belongs to the image category according to the target concept representations of the image to be classified and each concept prototype of the image category;
and taking the image category with the maximum confidence as the target category of the image to be classified.
5. The method of claim 1, wherein said determining, by the image classification model, from concept masks of at least one concept of the image to be classified, an original concept representation of the image to be classified for each concept comprises:
determining, by a concept embedding module in the image classification model, for each concept mask, a concept image of the concept mask according to the concept mask and the image to be classified;
and extracting features of the concept image of the concept mask through the concept embedding module, and determining the original concept representation of the concept mask according to the feature extraction result of the concept image.
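The mask-then-extract step of claim 5 can be sketched as follows. The toy mean-pooling extractor stands in for whatever feature extractor the concept embedding module actually uses, and the shapes and the multiplicative masking are assumptions:

```python
import numpy as np

def original_concept_reprs(image, masks, extract):
    """Concept embedding step: mask the image down to each concept region,
    then run a feature extractor over the resulting concept image."""
    reprs = []
    for mask in masks:
        concept_image = image * mask[None, :, :]   # keep only pixels inside the concept mask
        reprs.append(extract(concept_image))       # feature-extraction result -> representation
    return np.stack(reprs)

# toy extractor standing in for a CNN backbone: per-channel mean pooling
toy_extract = lambda img: img.reshape(img.shape[0], -1).mean(axis=1)

image = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)   # 2-channel 4x4 image
masks = np.zeros((3, 4, 4))                                  # 3 binary concept masks
masks[0, :2] = 1
masks[1, 2:] = 1
masks[2, :, :2] = 1
reprs = original_concept_reprs(image, masks, toy_extract)    # (3 concepts, 2 features)
```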
6. A method of training an image classification model, the method comprising:
determining an original concept representation of the image to be trained for each concept according to a concept mask of at least one concept of the image to be trained through an initial model;
determining importance weights of the original concept representations through the initial model;
determining each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation through the initial model;
determining a prediction category of the image to be trained according to each target concept representation of the image to be trained and a concept prototype of at least one image category through the initial model;
training the initial model according to the predicted category and the real category of the image to be trained to obtain an image classification model; the image classification model is used for determining target categories of images to be classified.
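One illustrative training step for claim 6 might look as follows, assuming prototype-based scoring and a cross-entropy loss between the predicted and real categories; the scoring rule, the loss, and the plain gradient update on the prototypes are assumptions, not the claimed method:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def train_step(prototypes, target_reprs, true_class, lr=0.1):
    """One update: score the target concept representations against each
    class's concept prototypes, then nudge the prototypes with the
    cross-entropy gradient between the predicted and real category."""
    # class scores: mean similarity to each class's concept prototypes
    scores = np.einsum('kcd,cd->k', prototypes, target_reprs) / target_reprs.size
    probs = softmax(scores)
    loss = -np.log(probs[true_class])          # cross-entropy with the real category
    grad_scores = probs.copy()
    grad_scores[true_class] -= 1.0             # d(loss)/d(scores)
    grad = grad_scores[:, None, None] * target_reprs[None] / target_reprs.size
    return prototypes - lr * grad, loss

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(3, 4, 8))        # 3 classes x 4 concept prototypes x 8 dims
target = rng.normal(size=(4, 8))               # target concept representations of one image
prototypes, first_loss = train_step(prototypes, target, true_class=0)
for _ in range(50):
    prototypes, loss = train_step(prototypes, target, true_class=0)
```

With these toy shapes the loss shrinks monotonically across steps, since the cross-entropy objective is convex in the prototypes for a fixed image.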
7. An image classification apparatus, the apparatus comprising:
the original concept representation determining module is used for determining an original concept representation of the image to be classified for each concept according to a concept mask of at least one concept of the image to be classified through the image classification model;
the importance weight determining module is used for determining the importance weight of each original concept representation through the image classification model;
the target concept representation determining module is used for determining each target concept representation of the image to be classified according to the importance weight of each original concept representation and each original concept representation through the image classification model;
and the target category determining module is used for determining the target category of the image to be classified according to each target concept representation of the image to be classified and the concept prototype of at least one image category through the image classification model.
8. An apparatus for training an image classification model, the apparatus comprising:
the original concept representation determining module is used for determining an original concept representation of the image to be trained for each concept according to a concept mask of at least one concept of the image to be trained through the initial model;
the importance weight determining module is used for determining the importance weight of each original concept representation through the initial model;
the target concept representation determining module is used for determining each target concept representation of the image to be trained according to the importance weight of each original concept representation and each original concept representation through the initial model;
the prediction category determining module is used for determining the prediction category of the image to be trained according to each target concept representation of the image to be trained and the concept prototype of at least one image category through the initial model;
the model training module is used for training the initial model according to the predicted category and the real category of the image to be trained to obtain an image classification model; the image classification model is used for determining target categories of images to be classified.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image classification method of any one of claims 1-5 or the training method of the image classification model of claim 6.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the image classification method of any one of claims 1-5 or the training method of the image classification model of claim 6.
CN202311667698.8A 2023-12-06 2023-12-06 Image classification and training method, device, equipment and medium for image classification model Pending CN117671399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311667698.8A CN117671399A (en) 2023-12-06 2023-12-06 Image classification and training method, device, equipment and medium for image classification model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311667698.8A CN117671399A (en) 2023-12-06 2023-12-06 Image classification and training method, device, equipment and medium for image classification model

Publications (1)

Publication Number Publication Date
CN117671399A true CN117671399A (en) 2024-03-08

Family

ID=90065745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311667698.8A Pending CN117671399A (en) 2023-12-06 2023-12-06 Image classification and training method, device, equipment and medium for image classification model

Country Status (1)

Country Link
CN (1) CN117671399A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination