CN111626371B - Image classification method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN111626371B
CN111626371B
Authority
CN
China
Prior art keywords
similarity
image
category
registration
classified
Prior art date
Legal status
Active
Application number
CN202010476801.0A
Other languages
Chinese (zh)
Other versions
CN111626371A (en)
Inventor
白雨辰
Current Assignee
Goertek Technology Co Ltd
Original Assignee
Goertek Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Goertek Technology Co Ltd
Priority to CN202010476801.0A
Publication of CN111626371A
Application granted
Publication of CN111626371B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features


Abstract

The invention discloses an image classification method comprising the following steps: acquiring an image to be classified and performing feature extraction processing on it to obtain a feature to be classified; determining a plurality of registration sample features corresponding to a plurality of registration categories, each registration category corresponding to at least one registration sample feature, and performing feature calculation with each registration sample feature and the feature to be classified to obtain a plurality of difference features; performing matching processing on the difference features with a classifier to obtain the similarity corresponding to each difference feature; and determining a target similarity from the similarities and determining the category of the image to be classified as the target registration category corresponding to the target similarity. By performing this second layer of feature extraction and selecting the most likely category among the plurality of categories as the category of the image to be classified, the method improves classification accuracy. In addition, the invention also provides an image classification apparatus, an image classification device and a computer-readable storage medium, which have the same beneficial effects.

Description

Image classification method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of image classification technologies, and in particular, to an image classification method, an image classification device, an image classification apparatus, and a computer readable storage medium.
Background
Feature extraction is a concept in computer vision and image processing. It refers to using a computer to extract image information and determine whether each point of an image belongs to an image feature. Common feature extraction algorithms fall into two families: on one hand, feature extraction based on matrices or feature descriptors; on the other hand, feature extraction based on deep learning methods. Because the traditional classification method based on feature descriptors has a small memory footprint, the related art often adopts the traditional method to extract features when classifying images. However, the feature recognition rate obtained by the traditional method is low, and the traditional classification method has only mediocre accuracy, so the classification accuracy of the related art is low.
Therefore, how to solve the problem of low classification accuracy in the related art is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, an object of the present invention is to provide an image classification method, an image classification apparatus, an image classification device, and a computer-readable storage medium, which solve the problem of low classification accuracy in the related art.
In order to solve the above technical problems, the present invention provides an image classification method, including:
acquiring an image to be classified, and carrying out feature extraction processing on the image to be classified to obtain features to be classified;
determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation by using each registration sample feature and the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
matching the difference features by using a classifier to obtain the similarity corresponding to each difference feature;
and determining target similarity by utilizing the similarity, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
Optionally, performing matching processing on the difference features by using a classifier to obtain the similarity corresponding to each difference feature includes:
inputting the difference features into the classifier to obtain a preset number of neighborhood voting results;
and obtaining the similarity according to the neighborhood voting result.
Optionally, the determining the target similarity by using the similarity includes:
integrating the similarities to obtain a plurality of category similarities;
comparing each category similarity with a first threshold value, and determining the category similarity larger than the first threshold value as a candidate similarity;
when the number of candidate similarities is one, determining the candidate similarity as the target similarity;
when the number of candidate similarities is greater than one, sorting the candidate similarities in descending order to obtain a similarity sequence;
determining similarities greater than a second threshold as legal similarities, and counting the number of legal similarities corresponding to each candidate similarity;
and adjusting the similarity sequence in descending order of the number of legal similarities, and determining the first candidate similarity in the similarity sequence as the target similarity.
Optionally, the integrating of the similarities to obtain a plurality of category similarities includes:
calculating an average value by using the similarity corresponding to the same registration category to obtain the category similarity corresponding to the registration category;
or,
determining the similarity weight corresponding to each similarity;
and based on the similarity weight, carrying out weighted average calculation by using the similarity corresponding to the same registration category to obtain the category similarity.
Optionally, the acquiring the image to be classified includes:
acquiring an original image and counting quality parameters of the original image;
when the quality parameter is in a preset interval, acquiring a weight coefficient, and calculating an evaluation score corresponding to the original image according to the weight coefficient;
when the evaluation score is larger than a preset evaluation threshold, preprocessing the original image to obtain the image to be classified; the preprocessing is any one or the combination of any several of median filtering, mean filtering, histogram equalization, bilateral filtering and Gaussian filtering.
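The quality gate and preprocessing described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the quality parameters (brightness and contrast), the weight coefficients, the evaluation threshold of 0.3 and the function names are all chosen here for demonstration, and a naive 3x3 median filter stands in for any of the listed preprocessing options:

```python
import numpy as np

def quality_score(img, w_brightness=0.5, w_contrast=0.5):
    # Weighted evaluation score from two simple quality parameters
    # (hypothetical weights; the patent does not fix the parameters).
    brightness = img.mean() / 255.0
    contrast = img.std() / 128.0
    return w_brightness * brightness + w_contrast * contrast

def median_filter3(img):
    # Naive 3x3 median filter, one of the preprocessing options listed above.
    padded = np.pad(img, 1, mode='edge')
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)

rng = np.random.default_rng(0)
raw = rng.integers(0, 256, (16, 16), dtype=np.uint8)  # stand-in original image
if quality_score(raw) > 0.3:            # hypothetical evaluation threshold
    to_classify = median_filter3(raw)   # the image to be classified
```

In practice a library such as OpenCV would supply the listed filters; the point here is only the gate-then-preprocess flow.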
Optionally, before the matching processing is performed on the difference features by using a classifier, the method further includes:
acquiring a plurality of training sample characteristics corresponding to each registration category;
forming positive sample data pairs from any two training sample features of the same category, and applying a positive label to the positive sample data pairs;
forming negative sample data pairs from any two training sample features of different categories, and applying a negative label to the negative sample data pairs;
and training an initial classifier by using the positive sample data pairs and the negative sample data pairs to obtain the classifier.
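The pair construction just described can be sketched as follows, under the assumption that training sample features are stored per category; the dictionary layout and function name are illustrative, not from the patent:

```python
from itertools import combinations

def build_pairs(features_by_category):
    """Positive pairs: any two training sample features of the same
    category, labelled 1. Negative pairs: any two features of different
    categories, labelled 0. An initial classifier is trained on both."""
    pairs = []
    categories = list(features_by_category)
    for cat in categories:
        for a, b in combinations(features_by_category[cat], 2):
            pairs.append((a, b, 1))    # positive label processing
    for ca, cb in combinations(categories, 2):
        for a in features_by_category[ca]:
            for b in features_by_category[cb]:
                pairs.append((a, b, 0))  # negative label processing
    return pairs

pairs = build_pairs({"cat_A": [0.1, 0.2], "cat_B": [0.9]})
print(len(pairs))  # 3: one positive pair within cat_A, two negative pairs
```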
Optionally, the method further comprises:
the registration sample features are sent to a cloud end, so that the cloud end can integrate the registration sample features with training sample features and then conduct classifier training;
and acquiring classifier parameters sent by the cloud, and updating the classifier by utilizing the classifier parameters.
The invention also provides an image classification device, which comprises:
the feature extraction module is used for acquiring an image to be classified, and carrying out feature extraction processing on the image to be classified to obtain features to be classified;
the difference feature acquisition module is used for determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation by using each registration sample feature and the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
the similarity determining module is used for carrying out matching processing on the difference features by using a classifier to obtain the similarity corresponding to each difference feature;
and the classification module is used for determining target similarity by utilizing the similarity, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
The invention also provides an image classification device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the image classification method described above.
The invention also provides a computer readable storage medium for storing a computer program, wherein the computer program is executed by a processor to implement the image classification method.
According to the image classification method provided by the invention, the image to be classified is obtained, and the feature extraction processing is carried out on the image to be classified to obtain the feature to be classified; determining a plurality of registration sample features corresponding to the registration categories, and performing feature calculation by using each registration sample feature and the feature to be classified to obtain a plurality of difference features; each registration category corresponds to at least one registration sample feature; matching the difference features by using a classifier to obtain the similarity corresponding to each difference feature; and determining the target similarity by using the similarity, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
Therefore, after feature extraction is performed on the image to be classified, a second layer of feature extraction, namely the feature calculation, is performed using the feature to be classified and the registration sample features to obtain the difference features. The difference features reflect the characteristics of the image to be classified better than the feature to be classified alone, so classifying according to them improves accuracy. In addition, because the classification accuracy of the traditional classification method is poor, obtaining the similarity of each difference feature and determining the target similarity from the similarities further improves accuracy: the most likely category among the categories to which the similarities belong can be selected as the category of the image to be classified. By performing the second-layer feature extraction and determining the category of the image to be classified by using the similarities, the classification accuracy is improved and the problem of low classification accuracy in the related art is solved.
In addition, the invention also provides an image classification device, an image classification device and a computer readable storage medium, which also have the beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an image classification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a specific target similarity determination method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a specific image obtaining method to be classified according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image classification device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image classification device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a face recognition system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a detection unit according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a binocular detection module according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an identification unit according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specifically, in one possible implementation, please refer to fig. 1, fig. 1 is a flowchart of an image classification method provided in an embodiment of the present invention. The method comprises the following steps:
s101: and obtaining an image to be classified, and carrying out feature extraction processing on the image to be classified to obtain features to be classified.
In this embodiment, the device performing all or part of the steps of the image classification method may be referred to as the present device or the image classification device. Its specific type is not limited: it may be a mobile terminal, for example a handheld terminal such as a mobile phone, or a non-mobile terminal such as a cloud server.
Specifically, the image to be classified is the image whose category is to be determined, for example a face image or a plant image. It may be a directly acquired image, that is, an image captured by an imaging apparatus or a similar image acquisition apparatus; or it may be a preprocessed image, that is, an original image that is preprocessed after being acquired by the image acquisition device. The specific content of the preprocessing is not limited; the image to be classified is obtained once preprocessing finishes, and the preprocessing may be completed by the present device or by other devices or terminals.
The specific extraction method adopted in the feature extraction process is not limited. It may be a traditional feature extraction method based on feature descriptors, for example the LBP (Local Binary Patterns) method, i.e. feature extraction based on the LBP descriptor; or a method based on the Haar descriptor (Haar wavelets were proposed by Alfred Haar in 1909); or a method based on the HOG (Histogram of Oriented Gradients) descriptor; or it may be a feature extraction method based on deep learning. Specifically, when the device is a mobile terminal, a traditional descriptor-based feature extraction method may be chosen to improve operating efficiency and reduce memory footprint. After the feature extraction processing is performed on the image to be classified, the feature to be classified is obtained; its specific form and content are related to the classification algorithm and the image to be classified, which this embodiment does not limit.
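As an illustration of the descriptor-based option mentioned above, the following is a minimal sketch of 8-neighbour LBP feature extraction in NumPy. The function name and the choice of pooling the codes into a normalised 256-bin histogram are assumptions made here for demonstration, not details fixed by the patent:

```python
import numpy as np

def lbp_feature(image):
    """Minimal 8-neighbour LBP: each interior pixel is encoded by comparing
    it with its eight neighbours, and the codes are pooled into a
    normalised 256-bin histogram used as the feature vector."""
    h, w = image.shape
    center = image[1:h - 1, 1:w - 1]
    code = np.zeros_like(center, dtype=np.uint8)
    # offsets of the 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised feature vector

img = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
feat = lbp_feature(img)
print(feat.shape)  # (256,)
```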
S102: and determining a plurality of registration sample characteristics corresponding to the registration categories, and performing characteristic calculation by using the registration sample characteristics and the characteristics to be classified respectively to obtain a plurality of difference characteristics.
The registration categories are the categories to which the images to be classified may belong, and the number of the registration categories is plural, and it is to be noted that each registration category corresponds to at least one registration sample feature. The registered sample features are generally features corresponding to images successfully classified by the device before the current classification, but may also include features pre-stored locally or features of images sent by other devices or terminals. The number of registration sample features corresponding to each registration category can be the same or different, so that each registration category is guaranteed to have enough registration sample features. To ensure the privacy of the registered sample features, they may be stored locally in an encrypted manner.
After the features to be classified are obtained, registered sample features may be determined to obtain difference features. Specifically, feature calculation is performed by using each registered sample feature and the feature to be classified, and the feature calculation can be specifically one or a combination of more of feature vector subtraction, feature vector averaging and feature vector deviation, and can be set according to actual needs. After the feature calculation is finished, a plurality of difference features can be obtained, the difference features correspond to the registered sample features one by one, and the specific content of the difference features is related to the feature calculation method, so that the difference features are not limited.
Through feature calculation, the features to be classified can be subjected to secondary feature extraction, so that the features of the features to be classified can be more fully reflected, and the classification accuracy is improved.
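The feature calculation above can be sketched with one of the options the text lists, feature vector subtraction; taking the absolute difference and the array layout below are illustrative choices:

```python
import numpy as np

def difference_features(feature, registered_features):
    """Elementwise absolute subtraction between the feature to be
    classified and every registration sample feature, yielding one
    difference feature per registered sample (rows correspond 1:1)."""
    return np.abs(registered_features - feature)

feature = np.array([0.2, 0.5, 0.3])                 # feature to be classified
registered_features = np.array([[0.1, 0.6, 0.3],    # registration sample 1
                                [0.9, 0.0, 0.1]])   # registration sample 2
diffs = difference_features(feature, registered_features)
print(diffs.shape)  # (2, 3): one difference feature per registered sample
```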
S103: and carrying out matching processing on the difference features by using a classifier to obtain the similarity corresponding to each difference feature.
The classifier is trained in advance and is used to perform matching processing on the difference features to obtain the similarity corresponding to each difference feature. The specific type of the classifier is not limited: it may be a classifier based on an SVM (Support Vector Machine), on cosine distance, on Euclidean distance, or on LDA (Linear Discriminant Analysis); or it may be a classifier trained with a deep learning method, for example a Contrastive Loss classifier, a Triplet Loss classifier, a Center Loss classifier, or an A-Softmax Loss (Angular Softmax Loss) classifier.
By performing matching processing on the difference features with the classifier, the similarity corresponding to each difference feature can be calculated. The similarity represents the degree of similarity between each registration sample feature and the feature to be classified, and may be expressed as a score out of one hundred or as a percentage.
S104: and determining the target similarity by using the similarity, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
And after the similarity is obtained, integrating the similarity to obtain the category similarity, and determining the target similarity in the category similarity. It should be noted that, the category similarity is used to represent the possible degree that the feature to be classified belongs to a certain registration category, the target similarity corresponds to the registration category to which the feature to be classified most likely belongs, after determining the target similarity, the corresponding target registration category is determined, and the category of the image to be classified is determined as the target registration category, so as to complete the classification process of the image to be classified.
The target similarity is determined by utilizing the similarity, the result obtained after classification can be screened and determined for the second time, and the category with the highest possibility is selected from the categories (namely, each registration category) to which each similarity belongs as the category of the image to be classified, so that the classification accuracy is improved.
After the image classification method provided by the embodiment of the invention is applied, the image to be classified is subjected to feature extraction, and then the feature to be classified and the registered sample feature are utilized to carry out second-layer feature extraction, namely, feature calculation is carried out, so that differential features are obtained, the differential features can reflect the characteristics of the image to be classified more than the feature to be classified, and the classification accuracy can be improved according to classification. In addition, the classification accuracy of the traditional classification method is poor, and the classification accuracy is further improved by obtaining the similarity of each difference feature and determining the target similarity in the similarity, wherein the category with the highest possibility can be selected from the categories corresponding to each similarity as the category of the image to be classified. By performing the second-layer feature extraction process and determining the category of the image to be classified by using the similarity, the classification accuracy can be improved, and the problem of low classification accuracy in the related technology is solved.
Based on the above embodiment, in a possible implementation manner, after obtaining the similarity, the target similarity may be determined according to the first threshold and the second threshold so as to improve the accuracy of classifying the image to be classified, please refer to fig. 2, and fig. 2 is a flowchart of a specific target similarity determining method provided by an embodiment of the present invention, where the method includes:
s201: and inputting the difference characteristics into a classifier to obtain a preset number of neighborhood voting results.
In this embodiment, the classifier is trained in advance and may be a KNN (k-Nearest Neighbors) classifier. After a difference feature is input into the classifier, the classifier votes in a preset number of different neighborhoods, yielding the preset number of neighborhood voting results. The preset number may be set according to actual needs, for example to 100 or to 50. Specifically, when the result obtained after voting in a certain neighborhood is same-class, that neighborhood voting result may be output as 1; when the result obtained after voting in a certain neighborhood is different-class, the neighborhood voting result may be output as 0. After each difference feature is input into the classifier, the preset number of neighborhood voting results is obtained.
S202: and obtaining the similarity according to the neighborhood voting result.
After the neighborhood voting results are obtained, they are counted according to a similarity statistics method to obtain the similarity. The specific statistics method is not limited in this embodiment. For example, when the number of neighborhood voting results is 100, the number of results equal to 1, for example 90, can be counted and directly taken as the similarity of the difference feature, i.e. a similarity of 90. Or, when the number of neighborhood voting results is 50, the number of results equal to 1, for example 40, can be counted, multiplied by a weight coefficient of 2 and divided by 100, i.e. 40 x 2 / 100 = 0.8, giving a similarity of 0.8. Using the methods in steps S201 and S202, the similarity corresponding to each of the plurality of difference features can be obtained.
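The two counting examples above can be sketched as one small function. It assumes the voting results arrive as a 0/1 list and, unlike the first example in the text (which keeps the raw count 90), it uniformly maps both cases to [0, 1]:

```python
def similarity_from_votes(votes):
    """votes: 0/1 neighbourhood voting results from the KNN classifier.
    The count of 1-votes is scaled as if 100 votes were cast, then mapped
    to [0, 1]; with 50 votes the scale factor is the weight coefficient 2,
    so 40 same-class votes give 40 * 2 / 100 = 0.8."""
    scale = 100 / len(votes)
    return sum(votes) * scale / 100

print(similarity_from_votes([1] * 40 + [0] * 10))  # 0.8 (50-vote case)
print(similarity_from_votes([1] * 90 + [0] * 10))  # 0.9 (100-vote case)
```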
S203: and carrying out integration processing on each similarity to obtain a plurality of category similarities.
Since the similarity corresponds to the difference feature, and the difference feature corresponds to a certain registration category, in order to determine which registration category the image to be classified is more likely to belong to, the similarity needs to be integrated so as to obtain the similarity between each registration category and the feature to be classified, i.e. the category similarity. The category similarity represents the overall similarity degree between the feature to be classified and the registered sample feature in a certain registered category, and also represents the possibility degree that the image to be classified belongs to a certain registered category. The number of category similarities is the same as the number of registration categories, and may be, for example, M.
Specifically, the method of the integration treatment is various, and can be selected according to the needs. In one possible implementation, in order to ensure the speed of the integration process, the step S203 may include:
s2031: and calculating an average value by using the similarity corresponding to the same registration category to obtain the category similarity corresponding to the registration category.
In this embodiment, the average value of the similarities corresponding to the same registration category may be calculated, so that the category similarity corresponding to the registration category may be obtained, and the category similarity may be obtained only by one calculation operation, thereby ensuring the speed of the integration processing.
In another embodiment, in order to improve accuracy of the category similarity, a weighted average calculation method may be used to calculate the category similarity, and specifically, step S203 may include:
s2032: and determining the similarity weight corresponding to each similarity.
Because some registration feature samples are of low quality, the reference value of their corresponding similarity is low. To calculate the category similarity more accurately, a corresponding similarity weight can be determined for each similarity in advance. The similarity weight can be set according to the actual situation, and may take values such as 0 or 1.
S2033: and based on the similarity weight, carrying out weighted average calculation by using the similarity corresponding to the same registration category to obtain the category similarity.
And after the similarity weight is determined, carrying out weighted average calculation on the similarity of the same registration category based on the similarity weight, and obtaining the category similarity. The method can calculate the category similarity more accurately so as to improve the recognition accuracy.
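Both integration variants, the plain average of S2031 and the weighted average of S2032 and S2033, can be sketched in one function; the function name and default behaviour are illustrative:

```python
import numpy as np

def category_similarity(similarities, weights=None):
    """Integrate the similarities belonging to one registration category:
    a plain average when no weights are given (S2031), otherwise a
    weighted average using the per-similarity weights (S2032-S2033)."""
    similarities = np.asarray(similarities, dtype=float)
    if weights is None:
        return float(similarities.mean())
    weights = np.asarray(weights, dtype=float)
    return float((similarities * weights).sum() / weights.sum())

print(round(category_similarity([0.8, 0.6]), 3))             # 0.7
# A weight of 0 discards a low-quality sample's similarity entirely:
print(round(category_similarity([0.8, 0.6], [1.0, 0.0]), 3))  # 0.8
```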
S204: and comparing the similarity of each category with a first threshold value, and determining the similarity of the category larger than the first threshold value as a candidate similarity.
The first threshold is compared with the category similarities to determine candidate similarities. When a category similarity is greater than the first threshold, the image to be classified may belong to the registration category corresponding to that category similarity, so it is determined to be a candidate similarity. When a category similarity is not greater than the first threshold, the similarity between the feature to be classified and the registration sample features in that registration category is low, so the image to be classified cannot belong to that registration category, and the registration category is discarded. The specific size of the first threshold is not limited; it may be, for example, 90% of the maximum value of the category similarities.
S205: when the number of candidate similarities is one, determine the candidate similarity as the target similarity.

If only one candidate similarity is obtained, the image to be classified can only belong to the corresponding registration category. The candidate similarity is therefore determined as the target similarity, and the category of the image to be classified is subsequently determined as the target registration category corresponding to the target similarity.
S206: when the number of candidate similarities is greater than one, sort the candidate similarities in descending order to obtain a similarity sequence.

When the number of candidate similarities is greater than one, there are several candidates for the category to which the image to be classified may belong. The candidate similarities are therefore sorted in descending order to obtain a similarity sequence; the similarity sequence is then adjusted using a second threshold, and the target similarity is determined from the adjusted similarity sequence.
S207: determine similarities larger than a second threshold as legal similarities, and count the number of legal similarities corresponding to each candidate similarity.

The second threshold is compared with the individual similarities to determine the legal similarities. A feature to be classified may resemble some registered sample features only in minor parts while differing in the main parts; the resulting similarities are low and do not describe genuine similarity between the image to be classified and the images corresponding to those registered sample features. Such similarities therefore need to be filtered out when determining the target similarity, to avoid harming classification accuracy. Specifically, each similarity is compared with the second threshold, similarities larger than the second threshold are determined as legal similarities, and the number of legal similarities corresponding to each candidate similarity, i.e. the legal similarity count, is counted. The specific size of the second threshold is not limited in this embodiment and may be set according to the actual situation.
S208: adjust the similarity sequence in descending order of the legal similarity counts, and determine the first candidate similarity in the similarity sequence as the target similarity.

After the legal similarity count corresponding to each candidate similarity is obtained, the similarity sequence is adjusted in descending order of the legal similarity counts, i.e. candidate similarities with larger legal similarity counts are moved forward. When several candidate similarities have the same legal similarity count, they are ordered by the size of the candidate similarity itself, from large to small, which completes the adjustment of the similarity sequence. The first candidate similarity in the adjusted similarity sequence, i.e. the candidate similarity at the head of the sequence, is determined as the target similarity.

Further, to shorten the similarity sequence and reduce the time required for adjustment, a third threshold may be compared with the legal similarity counts after they are obtained. If the legal similarity count corresponding to a candidate similarity is smaller than the third threshold, that candidate similarity can be removed from the similarity sequence, shortening the sequence and reducing the time required for the subsequent adjustment. For example, when the number of similarities corresponding to each registration category is N, the third threshold may be set to N-1.
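Steps S204 to S208 can be sketched as follows; the function name, the tuple layout, and the optional third-threshold pruning argument are illustrative assumptions:

```python
def select_target_similarity(category_sims, legal_counts,
                             first_threshold, third_threshold=None):
    """Steps S204-S208: keep category similarities above the first
    threshold, optionally prune candidates whose legal similarity count
    is below the third threshold, then sort in descending order of legal
    count with the candidate similarity itself breaking ties."""
    candidates = [(count, sim)
                  for sim, count in zip(category_sims, legal_counts)
                  if sim > first_threshold]
    if third_threshold is not None:
        candidates = [(c, s) for c, s in candidates if c >= third_threshold]
    if not candidates:
        return None  # no registration category matched
    # Tuples sort descending by legal count first, then by similarity
    candidates.sort(reverse=True)
    return candidates[0][1]  # the target similarity
```

Note how a candidate with a slightly lower category similarity but more legal similarities wins, which is exactly the adjustment described in step S208.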
Based on the above embodiment, in one possible implementation, preprocessing may be performed while acquiring the image to be classified, to avoid wasting computing resources on classifying images of poor quality. Specifically, referring to fig. 3, fig. 3 is a flowchart of a specific method for obtaining an image to be classified according to an embodiment of the present invention, including:
S301: acquire an original image and count the quality parameters of the original image.

In this embodiment, the original image is the image before preprocessing. To ensure the quality of the image to be identified, and to skip subsequent operations when the quality is low and thereby avoid wasting computing resources, the original image may be evaluated twice, using the quality parameters and then the evaluation score. The quality parameters are used for the first evaluation of the original image's quality; their specific content is not limited and may include, for example, sharpness, brightness and composite gradient.
S302: when the quality parameter is within a preset interval, acquire the weight coefficients and calculate the evaluation score corresponding to the original image according to the weight coefficients.

In this embodiment, brightness may be used as a quality parameter. After the original image is acquired, the average brightness of the pixels in the original image is calculated, determined as the quality parameter, and checked against a preset interval, which may be set to [50,140]. When the quality parameter is not within the preset interval, a preset operation may be performed; the preset operation may be re-acquiring the original image, some other operation, or no operation at all. When the quality parameter is within the preset interval, the first evaluation is passed, so the weight coefficients can be acquired and the corresponding evaluation score calculated.
The weight coefficients are trained in advance and used for generating the evaluation score; their specific sizes are not limited in this embodiment. There may be one or more weight coefficients, corresponding, for example, to sharpness and composite gradient (multi-gradient) respectively. When calculating the evaluation score, the values of the quality parameters corresponding to the weight coefficients are acquired first. In this embodiment, sharpness and composite gradient may be determined as the quality parameters; with their weight coefficients denoted w1 and w2, the evaluation score S may be:

S = w1*sharpness + w2*multi-gradient, S ∈ (0, 1)
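The two-stage quality check is a single weighted sum preceded by a brightness interval test. A minimal sketch, where the example weights w1 = 0.6 and w2 = 0.4 are assumed rather than taken from the embodiment:

```python
def first_evaluation(avg_brightness, low=50, high=140):
    """First evaluation (step S301/S302): average brightness
    must lie in the preset interval [50, 140]."""
    return low <= avg_brightness <= high

def evaluation_score(sharpness, multi_gradient, w1, w2):
    """Second evaluation (step S302): S = w1*sharpness + w2*multi-gradient.
    Inputs and weights are assumed normalized so that S falls in (0, 1)."""
    return w1 * sharpness + w2 * multi_gradient
```

An image with average brightness 120, sharpness 0.9 and composite gradient 0.7 passes the first evaluation and, under the assumed weights, scores 0.82, clearing the example evaluation threshold of 0.8.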
S303: when the evaluation score is larger than a preset evaluation threshold, preprocess the original image to obtain the image to be classified.

The preset evaluation threshold and the evaluation score together form the second evaluation of the original image. When the evaluation score is larger than the preset evaluation threshold, the original image can be determined to be of sufficiently high quality to be classified, so it is preprocessed to obtain the image to be classified. The preprocessing is any one, or a combination of any several, of median filtering, mean filtering, histogram equalization, bilateral filtering and Gaussian filtering. The specific size of the preset evaluation threshold is not limited and may be set, for example, to 0.8.
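As an illustration of one of the listed preprocessing options, a minimal 3×3 median filter in plain NumPy; in practice a library routine (e.g. an OpenCV median blur) would more likely be used:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter, one of the preprocessing options named in
    step S303; borders are handled by edge padding."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the 9 shifted views of the image, one per window position
    windows = np.stack([padded[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)])
    return np.median(windows, axis=0)
```

A single salt-noise pixel in an otherwise flat region is removed entirely, which is why median filtering is a common preprocessing step before classification.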
Based on the above embodiment, before the classifier is used to perform the matching process on the difference features, the classifier may be trained, specifically including:
step 1: and acquiring a plurality of training sample characteristics corresponding to each registration category.
The training sample features are used to train the classifier, and the specific number of the training sample features is not limited in this embodiment. In order to achieve a better training effect and ensure classification accuracy, in this embodiment, the same number of training sample features are allocated to each registration class, for example, when there are M registration classes, one registration class corresponds to N training sample features.
Step 2: form positive sample data pairs from any two training sample features of the same category, and apply positive-label processing to the positive sample data pairs.

Specifically, training sample features belonging to the same registration category are paired to form positive sample data pairs. For example, for the N training sample features of the i-th registration category, i ∈ {1, 2, …, M}, C(N,2) = N(N-1)/2 corresponding positive sample data pairs can be obtained and given positive labels; the positive label may be 1. Summing over all M registration categories, a total of T positive sample data pairs can thus be obtained, with T = M × C(N,2).
Step 3: construct negative sample data pairs from any two training sample features of different categories, and apply negative-label processing to the negative sample data pairs.

Similarly to step 2, two training sample features belonging to different registration categories form a negative sample data pair, and negative-label processing is performed on it; in this embodiment the negative label may be 0. To ensure the training effect, the number of negative sample data pairs may also be T.
Step 4: training the initial classifier by using the positive sample data pair and the negative sample data pair to obtain the classifier.
After the positive sample data pair and the negative sample data pair are obtained, training the initial classifier by using the positive sample data pair and the negative sample data pair, and obtaining the classifier after training is finished.
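Steps 1 to 3 can be sketched as follows; the dictionary layout and function name are assumptions, and the sampling that would cap the negative pairs at the same count T is omitted for brevity:

```python
from itertools import combinations, product

def build_training_pairs(features_by_class):
    """Positive pairs (label 1) from features of the same registration
    category, negative pairs (label 0) from features of different
    categories. For M classes of N features each, this yields
    T = M * N * (N - 1) / 2 positive pairs."""
    positives = [(a, b, 1)
                 for feats in features_by_class.values()
                 for a, b in combinations(feats, 2)]
    negatives = [(a, b, 0)
                 for (_, fi), (_, fj) in combinations(features_by_class.items(), 2)
                 for a, b in product(fi, fj)]
    return positives, negatives
```

For M = 2 classes with N = 3 features each, this produces T = 2 × C(3,2) = 6 positive pairs; the 9 raw negative pairs would then be subsampled to 6 to match.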
In another possible implementation, training of the classifier can be completed in the cloud: the training sample features are sent to the cloud, the classifier parameters sent back by the cloud after training are acquired, and the initial classifier is configured with these classifier parameters to obtain the classifier.
Further, to ensure classification accuracy, the classifier may be updated after a period of time, specifically:
step 5: and sending the registered sample characteristics to the cloud end so that the cloud end can integrate the registered sample characteristics with the training sample characteristics and then perform classifier training.
In this embodiment, to increase the update speed of the classifier, the update training of the classifier may be completed in a cloud with more powerful computing capacity. Specifically, the registration sample features may be sent to the cloud, so that the cloud obtains new training data, i.e. the registration sample features, integrates them with the training sample features, and trains the classifier on the integrated data. Because the registration sample features participate in the update training, the classifier receives targeted training and achieves a better classification effect.
Step 6: and acquiring classifier parameters sent by the cloud and updating the classifier by using the classifier parameters.
After training, the cloud end sends the classifier parameters, and after the classifier parameters are obtained, the device can update the classifier by using the classifier parameters to finish updating the classifier. Further, in order to ensure the privacy of the registered sample features, a deletion instruction can be sent to the cloud after the classifier is updated, so that the cloud deletes the acquired registered sample features.
Based on the above embodiment, a specific implementation is described here to illustrate the above method as applied to face recognition and classification. Referring to fig. 6, fig. 6 is a schematic structural diagram of a face recognition system according to an embodiment of the present invention. The face recognition system 600 includes a cloud end and a mobile end. The mobile end includes a system control unit 601, a detection unit 602, a quality evaluation unit 603, a feature extraction unit 604, an identification unit 605 and a mobile end memory 606, where the mobile end memory 606 stores the registration sample features and their labels (i.e. the registration categories). The cloud includes a quality analysis unit 607 and a data updating and encrypting unit 608.
The system control unit 601 is configured to control the image acquisition device to acquire original images, send them to the detection unit 602, and receive control instructions from the other units. Specifically, original images may be acquired at a first frame rate of N frames per second; when no instruction to stop acquiring has been received, or when an instruction to continue acquiring is received, acquisition continues. After a first time length of S seconds, original images are acquired at a second frame rate, and after a second time length of S seconds, acquisition of original images stops.
The detection unit 602 is configured to perform face recognition on the image and determine whether the original image is a face image. Specifically, referring to fig. 7, fig. 7 is a schematic structural diagram of a detection unit according to an embodiment of the present invention. The face detection module 701 performs pre-detection, i.e. detects whether a possible face exists in the original image, and the face detection judging module 702 determines whether this detection succeeded. If no face is detected, the detection is unsuccessful and failure is fed back, which may mean sending a continue-acquisition instruction to the system control unit 601 to obtain another original image, or performing no operation and waiting for the next original image sent by the system control unit 601. If the detection succeeds, the nose and mouth detection module 703 detects the nose and mouth, and the nose and mouth detection judging module 704 judges whether this detection succeeded; if unsuccessful, failure is fed back. If successful, the original image may be determined as the image to be identified and passed to the binocular detection module 705. If binocular detection is unsuccessful, failure is fed back; if successful, the face correction module 707 corrects the face image, and success may also be fed back to the system control unit 601 so that acquisition of original images stops.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a binocular detection module according to an embodiment of the present invention. In binocular detection, modules 801 and 807 perform left-eye and right-eye monocular detection respectively, i.e. monocular detection based on the left eye region and on the right eye region. Modules 802 and 808 determine whether these detections succeeded; if so, modules 806 and 811 obtain the left eye coordinates in the left eye region or the right eye coordinates in the right eye region. If either monocular detection is unsuccessful, modules 803 and/or 809 detect the eyes, and modules 804 and/or 810 judge whether this detection succeeded; if it succeeded, the left eye coordinates matching the left eye region or the right eye coordinates matching the right eye region can still be obtained. If not, module 805 informs the system control unit 601 that detection failed.
After the eyes are successfully detected, the original image may be input into the face correction module 707 for affine transformation correction and face clipping, obtaining a face image. The face image is then input into the quality evaluation unit 603, which evaluates its quality: the quality parameter of the face image is counted and checked against the preset interval; if it is within the preset interval, the weight coefficients are acquired and the evaluation score corresponding to the face image is calculated according to them; when the evaluation score is greater than the preset evaluation threshold, the face image is determined as the image to be classified. It should be noted that the weight coefficients required by the quality evaluation unit 603 may be trained in the quality analysis unit 607 of the cloud and sent to the mobile end after training. Specifically, the mobile end may acquire P face images, where P may be, for example, 1000, and send them to the cloud; the cloud then performs quality analysis training on these images according to the quality analysis training model to obtain the weight coefficients.
After the image to be recognized is obtained, the feature extraction unit 604 performs feature extraction on it, and the extracted feature is input into the identification unit 605. Referring to fig. 9, fig. 9 is a schematic structural diagram of an identification unit according to an embodiment of the invention. The identification unit 605 may obtain the registered feature samples from the mobile end memory using 901, with M classes in total and N samples per class. The mathematical operation, i.e. the calculation of the difference features, is performed by 902; the operation may be vector subtraction, vector averaging, vector deviation or a combination of several methods. Samples of the features to be measured (i.e. the difference features) are collected by 903, and 904 uses the classifier to obtain the similarity corresponding to each difference feature. Specifically, the difference features are input into the classifier to obtain a preset number of neighborhood voting results, and the similarity is obtained from these voting results. Similarity threshold (i.e. first threshold) detection and category threshold (i.e. second threshold) detection are performed on the similarities by 905 and 906 respectively.
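The operations of modules 902 and 904 can be sketched as follows, with vector subtraction for the difference features and a toy k-nearest-neighbour vote standing in for the classifier's neighborhood voting; the actual classifier is not specified by the embodiment, and the function names are illustrative:

```python
import numpy as np

def difference_features(query, registered):
    """Module 902 (vector subtraction variant): difference between the
    feature to be classified and each registered feature sample."""
    return registered - query  # shape (M*N, D)

def vote_similarity(diff, train_diffs, train_labels, k=3):
    """Module 904 sketch: similarity as the fraction of the k nearest
    training difference features carrying the positive label."""
    dists = np.linalg.norm(train_diffs - diff, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    return float(nearest.mean())
```

A difference feature that lands among training pairs labelled "same class" thus receives a similarity close to 1, and one that lands among "different class" pairs a similarity close to 0.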
Specifically, the similarities of each category are integrated to obtain a plurality of category similarities, which are compared with the first threshold; category similarities larger than the first threshold are determined as candidate similarities. When the number of candidate similarities is one, that candidate similarity is determined as the target similarity. When the number of candidate similarities is greater than one, the candidate similarities are sorted in descending order to obtain a similarity sequence; similarities larger than the second threshold are determined as legal similarities, the legal similarity count corresponding to each candidate similarity is counted, the similarity sequence is adjusted in descending order of the legal similarity counts, and the first candidate similarity in the sequence is determined as the target similarity. After this detection, 907 outputs the classification result, i.e. the category of the image to be classified is determined as the target registration category corresponding to the target similarity.
It should be noted that the classifier of the mobile end may be trained and updated by the data updating and encrypting unit in the cloud. Specifically, during initial training, a plurality of training sample features corresponding to each registration category can be acquired; positive sample data pairs are formed from any two training sample features of the same category and given positive labels, negative sample data pairs are formed from any two training sample features of different categories and given negative labels, and the initial classifier is trained with the positive and negative sample data pairs to obtain the classifier. Alternatively, the initial training can be carried out in the cloud: the training sample features are sent to the cloud, the classifier parameters sent back after training are acquired, and the initial classifier is configured with them to obtain the classifier. Further, for update training, the registration sample features can be sent to the cloud so that the cloud integrates them with the training sample features before classifier training; the classifier parameters sent by the cloud are then acquired and used to update the classifier. Meanwhile, to ensure the privacy of the registration sample features, a deletion instruction can be sent to the cloud after the classifier is updated, so that the cloud deletes the acquired registration sample features.
The image classification device provided in the embodiment of the present invention is described below, and the image classification device described below and the image classification method described above may be referred to correspondingly.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image classification device according to an embodiment of the present invention, including:
the feature extraction module 410 is configured to obtain an image to be classified, and perform feature extraction processing on the image to be classified to obtain features to be classified;
the difference feature obtaining module 420 is configured to determine a plurality of registration sample features corresponding to a plurality of registration categories, and perform feature calculation by using each registration sample feature and a feature to be classified, so as to obtain a plurality of difference features; each registration category corresponds to at least one registration sample feature;
the similarity determining module 430 is configured to perform matching processing on the difference features by using a classifier, so as to obtain a similarity corresponding to each difference feature;
the classification module 440 is configured to determine a target similarity by using the similarity, and determine a class of the image to be classified as a target registration class corresponding to the target similarity.
Optionally, the similarity determination module 430 includes:
the voting result statistics unit is used for inputting the difference characteristics into the classifier to obtain a preset number of neighborhood voting results;
And the similarity acquisition unit is used for acquiring the similarity according to the neighborhood voting result.
Optionally, the classification module 440 includes:
the integration processing unit is used for carrying out integration processing on each similarity to obtain a plurality of category similarities;
the candidate similarity determining unit is used for comparing the similarity of each category with a first threshold value and determining the category similarity larger than the first threshold value as the candidate similarity;
a first determining unit configured to determine the candidate similarity as a target similarity when the number of candidate similarities is one;
the sorting unit is used for sorting the candidate similarity according to the sequence from big to small when the number of the candidate similarity is more than one, so as to obtain a similarity sequence;
the legal similarity statistics unit is used for determining the similarity larger than a second threshold value as legal similarity and counting the legal similarity quantity corresponding to each candidate similarity;
and the second determining unit is used for adjusting the similarity sequences according to the sequence from the large to the small of the legal similarity number and determining the first candidate similarity in the similarity sequences as the target similarity.
Optionally, the integrated processing unit comprises:
the first calculating subunit is used for calculating an average value by using the similarity corresponding to the same registration category to obtain the category similarity corresponding to the registration category;
or, alternatively,
the weight determining subunit is used for determining the similarity weight corresponding to each similarity;
and the second calculating subunit is used for carrying out weighted average calculation by using the similarity corresponding to the same registration category based on the similarity weight to obtain the category similarity.
Optionally, the feature extraction module 410 includes:
the quality parameter statistics module is used for acquiring an original image and counting the quality parameters of the original image;
the evaluation score calculation module is used for acquiring a weight coefficient when the quality parameter is in a preset interval and calculating an evaluation score corresponding to the original image according to the weight coefficient;
the preprocessing module is used for preprocessing the original image to obtain an image to be classified when the evaluation score is larger than a preset evaluation threshold value; the preprocessing is any one or the combination of any several of median filtering, mean filtering, histogram equalization, bilateral filtering and Gaussian filtering.
Optionally, the method further comprises:
the training sample feature acquisition module is used for acquiring a plurality of training sample features corresponding to each registration category;
the positive sample data pair acquisition module is used for forming positive sample data pairs by utilizing any two training sample characteristics in the same category and carrying out positive label processing on the positive sample data pairs;
The negative sample data pair acquisition module is used for forming a negative sample data pair by utilizing any two training sample characteristics of different categories and carrying out negative label processing on the negative sample data pair;
and the classifier training module is used for training the initial classifier by utilizing the positive sample data pair and the negative sample data pair to obtain the classifier.
Optionally, the method further comprises:
the sending module is used for sending the registered sample characteristics to the cloud end so that the cloud end can integrate the registered sample characteristics with the training sample characteristics and then train the classifier;
and the updating module is used for acquiring classifier parameters sent by the cloud and updating the classifier by utilizing the classifier parameters.
With the image classification device provided in the embodiment of the present invention, after feature extraction is performed on the image to be classified, a second layer of feature extraction, i.e. feature calculation, is performed using the features to be classified and the registration sample features. Compared with the features to be classified alone, the resulting difference features better reflect the characteristics of the image to be classified, so classifying according to them improves classification accuracy. In addition, whereas the classification accuracy of traditional classification methods is poor, obtaining the similarity of each difference feature and determining the target similarity among them further improves accuracy, since the most likely category can be selected from the categories corresponding to the similarities as the category of the image to be classified. By performing the second-layer feature extraction and determining the category of the image to be classified using the similarities, classification accuracy can be improved and the problem of low classification accuracy in the related art is solved.
The image classification apparatus provided in the embodiments of the present invention will be described below, and the image classification apparatus described below and the image classification method described above may be referred to correspondingly to each other.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image classification apparatus according to an embodiment of the present invention. Wherein the image classification device 500 may include a processor 501 and a memory 502, and may further include one or more of a multimedia component 503, an information input/information output (I/O) interface 504, and a communication component 505.
Wherein the processor 501 is configured to control the overall operation of the image classification apparatus 500 to perform all or part of the steps in the image classification method described above; the memory 502 is used to store various types of data to support operation at the image classification device 500, which may include, for example, instructions for any application or method operating on the image classification device 500, as well as application-related data. The Memory 502 may be implemented by any type or combination of volatile or non-volatile Memory devices, such as one or more of static random access Memory (Static Random Access Memory, SRAM), electrically erasable programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable programmable Read-Only Memory (Erasable Programmable Read-Only Memory, EPROM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
The multimedia component 503 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 502 or transmitted through the communication component 505. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 504 provides an interface between the processor 501 and other interface modules, which may be a keyboard, a mouse, buttons, etc.; these buttons may be virtual or physical. The communication component 505 is used for wired or wireless communication between the image classification device 500 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC for short), 2G, 3G or 4G, or a combination of one or more thereof; the corresponding communication component 505 may thus include a Wi-Fi part, a Bluetooth part and an NFC part.
The image classification device 500 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated ASIC), digital signal processor (Digital Signal Processor, abbreviated DSP), digital signal processing device (Digital Signal Processing Device, abbreviated DSPD), programmable logic device (Programmable Logic Device, abbreviated PLD), field programmable gate array (Field Programmable Gate Array, abbreviated FPGA), controller, microcontroller, microprocessor, or other electronic components for performing the image classification method as set forth in the above embodiments.
The following describes a computer-readable storage medium provided in an embodiment of the present invention; the computer-readable storage medium described below and the image classification method described above may be cross-referenced with each other.
The invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the image classification method described above.
The computer-readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method section.
Those skilled in the art will further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "include", "comprise", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus.
The image classification method, apparatus, device, and computer-readable storage medium provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementation of the invention, and the description of these examples is intended only to help in understanding the method and its core idea. Meanwhile, since those skilled in the art may make variations to the specific embodiments and the scope of application in accordance with the ideas of the present invention, the content of this description should not be construed as limiting the present invention.

Claims (10)

1. An image classification method, comprising:
acquiring an image to be classified, and performing feature extraction processing on the image to be classified to obtain a feature to be classified;
determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation by using each registration sample feature and the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
matching the difference features by using a classifier to obtain the similarity corresponding to each difference feature;
and determining a target similarity by using the similarities, and determining the category of the image to be classified as the target registration category corresponding to the target similarity.
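By way of illustration only, the pipeline of claim 1 might be sketched as follows. This is a hypothetical sketch, not the claimed implementation: the absolute-difference operator, the callable classifier, and the acceptance threshold are all assumptions made for the example.

```python
import numpy as np

def classify(image_feat, registry, classifier, threshold=0.5):
    """Illustrative sketch: compare the feature of the image to be classified
    against every registered sample feature, score each difference feature
    with a classifier, and return the best-matching registration category."""
    best_class, best_sim = None, -1.0
    for category, sample_feats in registry.items():
        for reg_feat in sample_feats:
            diff = np.abs(image_feat - reg_feat)   # difference feature
            sim = classifier(diff)                 # similarity score
            if sim > best_sim:
                best_class, best_sim = category, sim
    # accept the best class only if its similarity clears the threshold
    return (best_class, best_sim) if best_sim > threshold else (None, best_sim)
```

A registry mapping each registration category to one or more sample features, together with any similarity-scoring callable, is enough to exercise the sketch.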
2. The image classification method according to claim 1, wherein the matching processing of the difference features by using a classifier to obtain the similarity corresponding to each difference feature includes:
inputting the difference features into the classifier to obtain a preset number of neighborhood voting results;
and obtaining the similarity according to the neighborhood voting results.
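The neighborhood voting of claim 2 can be illustrated with a k-nearest-neighbour sketch. This is an assumption about how such voting could work (Euclidean distance, majority fraction as the similarity), not the patented method itself.

```python
import numpy as np

def knn_similarity(diff_feat, train_diffs, train_labels, k=5):
    """Illustrative k-NN voting: the similarity of a difference feature is
    the fraction of its k nearest training difference features whose label
    is 1 ('same class')."""
    dists = np.linalg.norm(train_diffs - diff_feat, axis=1)
    nearest = np.argsort(dists)[:k]     # indices of the k neighbourhood votes
    votes = train_labels[nearest]
    return float(votes.mean())          # share of positive votes in [0, 1]
```

A small difference feature (features nearly identical) then collects positive votes, while a large one collects negative votes.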
3. The image classification method according to claim 1, wherein said determining a target similarity using said similarity comprises:
integrating the similarities to obtain a plurality of category similarities;
comparing each category similarity with a first threshold, and determining each category similarity greater than the first threshold as a candidate similarity;
when the number of candidate similarities is one, determining that candidate similarity as the target similarity;
when the number of candidate similarities is greater than one, sorting the candidate similarities in descending order to obtain a similarity sequence;
determining each similarity greater than a second threshold as a legal similarity, and counting the number of legal similarities corresponding to each candidate similarity;
and adjusting the similarity sequence in descending order of the legal similarity count, and determining the first candidate similarity in the similarity sequence as the target similarity.
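The two-threshold selection of claim 3 might be sketched as follows. The threshold values `t1` and `t2` are placeholders chosen for the example; the claim itself does not fix them.

```python
def pick_target(class_sims, sims_per_class, t1=0.6, t2=0.8):
    """Illustrative selection: category similarities above t1 become
    candidates; with several candidates, rank first by the count of
    individual similarities above t2 (the 'legal' similarities), breaking
    ties by the category similarity itself."""
    candidates = [c for c, s in class_sims.items() if s > t1]
    if not candidates:
        return None                     # no category passes the first threshold
    if len(candidates) == 1:
        return candidates[0]
    legal = {c: sum(1 for s in sims_per_class[c] if s > t2) for c in candidates}
    return max(candidates, key=lambda c: (legal[c], class_sims[c]))
```

Note that a category with more high-confidence individual matches can outrank one whose average similarity is slightly higher, which is the point of the second threshold.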
4. The image classification method according to claim 3, wherein said integrating the similarities to obtain a plurality of category similarities comprises:
calculating an average value of the similarities corresponding to the same registration category to obtain the category similarity corresponding to that registration category;
or,
determining a similarity weight corresponding to each similarity;
and, based on the similarity weights, performing a weighted average of the similarities corresponding to the same registration category to obtain the category similarity.
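The two aggregation alternatives of claim 4 (plain average versus weighted average) reduce to a few lines; the sketch below is illustrative only.

```python
def class_similarity(sims, weights=None):
    """Category similarity for one registration category: a plain average of
    its similarities, or, if weights are supplied, a weighted average."""
    if weights is None:
        return sum(sims) / len(sims)
    total = sum(weights)
    return sum(s * w for s, w in zip(sims, weights)) / total
```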
5. The image classification method according to claim 1, wherein the acquiring the image to be classified includes:
acquiring an original image and counting quality parameters of the original image;
when the quality parameter is in a preset interval, acquiring a weight coefficient, and calculating an evaluation score corresponding to the original image according to the weight coefficient;
when the evaluation score is greater than a preset evaluation threshold, preprocessing the original image to obtain the image to be classified; the preprocessing is any one of, or a combination of any several of, median filtering, mean filtering, histogram equalization, bilateral filtering, and Gaussian filtering.
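The quality gate of claim 5 might look like the sketch below. The quality parameters (brightness and contrast), their weights, the score threshold, and the choice of a 3x3 median filter (one of the listed preprocessing options) are all assumptions for illustration.

```python
import numpy as np

def preprocess_if_good(img, w_brightness=0.5, w_contrast=0.5, score_thresh=0.4):
    """Illustrative quality gate: score the raw image by a weighted sum of
    assumed quality parameters, and only preprocess (here, a 3x3 median
    filter) when the evaluation score passes the threshold."""
    brightness = img.mean() / 255.0
    contrast = img.std() / 128.0
    score = w_brightness * brightness + w_contrast * contrast
    if score <= score_thresh:
        return None                      # image rejected by the quality gate
    # 3x3 median filter over the interior (edges left untouched for brevity)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out
```

In practice a library routine such as OpenCV's median blur would replace the explicit loops; they are spelled out here only to show the operation.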
6. The image classification method according to claim 1, characterized by further comprising, before the matching process of the difference feature with a classifier:
acquiring a plurality of training sample characteristics corresponding to each registration category;
forming a positive sample data pair from any two training sample features of the same category, and applying a positive label to the positive sample data pair;
forming a negative sample data pair from any two training sample features of different categories, and applying a negative label to the negative sample data pair;
and training an initial classifier by using the positive sample data pair and the negative sample data pair to obtain the classifier.
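The pair construction of claim 6 can be sketched directly; labelling conventions (1 for positive, 0 for negative) are an assumption of the example.

```python
from itertools import combinations

def build_pairs(features_by_class):
    """Illustrative pair construction: any two features of the same class
    form a positive pair (label 1); any two features of different classes
    form a negative pair (label 0)."""
    pairs = []
    classes = list(features_by_class)
    for c in classes:
        for a, b in combinations(features_by_class[c], 2):
            pairs.append(((a, b), 1))           # positive sample data pair
    for ca, cb in combinations(classes, 2):
        for a in features_by_class[ca]:
            for b in features_by_class[cb]:
                pairs.append(((a, b), 0))       # negative sample data pair
    return pairs
```

The resulting labelled pairs are exactly the training set a pairwise classifier (as in claim 2's voting scheme) would be fitted on.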
7. The image classification method according to claim 1, characterized by further comprising:
sending the registration sample features to a cloud, so that the cloud integrates the registration sample features with training sample features and then performs classifier training;
and acquiring classifier parameters sent by the cloud, and updating the classifier by utilizing the classifier parameters.
8. An image classification apparatus, comprising:
the feature extraction module is used for acquiring an image to be classified, and carrying out feature extraction processing on the image to be classified to obtain features to be classified;
the difference feature acquisition module is used for determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation by using each registration sample feature and the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
the similarity determining module is used for performing matching processing on the difference features by using a classifier to obtain the similarity corresponding to each difference feature;
and the classification module is used for determining target similarity by utilizing the similarity, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
9. An image classification device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor for executing the computer program to implement the image classification method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the image classification method according to any one of claims 1 to 7.
CN202010476801.0A 2020-05-29 2020-05-29 Image classification method, device, equipment and readable storage medium Active CN111626371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010476801.0A CN111626371B (en) 2020-05-29 2020-05-29 Image classification method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010476801.0A CN111626371B (en) 2020-05-29 2020-05-29 Image classification method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111626371A CN111626371A (en) 2020-09-04
CN111626371B true CN111626371B (en) 2023-10-31

Family

ID=72271956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010476801.0A Active CN111626371B (en) 2020-05-29 2020-05-29 Image classification method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111626371B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200011B (en) * 2020-09-15 2023-08-18 深圳市水务科技有限公司 Aeration tank state detection method, system, electronic equipment and storage medium
CN112148907A (en) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 Image database updating method and device, electronic equipment and medium
CN112614109B (en) * 2020-12-24 2024-06-07 四川云从天府人工智能科技有限公司 Image quality evaluation method, apparatus and computer readable storage medium
CN112668488A (en) * 2020-12-30 2021-04-16 湖北工程学院 Method and system for automatically identifying seeds and electronic equipment
CN112463972B (en) * 2021-01-28 2021-05-18 成都数联铭品科技有限公司 Text sample classification method based on class imbalance
CN112966724B (en) * 2021-02-07 2024-04-09 惠州市博实结科技有限公司 Method and device for classifying image single categories
CN113178248A (en) * 2021-04-28 2021-07-27 联仁健康医疗大数据科技股份有限公司 Medical image database establishing method, device, equipment and storage medium
CN113963197A (en) * 2021-09-29 2022-01-21 北京百度网讯科技有限公司 Image recognition method and device, electronic equipment and readable storage medium
CN114064738B (en) * 2022-01-14 2022-04-29 杭州捷配信息科技有限公司 Electronic component substitute material searching method and device and application
CN114972883B (en) * 2022-06-17 2024-05-10 平安科技(深圳)有限公司 Target detection sample generation method based on artificial intelligence and related equipment


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017024963A1 (en) * 2015-08-11 2017-02-16 阿里巴巴集团控股有限公司 Image recognition method, measure learning method and image source recognition method and device
CN106446754A (en) * 2015-08-11 2017-02-22 阿里巴巴集团控股有限公司 Image identification method, metric learning method, image source identification method and devices
TW201828156A (en) * 2017-01-19 2018-08-01 阿里巴巴集團服務有限公司 Image identification method, measurement learning method, and image source identification method and device capable of effectively dealing with the problem of asymmetric object image identification so as to possess better robustness and higher accuracy
CN108156519A (en) * 2017-12-25 2018-06-12 深圳Tcl新技术有限公司 Image classification method, television equipment and computer readable storage medium
CN108985159A (en) * 2018-06-08 2018-12-11 平安科技(深圳)有限公司 Human-eye model training method, eye recognition method, apparatus, equipment and medium
CN109522942A (en) * 2018-10-29 2019-03-26 中国科学院深圳先进技术研究院 A kind of image classification method, device, terminal device and storage medium
CN111191067A (en) * 2019-12-25 2020-05-22 深圳市优必选科技股份有限公司 Picture book identification method, terminal device and computer readable storage medium

Also Published As

Publication number Publication date
CN111626371A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111626371B (en) Image classification method, device, equipment and readable storage medium
US10402627B2 (en) Method and apparatus for determining identity identifier of face in face image, and terminal
US11527055B2 (en) Feature density object classification, systems and methods
CN109697416B (en) Video data processing method and related device
WO2020244071A1 (en) Neural network-based gesture recognition method and apparatus, storage medium, and device
CN106372624B (en) Face recognition method and system
CN107423306B (en) Image retrieval method and device
JP7089045B2 (en) Media processing methods, related equipment and computer programs
Chokkadi et al. A Study on various state of the art of the Art Face Recognition System using Deep Learning Techniques
US20190205589A1 (en) Latent fingerprint ridge flow map improvement
US11132577B2 (en) System and a method for efficient image recognition
CN113807237B (en) Training of in vivo detection model, in vivo detection method, computer device, and medium
CN111626240A (en) Face image recognition method, device and equipment and readable storage medium
CN109711287B (en) Face acquisition method and related product
CN111079757A (en) Clothing attribute identification method and device and electronic equipment
JP2016071800A (en) Information processing device, information processing method, and program
CN115457595A (en) Method for associating human face with human body, electronic device and storage medium
CN113326829B (en) Method and device for recognizing gesture in video, readable storage medium and electronic equipment
JP2018036870A (en) Image processing device, and program
CN107729834B (en) Rapid iris detection method based on differential block characteristics
CN105760881A (en) Facial modeling detection method based on Haar classifier method
CN111353353A (en) Cross-posture face recognition method and device
CN112949363A (en) Face living body identification method and device
CN111382703A (en) Finger vein identification method based on secondary screening and score fusion
CN117173764A (en) Image recognition method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant