CN111626371A - Image classification method, device and equipment and readable storage medium - Google Patents

Image classification method, device and equipment and readable storage medium

Info

Publication number
CN111626371A
CN111626371A (application CN202010476801.0A; granted as CN111626371B)
Authority
CN
China
Prior art keywords
similarity
image
registration
classified
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010476801.0A
Other languages
Chinese (zh)
Other versions
CN111626371B (en)
Inventor
白雨辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202010476801.0A priority Critical patent/CN111626371B/en
Publication of CN111626371A publication Critical patent/CN111626371A/en
Application granted granted Critical
Publication of CN111626371B publication Critical patent/CN111626371B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/214 — Pattern recognition; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06V10/40 — Image or video recognition or understanding; extraction of image or video features


Abstract

The invention discloses an image classification method comprising the following steps: acquiring an image to be classified and performing feature extraction on it to obtain a feature to be classified; determining a plurality of registration sample features corresponding to a plurality of registration categories (each registration category corresponding to at least one registration sample feature), and performing feature calculation with each registration sample feature and the feature to be classified to obtain a plurality of difference features; matching the difference features with a classifier to obtain the similarity corresponding to each difference feature; and determining a target similarity from the similarities and determining the category of the image to be classified as the target registration category corresponding to that target similarity. By performing this second layer of feature extraction and selecting the most likely of the several candidate categories as the category of the image to be classified, the method improves classification accuracy. The invention also provides an image classification apparatus, an image classification device, and a computer-readable storage medium with the same beneficial effects.

Description

Image classification method, device and equipment and readable storage medium
Technical Field
The present invention relates to the field of image classification technologies, and in particular, to an image classification method, an image classification device, an image classification apparatus, and a computer-readable storage medium.
Background
Feature extraction is a concept in computer vision and image processing: a computer extracts image information and decides whether each image point belongs to an image feature. Common feature extraction algorithms fall into two broad families: extraction based on matrices or feature descriptors, and extraction based on deep learning. Because the traditional descriptor-based classification method has a low memory footprint, the related art often adopts the traditional method for feature extraction when classifying images; however, the feature recognition rate obtained by the traditional method is low and the accuracy of the traditional classification step is mediocre, so the classification accuracy of the related art is low.
Therefore, how to solve the problem of low classification accuracy in the related art is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides an image classification method, an image classification device, an image classification apparatus, and a computer-readable storage medium, which solve the problem of low classification accuracy in the related art.
In order to solve the above technical problem, the present invention provides an image classification method, including:
acquiring an image to be classified, and performing feature extraction processing on the image to be classified to obtain a feature to be classified;
determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation with each registration sample feature and the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
matching the difference features by using a classifier to obtain the similarity corresponding to each difference feature;
and determining a target similarity by using the similarities, and determining the category of the image to be classified as the target registration category corresponding to the target similarity.
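The four claimed steps can be sketched as follows. This is an illustrative outline only: the patent does not prescribe concrete data structures, and `extract_features`, the element-wise subtraction used as the difference operator, and the classifier callable are all placeholders.

```python
# Illustrative sketch of the four claimed steps; all names are hypothetical.

def classify_image(image, registry, classifier, extract_features):
    """registry maps each registration category to a list of
    registration sample features (plain feature vectors)."""
    feat = extract_features(image)                        # step 1: feature extraction
    similarities = {}                                     # category -> list of similarities
    for category, samples in registry.items():
        for sample in samples:
            # step 2: difference feature (here: element-wise subtraction)
            diff = [a - b for a, b in zip(sample, feat)]
            # step 3: classifier matching yields a similarity per difference feature
            sim = classifier(diff)
            similarities.setdefault(category, []).append(sim)
    # step 4: integrate per category (plain mean here) and pick the best one
    class_sims = {c: sum(s) / len(s) for c, s in similarities.items()}
    target = max(class_sims, key=class_sims.get)
    return target, class_sims[target]
```

A toy run with identity feature extraction and a distance-based classifier shows the flow: the category whose registration sample is closest to the input wins.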
Optionally, the matching the difference features by using a classifier to obtain the similarity corresponding to each difference feature includes:
inputting the difference features into the classifier to obtain a preset number of neighborhood voting results;
and obtaining the similarity according to the neighborhood voting result.
Optionally, the determining the target similarity by using the similarity includes:
integrating the similarity to obtain a plurality of category similarities;
comparing each class similarity with a first threshold, and determining the class similarity larger than the first threshold as a candidate similarity;
when the number of the candidate similarity is one, determining the candidate similarity as the target similarity;
when the number of the candidate similarities is more than one, sequencing the candidate similarities according to a descending order to obtain a similarity sequence;
determining the similarities greater than a second threshold as legal similarities, and counting the number of legal similarities corresponding to each candidate similarity;
and reordering the similarity sequence in descending order of the number of legal similarities, and determining the first candidate similarity in the sequence as the target similarity.
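The candidate-selection logic of this optional claim can be sketched as follows. This is one hypothetical reading: it assumes the "legal similarity" count for a candidate category is tallied from that category's raw per-sample similarities, and that ties in the count fall back to the category similarity itself; thresholds and data layout are illustrative.

```python
def pick_target_similarity(class_sims, raw_sims_per_category,
                           first_threshold, second_threshold):
    """class_sims: {category: category similarity};
    raw_sims_per_category: {category: [per-sample similarities]}.
    Returns (category, category similarity), or None if no candidate."""
    # keep only category similarities above the first threshold
    candidates = {c: s for c, s in class_sims.items() if s > first_threshold}
    if not candidates:
        return None
    if len(candidates) == 1:
        # exactly one candidate: it is the target similarity
        c, s = next(iter(candidates.items()))
        return c, s
    # several candidates: count per-category "legal" similarities
    # (raw similarities above the second threshold) and rank by that
    # count first, then by the category similarity itself
    def legal_count(c):
        return sum(1 for s in raw_sims_per_category[c] if s > second_threshold)
    order = sorted(candidates,
                   key=lambda c: (legal_count(c), candidates[c]),
                   reverse=True)
    best = order[0]
    return best, candidates[best]
```

Note how the second threshold can reorder the sequence: a category with a slightly lower category similarity but more high-confidence sample matches can still win.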
Optionally, the integrating the similarity to obtain the similarity of multiple categories includes:
calculating an average value by using the similarity corresponding to the same registration category to obtain the category similarity corresponding to the registration category;
or, alternatively,
determining similarity weights corresponding to the similarities;
and based on the similarity weight, carrying out weighted average calculation by using the similarity corresponding to the same registration category to obtain the category similarity.
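A minimal sketch of the two integration alternatives follows: plain mean versus weighted mean. Dividing by the weight sum is one common convention for a weighted average; the claim does not spell the formula out.

```python
def category_similarity(sims, weights=None):
    """Integrate the per-sample similarities of one registration
    category into a single category similarity: plain mean, or
    weighted mean when per-similarity weights (e.g. in [0, 1]) are
    supplied."""
    if weights is None:
        return sum(sims) / len(sims)
    # weighted average, normalized by the sum of the weights
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)
```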
Optionally, the acquiring an image to be classified includes:
acquiring an original image, and counting quality parameters of the original image;
when the quality parameters are in a preset interval, obtaining a weight coefficient, and calculating an evaluation score corresponding to the original image according to the weight coefficient;
when the evaluation score is greater than a preset evaluation threshold, preprocessing the original image to obtain the image to be classified; the preprocessing is any one or a combination of median filtering, mean filtering, histogram equalization, bilateral filtering and Gaussian filtering.
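The acquisition step might look like the sketch below. The choice of quality parameters (mean brightness and contrast), the scoring formula, the thresholds, and the use of a 3×3 median filter as the preprocessing are all illustrative assumptions; the claim only names the parameter/score gating and lists the admissible filters.

```python
from statistics import mean, pstdev, median

def acquire_image_to_classified(raw, weights=(0.5, 0.5),
                                interval=(0.0, 255.0), score_threshold=10.0):
    """raw: grayscale image as a list of rows of pixel values.
    Returns the preprocessed image to be classified, or None when the
    quality gate rejects the original image. All parameters here are
    hypothetical stand-ins for whatever an implementation measures."""
    pixels = [p for row in raw for p in row]
    brightness = mean(pixels)       # quality parameter 1 (illustrative)
    contrast = pstdev(pixels)       # quality parameter 2 (illustrative)
    if not (interval[0] <= brightness <= interval[1]):
        return None                 # quality parameter outside preset interval
    # evaluation score as a weighted combination of the quality parameters
    score = weights[0] * brightness / 255.0 * 100 + weights[1] * contrast
    if score <= score_threshold:
        return None                 # evaluation score too low
    return median_filter3(raw)      # one of the claimed preprocessing filters

def median_filter3(img):
    """3x3 median filter; border pixels are copied through unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out
```

The median filter removes the impulse-noise pixel in the toy image below while leaving the flat background intact.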
Optionally, before the matching processing of the difference features by using the classifier, the method further includes:
obtaining a plurality of training sample features corresponding to each registration category;
forming a positive sample data pair from any two training sample features of the same category, and applying a positive label to the positive sample data pair;
forming a negative sample data pair from any two training sample features of different categories, and applying a negative label to the negative sample data pair;
and training an initial classifier by using the positive sample data pair and the negative sample data pair to obtain the classifier.
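The pair-construction step can be sketched directly; feature vectors and category names are placeholders:

```python
from itertools import combinations

def build_training_pairs(features_by_category):
    """features_by_category: {category: [training sample features]}.
    Returns ((feature_a, feature_b), label) tuples: label 1 for a
    same-category (positive) pair, 0 for a cross-category (negative)
    pair."""
    pairs = []
    cats = list(features_by_category)
    for c in cats:                                       # positive pairs
        for a, b in combinations(features_by_category[c], 2):
            pairs.append(((a, b), 1))
    for c1, c2 in combinations(cats, 2):                 # negative pairs
        for a in features_by_category[c1]:
            for b in features_by_category[c2]:
                pairs.append(((a, b), 0))
    return pairs
```

The resulting labeled pairs are then fed to whatever initial classifier is being trained.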
Optionally, the method further comprises:
sending the registration sample features to a cloud, so that the cloud integrates the registration sample features with training sample features and then performs classifier training;
and obtaining the classifier parameters sent by the cloud, and updating the classifier by using the classifier parameters.
The present invention also provides an image classification apparatus, comprising:
the feature extraction module is used for acquiring an image to be classified and performing feature extraction processing on it to obtain a feature to be classified;
the difference feature acquisition module is used for determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation with each registration sample feature and the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
a similarity determining module, configured to perform matching processing on the difference features by using a classifier to obtain similarities corresponding to the difference features;
and the classification module is used for determining the target similarity by utilizing the similarity and determining the class of the image to be classified as the target registration class corresponding to the target similarity.
The present invention also provides an image classification apparatus comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the image classification method.
The invention also provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the image classification method described above.
The image classification method provided by the invention comprises the steps of obtaining an image to be classified, and carrying out feature extraction processing on the image to be classified to obtain features to be classified; determining a plurality of registration sample characteristics corresponding to a plurality of registration categories, and respectively utilizing each registration sample characteristic and the characteristic to be classified to perform characteristic calculation to obtain a plurality of difference characteristics; each registration category corresponds to at least one registration sample characteristic respectively; matching the difference features by using a classifier to obtain the similarity corresponding to each difference feature; and determining the target similarity by using the similarity, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
Therefore, after feature extraction is performed on the image to be classified, a second layer of feature extraction is performed using the feature to be classified and the registration sample features: feature calculation yields the difference features, which reflect the characteristics of the image to be classified better than the feature to be classified itself, so classifying by the difference features improves classification accuracy. In addition, whereas the accuracy of the traditional classification method is poor, obtaining the similarity of each difference feature and determining the target similarity from those similarities allows the most probable category among the candidate registration categories to be selected as the category of the image to be classified, further improving accuracy. By performing the second-layer feature extraction and determining the category of the image to be classified using the similarities, classification accuracy can be improved, solving the problem of low classification accuracy in the related art.
In addition, the invention also provides an image classification apparatus, an image classification device and a computer-readable storage medium, which provide the same beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of an image classification method according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific target similarity determination method according to an embodiment of the present invention;
fig. 3 is a flowchart of a specific image obtaining method to be classified according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image classification apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image classification device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a face recognition system according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a detecting unit according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a binocular detecting module according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an identification unit according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from the given embodiments without creative effort shall fall within the protection scope of the present invention.
Specifically, in a possible implementation manner, please refer to fig. 1, and fig. 1 is a flowchart of an image classification method according to an embodiment of the present invention. The method comprises the following steps:
s101: and acquiring an image to be classified, and performing feature extraction processing on the image to be classified to obtain features to be classified.
In this embodiment, a device that performs all or part of the steps of the image classification method may be referred to as the present device or the image classification device. The specific type of device is not limited: it may be a mobile terminal, such as a mobile phone or another handheld terminal, or a non-mobile terminal, such as a cloud terminal.
Specifically, the image to be classified is the image that is to be classified, such as a face image or a plant image. It can be a directly acquired image, i.e. one captured by a camera or similar image acquisition device, or a preprocessed image, i.e. an original image obtained by the image acquisition device and then preprocessed (the specific content of the preprocessing is not limited). The image to be classified is obtained after the preprocessing is finished, and the preprocessing may be performed by the present device or by another device or terminal.
The embodiment does not limit the specific extraction method used for feature extraction. It may be a traditional descriptor-based method, such as LBP (Local Binary Pattern) feature extraction, i.e. extraction based on the LBP descriptor; extraction based on the Haar descriptor (Haar wavelet transform, a wavelet method proposed by Alfréd Haar in 1909); or extraction based on the HOG (Histogram of Oriented Gradients) descriptor. It may also be a feature extraction method based on deep learning. In particular, when the device is a mobile terminal, a traditional descriptor-based method may be used to improve operating efficiency and reduce memory usage. After the feature extraction processing is performed on the image to be classified, the feature to be classified is obtained; its specific form and content depend on the classification algorithm and the image to be classified, which this embodiment does not limit.
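As an illustration of the descriptor-based route, a bare-bones 3×3 LBP feature extractor might look like this. It is a minimal sketch: production LBP variants add uniform patterns, multi-scale neighbourhoods, and block-wise histograms.

```python
def lbp_histogram(img):
    """Minimal 3x3 LBP: each interior pixel yields an 8-bit code from
    comparing its eight neighbours with the centre pixel; the 256-bin
    histogram of codes serves as the image feature."""
    # clockwise neighbour offsets, one per bit of the LBP code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

On a perfectly flat patch every neighbour equals the centre, so every interior pixel produces code 255.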
S102: and determining a plurality of registration sample characteristics corresponding to a plurality of registration categories, and respectively utilizing each registration sample characteristic and the characteristic to be classified to perform characteristic calculation to obtain a plurality of difference characteristics.
The registration categories are the categories to which the image to be classified may belong; there are multiple registration categories, and it should be noted that each registration category corresponds to at least one registration sample feature. Registration sample features are generally features of images that the device classified successfully before the current classification, but they may also include features pre-stored locally or features of images sent by other devices or terminals. The number of registration sample features per registration category may be the same or may differ, as long as each registration category has enough of them. To protect their privacy, the registration sample features may be stored locally in encrypted form.
After the feature to be classified is obtained, the registration sample features can be determined so as to obtain the difference features. Specifically, a feature calculation is performed with each registration sample feature and the feature to be classified; the calculation may be any one or a combination of feature vector subtraction, feature vector averaging, and feature vector deviation, set according to actual needs. After the feature calculation is finished, a plurality of difference features is obtained, in one-to-one correspondence with the registration sample features; their specific content depends on the feature calculation method, which is not limited here.
Through feature calculation, the feature to be classified can be subjected to secondary feature extraction, so that the features of the feature to be classified can be reflected more fully, and the classification accuracy is improved.
S103: and matching the difference features by using the classifier to obtain the similarity corresponding to each difference feature.
The classifier is trained in advance and is used to match the difference features to obtain the similarity corresponding to each difference feature. Its specific form is not limited: it may be a classifier based on an SVM (Support Vector Machine), on cosine distance, on Euclidean distance, or on LDA (Linear Discriminant Analysis). It may also be a classifier using a deep learning method, for example one trained with Contrastive Loss, Triplet Loss, Center Loss, or A-Softmax Loss.
By matching the difference features with the classifier, the similarity corresponding to each difference feature can be calculated. The similarity represents how similar each registration sample feature is to the feature to be classified, and may be expressed, for example, as a score out of 100 or as a decimal fraction.
S104: and determining the target similarity by using the similarity, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
After the similarities are obtained, they are integrated to obtain the category similarities, and the target similarity is determined among the category similarities. It should be noted that a category similarity indicates how likely the feature to be classified is to belong to a given registration category, and the target similarity corresponds to the registration category to which the feature most probably belongs. Once the target similarity is determined, the corresponding target registration category is determined, the category of the image to be classified is set to that target registration category, and the classification process is complete.
By determining the target similarity by using the similarity, the secondary screening and determination can be performed on the classified results, and the class with the highest probability is selected from the classes (namely, the registered classes) to which the similarities belong as the class of the image to be classified, so that the classification accuracy is improved.
By applying the image classification method provided by the embodiment of the invention, after the features of the image to be classified are extracted, the features to be classified and the features of the registered sample are used for second-layer feature extraction, namely, feature calculation is carried out to obtain the difference features, the difference features can reflect the characteristics of the image to be classified better than the features to be classified, and the classification accuracy can be improved by classifying according to the difference features. In addition, the classification accuracy of the traditional classification method is poor, and the classification with the highest probability can be selected from the classes corresponding to the similarity as the class of the image to be classified by obtaining the similarity of each difference characteristic and determining the target similarity, so that the classification accuracy is further improved. By carrying out the processing of the second layer of feature extraction and determining the category of the image to be classified by utilizing the similarity, the classification accuracy can be improved, and the problem of lower classification accuracy in the related technology is solved.
Based on the foregoing embodiment, in a possible implementation manner, after obtaining the similarity, the target similarity may be determined according to the first threshold and the second threshold, so as to improve the classification accuracy of the image to be classified, please refer to fig. 2, where fig. 2 is a flowchart of a specific target similarity determining method provided in an embodiment of the present invention, and the method includes:
s201: and inputting the difference features into a classifier to obtain a preset number of neighborhood voting results.
In this embodiment, the classifier is trained in advance and may specifically be a KNN (k-Nearest Neighbors) classifier. After a difference feature is input into the classifier, the classifier votes in a preset number of different neighborhoods, producing the preset number of neighborhood voting results. The preset number can be set according to actual needs, for example to 100 or to 50. Specifically, when the vote on the difference feature in a given neighborhood finds the same class, the neighborhood voting result may be output as 1; when it finds a different class, the result may be output as 0. After each difference feature is input into the classifier, the preset number of neighborhood voting results is obtained.
S202: and obtaining the similarity according to the neighborhood voting result.
After the neighborhood voting results are obtained, they are counted according to a similarity statistics method to obtain the similarity. This embodiment does not limit the specific statistics method. For example, when there are 100 neighborhood voting results, the number of results equal to 1 (say 90) may be counted and used directly as the similarity of the difference feature, i.e. the similarity is 90. Or, when there are 50 voting results, the number of 1-results (say 40) may be counted, multiplied by a weight coefficient of 2 and divided by 100, giving a similarity of 40 × 2 / 100 = 0.8. The similarity corresponding to each difference feature can be obtained by the method of steps S201 and S202.
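The two counting examples can be reproduced with a small helper. It assumes, as in the text, that 100 is the reference vote count and that shorter vote lists are rescaled by a weight coefficient; both conventions are taken directly from the worked numbers above.

```python
def similarity_from_votes(votes, scale=100):
    """votes: the preset number of 0/1 neighborhood voting results
    from the KNN classifier. With exactly `scale` votes the raw count
    of 1-votes is the similarity; otherwise the count is rescaled
    (e.g. 40 ones out of 50 votes -> 40 * 2 / 100 = 0.8)."""
    ones = sum(votes)
    if len(votes) == scale:
        return ones                          # e.g. 90 out of 100 -> 90
    return ones * (scale / len(votes)) / scale
```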
S203: and integrating the similarity to obtain the similarity of a plurality of categories.
Since the similarity respectively corresponds to the difference features, and the difference features respectively correspond to a certain registration category, in order to determine which registration category the image to be classified more likely belongs to, the similarity needs to be integrated so as to obtain the similarity between each registration category and the feature to be classified, that is, the category similarity. The category similarity represents the overall similarity degree of the features to be classified and the registered sample features in a certain registered category, and also represents the possible degree of the images to be classified belonging to a certain registered category. The number of category similarities is the same as the number of registered categories, and may be M, for example.
Specifically, there are various methods for the integration treatment, which can be selected as needed. In a possible embodiment, in order to ensure the speed of the integration process and obtain the category similarity as soon as possible, the step S203 may include:
s2031: and calculating the average value by using the similarity corresponding to the same registration category to obtain the category similarity corresponding to the registration category.
In this embodiment, the average value of the similarity degrees corresponding to the same registration category may be calculated, so that the category similarity degree corresponding to the registration category may be obtained, and the category similarity degree may be obtained only through one calculation operation, thereby ensuring the speed of the integration processing.
In another embodiment, in order to improve the accuracy of the category similarity, a weighted average calculation method may be used to calculate the category similarity, and specifically, the step S203 may include:
s2032: and determining the similarity weight corresponding to each similarity.
Because some registration feature samples are of low quality, their corresponding similarities have low reference value. To calculate the category similarity more accurately, a similarity weight can be determined in advance for each similarity; the weight can be set as appropriate in practice, with values in the range [0, 1].
S2033: based on the similarity weight, the similarity corresponding to the same registration category is used for carrying out weighted average calculation to obtain the category similarity.
After the similarity weight is determined, the similarity of the same registration category is subjected to weighted average calculation based on the similarity weight, and the category similarity can be obtained. The method can calculate the category similarity more accurately so as to improve the identification accuracy.
S204: and comparing the similarity of each category with a first threshold, and determining the category similarity greater than the first threshold as a candidate similarity.
The first threshold is used for comparing with the category similarity, and then determining the candidate similarity, when the category similarity is larger than the first threshold, the image to be classified may belong to the registration category corresponding to the category similarity, and therefore the image to be classified is determined as the candidate similarity. When the class similarity is not greater than the first threshold, it indicates that the similarity between the feature to be classified and the feature of the registered sample in the registered class is low, so that the image to be classified is unlikely to belong to the registered class, and therefore the registered class is discarded. The specific size of the first threshold is not limited, and may be, for example, 90% of the maximum value of the category similarity.
S205: and when the number of the candidate similarity is one, determining the candidate similarity as the target similarity.
When the candidate similarity is obtained, if the number of the candidate similarities is only one, it is stated that the image to be classified only possibly belongs to the class of registration classes, and therefore the candidate similarity is determined as the target similarity, and subsequently the class of the image to be classified is determined as the target registration class corresponding to the target similarity.
S206: and when the number of the candidate similarities is more than one, sequencing the candidate similarities according to a descending order to obtain a similarity sequence.
When the number of the candidate similarities is more than one, it is indicated that the category to which the image to be classified possibly belongs has a plurality of candidates, and therefore, the candidate similarities are sequenced from large to small to obtain a similarity sequence, so that the similarity sequence is adjusted by using a second threshold, and the target similarity is determined in the adjusted similarity sequence.
S207: and determining the similarity greater than the second threshold as legal similarity, and counting the legal similarity quantity corresponding to each candidate similarity.
The second threshold is used to compare with the similarity in order to determine the legal similarity. Because the features to be classified and some registered sample features may only be similar in small parts and the main parts are not similar, the similarity between the features to be classified and some registered sample features is low, and it cannot be stated that the images to be classified are similar to the images corresponding to the registered sample features, so in the process of determining the similarity of the target, the target features need to be filtered, and the influence on the classification accuracy is avoided. Specifically, the second threshold is used for comparing with the similarity, the similarity larger than the second threshold is determined as legal similarity, and the number of legal similarities corresponding to the candidate similarities, that is, the number of legal similarities, is counted. The specific size of the second threshold is not limited in this embodiment, and may be set according to actual situations.
S208: and adjusting the similarity sequence according to the descending order of the legal similarity number, and determining the first candidate similarity in the similarity sequence as the target similarity.
After the legal similarity number corresponding to each candidate similarity is obtained, the similarity sequence is adjusted according to the sequence of the legal similarity numbers from large to small, namely the candidate similarities with large legal similarity number are ranked forwards, and when the legal similarity numbers of a plurality of candidate similarities are the same, the candidate similarities are ranked according to the sizes of the candidate similarities from large to small, so that the adjustment of the similarity sequence is completed. And determining the first candidate similarity in the adjusted similarity sequence as the target similarity, wherein the first candidate similarity is the candidate similarity ranked first in the similarity sequence.
Further, in order to reduce the length of the similarity sequence and reduce the time required for adjustment, the legal similarity number is obtained and compared with a third threshold, and if the legal similarity number corresponding to a certain candidate similarity is smaller than the third threshold, the candidate similarity can be removed from the similarity sequence, so that the length of the similarity sequence is reduced, and the time required for subsequent adjustment is further reduced. For example, when the number of similarities corresponding to each registration category is N, the third threshold may be determined to be N-1.
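Steps S204 to S208 can be sketched together as a single selection routine. This is a hedged illustration under the assumption that the raw per-sample similarities of each registration category are available alongside the integrated category similarities; all names are hypothetical:

```python
def pick_target_similarity(category_sims, raw_sims, first_threshold, second_threshold):
    """Select the target similarity per steps S204-S208.

    category_sims: {category: category similarity} (after integration, S203)
    raw_sims:      {category: list of per-sample similarities for that category}
    Returns (target_category, target_similarity), or None if nothing passes."""
    # S204: category similarities above the first threshold become candidates
    candidates = {c: s for c, s in category_sims.items() if s > first_threshold}
    if not candidates:
        return None
    # S205: a lone candidate is the target directly
    if len(candidates) == 1:
        return next(iter(candidates.items()))
    # S207: count "legal" similarities, i.e. raw similarities above the second threshold
    legal_count = {c: sum(1 for s in raw_sims[c] if s > second_threshold)
                   for c in candidates}
    # S206 + S208: sort by legal count (descending), ties broken by the
    # candidate similarity itself; the head of the adjusted sequence wins
    ordered = sorted(candidates,
                     key=lambda c: (legal_count[c], candidates[c]),
                     reverse=True)
    return ordered[0], candidates[ordered[0]]
```

In the example below, category "A" is chosen even though "B" has the higher category similarity, because "A" has more legal similarities; this is exactly the reordering that S208 describes.

```python
cat_sims = {"A": 0.75, "B": 0.80, "C": 0.30}
raw_sims = {"A": [0.90, 0.70, 0.65], "B": [0.95, 0.90, 0.40], "C": [0.30, 0.30, 0.30]}
pick_target_similarity(cat_sims, raw_sims, first_threshold=0.5, second_threshold=0.6)
# ("A", 0.75): A has 3 legal similarities, B only 2
```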
Based on the above embodiment, in a possible implementation, preprocessing may be performed while acquiring the image to be classified, in order to avoid wasting computing resources on classifying poor-quality images. Specifically, referring to fig. 3, fig. 3 is a flowchart of a specific method for acquiring an image to be classified according to an embodiment of the present invention, including:
S301: acquiring an original image and counting the quality parameters of the original image.
In this embodiment, the original image is the image before preprocessing. In order to ensure the quality of the image to be recognized, and to skip subsequent operations when the quality is low so that computing resources are not wasted, the original image can be evaluated twice, using the quality parameters and the evaluation score. The quality parameters are used for the first evaluation of the original image; their specific content is not limited and may include, for example, sharpness, brightness, and composite gradient.
S302: and when the quality parameters are in the preset interval, obtaining the weight coefficient, and calculating the evaluation score corresponding to the original image according to the weight coefficient.
In this embodiment, the brightness may be determined as the quality parameter. After the original image is obtained, the average brightness of each pixel in the original image is calculated, the average brightness is determined as a quality parameter, and whether the average brightness is in a preset interval or not is judged. The preset interval may be set to [50,140 ]. When the quality parameter is not in the preset interval, a preset operation can be executed, and the preset operation can be to acquire the original image again or other operations, such as no operation, and no operation is executed; when the quality parameter is in the preset interval, the first evaluation is passed, so that the weight coefficient can be obtained and the corresponding evaluation score can be calculated.
The weight coefficient is trained in advance to generate an evaluation score, and the specific size of the weight coefficient is not limited in this embodiment. The weighting factor may be one or more, and may correspond to sharpness (sharpness), complex-gradient (multi-gradient), or the like, respectively. In the evaluation score calculation, it is necessary to first obtain a value of a quality parameter corresponding to a weight coefficient, in this embodiment, the sharpness and the composite gradient may be determined as the quality parameters, and then the weight coefficients corresponding to the sharpness and the composite gradient may be represented by w1 and w2, and then the evaluation score S may be:
S = w1 * sharpness + w2 * multi-gradient,  S ∈ (0, 1)
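A minimal sketch of the two-stage quality gate (S301 to S303). The weight values w1 = 0.6 and w2 = 0.4 are placeholders, not values from the patent, and the sharpness and composite-gradient parameters are assumed to be normalized to (0, 1):

```python
def evaluation_score(sharpness, multi_gradient, w1, w2):
    # S = w1 * sharpness + w2 * multi-gradient, expected in (0, 1)
    # when the parameters are normalized and w1 + w2 <= 1
    return w1 * sharpness + w2 * multi_gradient

def accept_image(mean_brightness, sharpness, multi_gradient,
                 w1=0.6, w2=0.4,
                 brightness_interval=(50, 140), score_threshold=0.8):
    """Two-stage quality gate: brightness interval first, weighted score second."""
    low, high = brightness_interval
    if not (low <= mean_brightness <= high):
        return False          # first evaluation failed: re-acquire or do nothing
    # second evaluation: the weighted score must exceed the preset threshold
    return evaluation_score(sharpness, multi_gradient, w1, w2) > score_threshold
```

Only images that pass both checks would then go on to the preprocessing of S303.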
s303: and when the evaluation score is larger than a preset evaluation threshold value, preprocessing the original image to obtain an image to be classified.
The preset evaluation threshold is used for carrying out secondary evaluation on the original image together with the evaluation score, when the evaluation score is larger than the preset evaluation threshold, the original image can be determined to be high in quality and can be classified by utilizing the evaluation threshold, so that the original image can be preprocessed to obtain an image to be classified, and the preprocessing is any one or combination of any several of median filtering, mean filtering, histogram equalization, bilateral filtering and Gaussian filtering. The specific size of the preset evaluation threshold is not limited, and may be set to 0.8, for example.
Based on the above embodiment, before the classifier is used to match the difference features, the classifier may be trained; specifically, the method includes:
Step 1: acquiring a plurality of training sample features corresponding to each registration category.
The training sample features are used for training the classifier; their specific number is not limited in this embodiment. In order to achieve a better training effect and ensure classification accuracy, the same number of training sample features is assigned to each registration category in this embodiment. For example, when there are M registration categories, each registration category corresponds to N training sample features.
Step 2: forming positive sample data pairs from any two training sample features of the same category, and applying positive label processing to the positive sample data pairs.
For example, for the N training sample features of the i-th registration category, i ∈ {1, 2, …, M}, the corresponding positive sample data pairs are obtained by pairwise combination, giving C(N, 2) = N(N-1)/2 pairs per category, and positive label processing is applied, where the positive label may be 1. A total of T positive sample data pairs can thus be obtained, with T = M · C(N, 2).
Step 3: forming negative sample data pairs from any two training sample features of different categories, and applying negative label processing to the negative sample data pairs.
Similar to step 2, two training sample features belonging to different registration categories are used to form a negative sample data pair, and negative label processing is applied; in this embodiment, the negative label may be 0. To guarantee the training effect, the number of negative sample data pairs can also be T.
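The pair construction in steps 2 and 3 can be sketched as follows. The negative pairs are subsampled here to match the positive-pair count T; this truncation is one of several possible balancing strategies, not one prescribed by the patent:

```python
from itertools import combinations
from math import comb

def build_training_pairs(features_by_category):
    """Positive pairs (label 1): two features of the same registration category.
    Negative pairs (label 0): two features of different categories, subsampled
    here to the positive-pair count T = M * C(N, 2) to keep training balanced."""
    positives = []
    for feats in features_by_category.values():
        positives += [(a, b, 1) for a, b in combinations(feats, 2)]
    negatives = []
    categories = list(features_by_category)
    for ci, cj in combinations(categories, 2):
        for a in features_by_category[ci]:
            for b in features_by_category[cj]:
                negatives.append((a, b, 0))
    return positives, negatives[:len(positives)]

# M = 2 categories with N = 3 features each -> T = 2 * C(3, 2) = 6 positive pairs
features = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2", "b3"]}
pos, neg = build_training_pairs(features)
assert len(pos) == 2 * comb(3, 2) == 6
```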
Step 4: training the initial classifier with the positive sample data pairs and the negative sample data pairs to obtain the classifier.
After the positive and negative sample data pairs are obtained, the initial classifier is trained with them, and the classifier is obtained when training finishes.
In another possible implementation, the classifier can be trained in the cloud: the training sample features are sent to the cloud, the classifier parameters sent back by the cloud are obtained after training finishes, and the initial classifier is configured with these parameters to obtain the classifier.
Further, in order to ensure classification accuracy, the classifier can be updated after a period of time; specifically:
Step 5: sending the registered sample features to the cloud, so that the cloud integrates the registered sample features with the training sample features and then performs classifier training.
In this embodiment, in order to speed up the classifier update, the update training can be completed in the cloud, which has greater computing power. Specifically, the registered sample features are sent to the cloud so that the cloud obtains new training data, i.e. the registered sample features, integrates them with the training sample features, and trains the classifier on the integrated data. Since the registered sample features participate in the update training, targeted training of the classifier can be performed, achieving a better classification effect.
Step 6: obtaining the classifier parameters sent by the cloud, and updating the classifier with the classifier parameters.
The cloud sends back the trained classifier parameters; after obtaining them, the device uses them to update the classifier, completing the classifier update. Further, in order to protect the privacy of the registered sample features, a deletion instruction can be sent to the cloud after the classifier is updated, so that the cloud deletes the registered sample features it acquired.
Based on the above embodiments, this embodiment describes a specific implementation of the above method, applying it to face recognition and classification. Referring to fig. 6, fig. 6 is a schematic structural diagram of a face recognition system according to an embodiment of the present invention. The face recognition system 600 includes a cloud and a mobile terminal. The mobile terminal includes a system control unit 601, a detection unit 602, a quality evaluation unit 603, a feature extraction unit 604, a recognition unit 605, and a mobile terminal memory 606, where the mobile terminal memory 606 stores the registered sample features and their labels (i.e. the registration categories). The cloud includes a quality analysis unit 607 and a data updating and encryption unit 608.
The system control unit 601 is configured to control the image acquisition device to acquire original images, send the original images to the detection unit 602, and receive control instructions from the other units. Specifically, original images may first be acquired at a first frame rate of N frames per second while no stop-acquisition instruction has been received; when a stop-acquisition instruction is received, acquisition may continue at a second frame rate after a first duration of S seconds and stop after a second duration of S seconds.
The detection unit 602 is configured to perform face recognition on the image and determine whether the original image is a face image; specifically, referring to fig. 7, fig. 7 is a schematic structural diagram of a detection unit according to an embodiment of the present invention. The face detection module 701 performs pre-detection, i.e. detects whether a possible face exists in the original image, and the face detection judgment module 702 determines whether detection succeeded. If no face is detected, detection is unsuccessful and failure is fed back; feeding back failure may mean sending a continue-acquisition instruction to the system control unit 601 so that another original image is acquired, or simply waiting, without any operation, for the next original image sent by the system control unit 601. If detection succeeds, the mouth-nose detection module 703 performs mouth-nose detection and the mouth-nose detection judgment module 704 determines whether it succeeded; on failure, failure is fed back. On success, the original image can be determined as the image to be recognized and passed to the binocular detection module 705, whose result is judged by the binocular detection judgment module 706; on failure, failure is fed back, and on success, face image rectification is performed by the face rectification module 707 and success can be fed back to the system control unit 601 so that acquisition of original images stops.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a binocular detection module according to an embodiment of the present invention. During binocular detection, 801 and 807 perform left-eye and right-eye monocular detection respectively, where left-eye monocular detection is monocular detection based on the left-eye region and, similarly, right-eye monocular detection is based on the right-eye region. Whether detection succeeded is judged by 802 and 808; on success, the left-eye coordinates in the left-eye region or the right-eye coordinates in the right-eye region can be obtained by 806 and 811. If either monocular detection fails, binocular detection is performed by 803 and/or 809, and 804 and/or 810 judge whether it succeeded; on success, the left-eye coordinates matching the left-eye region or the right-eye coordinates matching the right-eye region can be obtained. On failure, 805 may notify the system control unit 601 that detection failed.
After both eyes are successfully detected, the original image may be input to the face rectification module 707 for affine-transformation rectification and inner-face cropping, yielding an inner face image. The inner face image is input to the quality evaluation unit 603 for quality evaluation: specifically, the quality parameters of the inner face image are counted and judged against the preset interval; if they are in the preset interval, the weight coefficients are obtained and the evaluation score of the inner face image is calculated from them, and when the evaluation score is greater than the preset evaluation threshold, the inner face image is determined as the image to be classified. It should be noted that the weight coefficients required by the quality evaluation unit 603 may be trained in the cloud quality analysis unit 607 and sent to the mobile terminal after training. Specifically, the mobile terminal can acquire P face images, where P may be 1000, and send them to the cloud; the cloud performs quality analysis training on the images according to the quality analysis training model to obtain the weight coefficients.
After the image to be recognized is obtained, the feature extraction unit 604 performs feature extraction on it, and the obtained features to be recognized are input to the recognition unit 605. Referring to fig. 9, fig. 9 is a schematic structural diagram of a recognition unit according to an embodiment of the present invention. The recognition unit 605 uses 901 to obtain the registered feature samples from the mobile terminal memory, covering M categories with N samples each. The mathematical operation, i.e. the calculation of the difference features, is performed by 902; the operation may be vector subtraction, vector averaging, vector deviation, or a combination of several such methods. The features to be measured (i.e. the difference features) are collected by 903, and the similarity corresponding to each difference feature is obtained by the classifier in 904. Specifically, the difference features are input into the classifier to obtain a preset number of neighborhood voting results, and the similarities are obtained from the neighborhood voting results. The similarities are then checked by 905 and 906 against a similarity threshold (i.e. the first threshold) and a category threshold (i.e. the second threshold), respectively.
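The difference-feature calculation performed by module 902 can be sketched as follows; `"subtract"` and `"mean"` illustrate two of the operations mentioned (vector subtraction and vector averaging), and all names are chosen for illustration:

```python
def difference_features(feature_to_classify, registered_features, op="subtract"):
    """Second-layer feature extraction: combine the feature to be classified
    with each registered sample feature to obtain one difference feature each."""
    diffs = []
    for reg in registered_features:
        if op == "subtract":          # vector subtraction
            diffs.append([q - r for q, r in zip(feature_to_classify, reg)])
        elif op == "mean":            # vector averaging
            diffs.append([(q + r) / 2.0 for q, r in zip(feature_to_classify, reg)])
        else:
            raise ValueError("unsupported operation: " + op)
    return diffs

# one query feature combined with two registered sample features
difference_features([1.0, 2.0], [[0.0, 1.0], [2.0, 2.0]])
# [[1.0, 1.0], [-1.0, 0.0]]
```

Each resulting difference feature would then be passed to the classifier (module 904) to obtain its similarity.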
Specifically: all similarities are integrated to obtain a plurality of category similarities; each category similarity is compared with the first threshold, and those greater than the first threshold are determined as candidate similarities. When there is one candidate similarity, it is determined as the target similarity. When there is more than one, the candidate similarities are sorted in descending order to obtain a similarity sequence; the similarities greater than the second threshold are determined as legal similarities, the number of legal similarities corresponding to each candidate similarity is counted, the similarity sequence is adjusted in descending order of that number, and the first candidate similarity in the sequence is determined as the target similarity. After this detection finishes, 907 outputs the classification result, i.e. the category of the image to be classified is determined as the target registration category corresponding to the target similarity.
It should be noted that the classifier of the mobile terminal can be trained and updated in the data updating and encryption unit of the cloud. Specifically, initial training can be performed locally: a plurality of training sample features corresponding to each registration category are obtained; positive sample data pairs are formed from any two training sample features of the same category and given positive labels; negative sample data pairs are formed from any two training sample features of different categories and given negative labels; and the initial classifier is trained with the positive and negative sample data pairs to obtain the classifier. Alternatively, initial training can be performed in the cloud: the training sample features are sent to the cloud, the classifier parameters sent back by the cloud are obtained after training, and the initial classifier is configured with these parameters to obtain the classifier. Further, for update training, the registered sample features can be sent to the cloud, which integrates them with the training sample features and then trains the classifier; the classifier parameters sent by the cloud are then obtained and used to update the classifier. Meanwhile, to protect the privacy of the registered sample features, a deletion instruction can be sent to the cloud after the classifier is updated, so that the cloud deletes the registered sample features it acquired.
In the following, the image classification apparatus provided by the embodiment of the present invention is introduced, and the image classification apparatus described below and the image classification method described above may be referred to correspondingly.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image classification apparatus according to an embodiment of the present invention, including:
the feature extraction module 410 is configured to obtain an image to be classified, and perform feature extraction processing on the image to be classified to obtain features to be classified;
a difference feature obtaining module 420, configured to determine a plurality of registration sample features corresponding to a plurality of registration categories, and perform feature calculation by using each registration sample feature and a feature to be classified, respectively, to obtain a plurality of difference features; each registration category corresponds to at least one registration sample characteristic respectively;
a similarity determining module 430, configured to perform matching processing on the difference features by using a classifier to obtain similarities corresponding to the difference features;
the classifying module 440 is configured to determine the target similarity by using the similarity, and determine the category of the image to be classified as a target registration category corresponding to the target similarity.
Optionally, the similarity determining module 430 includes:
the voting result counting unit is used for inputting the difference characteristics into the classifier to obtain a preset number of neighborhood voting results;
and the similarity acquisition unit is used for acquiring the similarity according to the neighborhood voting result.
Optionally, the classification module 440 includes:
the integration processing unit is used for integrating the similarity degrees to obtain a plurality of category similarity degrees;
the candidate similarity determining unit is used for comparing each class similarity with a first threshold value and determining the class similarity larger than the first threshold value as a candidate similarity;
a first determining unit configured to determine the candidate similarity as a target similarity when the number of the candidate similarities is one;
the sorting unit is used for sorting the candidate similarities according to the sequence from large to small to obtain a similarity sequence when the number of the candidate similarities is larger than one;
the legal similarity counting unit is used for determining the similarity larger than the second threshold as legal similarity and counting the legal similarity quantity corresponding to each candidate similarity;
and the second determining unit is used for adjusting the similarity sequence according to the sequence of legal similarity quantity from large to small, and determining the first candidate similarity in the similarity sequence as the target similarity.
Optionally, an integrated processing unit comprising:
the first calculating subunit is used for calculating the average value by utilizing the similarity corresponding to the same registration category to obtain the category similarity corresponding to the registration category;
or, alternatively,
the weight determining subunit is used for determining similarity weights corresponding to the similarities;
and the second calculating subunit is used for performing weighted average calculation by using the similarity corresponding to the same registration category based on the similarity weight to obtain the category similarity.
Optionally, the feature extraction module 410 includes:
the quality parameter counting module is used for acquiring the original image and counting the quality parameters of the original image;
the evaluation score calculation module is used for acquiring a weight coefficient when the quality parameter is in a preset interval, and calculating an evaluation score corresponding to the original image according to the weight coefficient;
the preprocessing module is used for preprocessing the original image to obtain an image to be classified when the evaluation score is larger than a preset evaluation threshold; the preprocessing is any one or combination of any several of median filtering, mean filtering, histogram equalization, bilateral filtering and Gaussian filtering.
Optionally, the method further comprises:
the training sample characteristic acquisition module is used for acquiring a plurality of training sample characteristics corresponding to each registration type;
the positive sample data pair acquisition module is used for forming a positive sample data pair by using any two training sample characteristics of the same category and performing positive label processing on the positive sample data pair;
the negative sample data pair acquisition module is used for forming a negative sample data pair by using any two training sample characteristics of different categories and carrying out negative label processing on the negative sample data pair;
and the classifier training module is used for training the initial classifier by utilizing the positive sample data pair and the negative sample data pair to obtain the classifier.
Optionally, the method further comprises:
the sending module is used for sending the registration sample characteristics to the cloud so that the cloud integrates the registration sample characteristics and the training sample characteristics and then carries out classifier training;
and the updating module is used for acquiring the classifier parameters sent by the cloud end and updating the classifier by utilizing the classifier parameters.
By applying the image classification apparatus provided by the embodiment of the present invention, after the features of the image to be classified are extracted, a second layer of feature extraction is performed using the features to be classified and the registered sample features, i.e. feature calculation is performed to obtain the difference features. The difference features reflect the characteristics of the image to be classified better than the features to be classified alone, so classifying according to the difference features improves classification accuracy. In addition, whereas the accuracy of traditional classification methods is poor, obtaining the similarity of each difference feature and determining the target similarity allows the category with the highest probability, among the categories corresponding to the similarities, to be selected as the category of the image to be classified, further improving accuracy. By performing the second layer of feature extraction and determining the category of the image to be classified using the similarities, classification accuracy is improved, solving the problem of low classification accuracy in the related art.
In the following, the image classification apparatus provided by the embodiment of the present invention is introduced, and the image classification apparatus described below and the image classification method described above may be referred to correspondingly.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image classification apparatus according to an embodiment of the present invention. Wherein the image classification device 500 may include a processor 501 and a memory 502, and may further include one or more of a multimedia component 503, an information input/information output (I/O) interface 504, and a communication component 505.
The processor 501 is configured to control the overall operation of the image classification apparatus 500, so as to complete all or part of the steps in the image classification method; the memory 502 is used to store various types of data to support operation at the image classification device 500, which may include, for example, instructions for any application or method operating on the image classification device 500, as well as application-related data. The Memory 502 may be implemented by any type or combination of volatile and non-volatile Memory devices, such as one or more of Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic or optical disk.
The multimedia component 503 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 502 or transmitted through the communication component 505. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 504 provides an interface between the processor 501 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 505 is used for wired or wireless communication between the image classification device 500 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 505 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
The image classification Device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, and is configured to perform the image classification method according to the above embodiments.
The computer-readable storage medium provided by an embodiment of the present invention is introduced below; the computer-readable storage medium described below and the image classification method described above may be referred to in correspondence with each other.
The present invention also provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the steps of the image classification method described above.
The computer-readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiments are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to in correspondence with each other. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is brief; for relevant details, refer to the description of the method.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprise", "include", or any variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The image classification method, device, and equipment and the computer-readable storage medium provided by the present invention are described in detail above. Specific examples are used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is intended only to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An image classification method, comprising:
acquiring an image to be classified, and performing feature extraction processing on the image to be classified to obtain a feature to be classified;
determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation using each registration sample feature together with the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
matching the difference features by using a classifier to obtain the similarity corresponding to each difference feature;
and determining a target similarity by using the similarities, and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
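As an editorial illustration of the flow in claim 1 (not part of the claims), the sketch below uses hypothetical helper names, an element-wise absolute difference as the feature calculation, and a toy distance-based similarity standing in for the trained classifier:

```python
def difference_feature(registered, query):
    # One possible feature calculation: element-wise absolute difference
    # between a registration sample feature and the feature to be classified.
    return [abs(a - b) for a, b in zip(registered, query)]

def classify(query, registry, similarity_fn, threshold=0.5):
    # registry: {registration category: [registration sample features]}.
    # Returns the registration category whose best similarity exceeds the
    # threshold, or None when no category qualifies.
    best_cat, best_sim = None, threshold
    for category, samples in registry.items():
        for feat in samples:
            sim = similarity_fn(difference_feature(feat, query))
            if sim > best_sim:
                best_cat, best_sim = category, sim
    return best_cat

# Toy stand-in for the classifier: small differences -> high similarity.
sim = lambda d: 1.0 / (1.0 + sum(d))

registry = {"cat": [[0.9, 0.1]], "dog": [[0.1, 0.9]]}
print(classify([0.85, 0.15], registry, sim))  # -> cat
```

The real method would replace `sim` with the trained classifier of claims 2 and 6; the structure of the loop is what the claim fixes.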
2. The image classification method according to claim 1, wherein the matching the difference features by using the classifier to obtain the similarity corresponding to each difference feature comprises:
inputting the difference features into the classifier to obtain a preset number of neighborhood voting results;
and obtaining the similarity according to the neighborhood voting result.
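One plausible reading of the neighborhood voting in claim 2 is a k-nearest-neighbor vote over labeled difference-feature pairs, with the similarity taken as the fraction of neighbors voting "same category". The helper below is a hedged sketch; the function name and the vote-fraction interpretation are assumptions, not fixed by the patent:

```python
def knn_similarity(diff_feature, training_pairs, k=3):
    # training_pairs: list of (difference feature, label) where label is 1
    # for a same-category pair and 0 for a different-category pair.
    # Similarity = fraction of the k nearest pairs voting "same category".
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(training_pairs,
                        key=lambda p: dist(p[0], diff_feature))[:k]
    votes = sum(label for _, label in neighbours)
    return votes / k

pairs = [([0.1], 1), ([0.2], 1), ([0.9], 0), ([1.1], 0), ([0.15], 1)]
print(knn_similarity([0.12], pairs))  # -> 1.0 (all 3 nearest vote "same")
```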
3. The image classification method according to claim 1, wherein the determining a target similarity using the similarity comprises:
integrating the similarities to obtain a plurality of category similarities;
comparing each class similarity with a first threshold, and determining the class similarity larger than the first threshold as a candidate similarity;
when the number of the candidate similarity is one, determining the candidate similarity as the target similarity;
when the number of the candidate similarities is more than one, sorting the candidate similarities in descending order to obtain a similarity sequence;
determining each similarity greater than a second threshold as a legal similarity, and counting the number of legal similarities corresponding to each candidate similarity;
and adjusting the similarity sequence in descending order of the number of legal similarities, and determining the first candidate similarity in the similarity sequence as the target similarity.
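The selection logic of claim 3 can be sketched as follows; the threshold values and helper names are illustrative, since the patent leaves concrete values open:

```python
def select_target(category_sims, per_category_sims, t1=0.5, t2=0.7):
    # category_sims: {category: integrated category similarity}
    # per_category_sims: {category: [individual similarities]}
    candidates = [c for c, s in category_sims.items() if s > t1]
    if len(candidates) == 1:
        return candidates[0]
    # Count "legal" similarities: individual similarities above the
    # second threshold, per candidate category.
    legal = {c: sum(1 for s in per_category_sims[c] if s > t2)
             for c in candidates}
    # Sort by category similarity descending, then re-order by legal
    # count descending; the stable sort keeps ties in similarity order.
    order = sorted(candidates, key=lambda c: category_sims[c], reverse=True)
    order.sort(key=lambda c: legal[c], reverse=True)
    return order[0] if order else None

cat_sims = {"A": 0.6, "B": 0.65, "C": 0.3}
per_sims = {"A": [0.8, 0.75], "B": [0.9, 0.4], "C": [0.3]}
print(select_target(cat_sims, per_sims))  # -> A (two legal sims beat one)
```

Although B has the higher integrated similarity, A wins because more of its individual similarities clear the second threshold, which is exactly the re-ordering the claim describes.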
4. The image classification method according to claim 3, wherein the integrating the similarities to obtain a plurality of category similarities comprises:
calculating an average value by using the similarities corresponding to the same registration category to obtain the category similarity corresponding to that registration category;
or, alternatively,
determining similarity weights corresponding to the similarities;
and based on the similarity weight, carrying out weighted average calculation by using the similarity corresponding to the same registration category to obtain the category similarity.
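Both integration options of claim 4, a plain mean and a weighted mean over the similarities of one registration category, reduce to a few lines (the helper name is hypothetical):

```python
def category_similarity(sims, weights=None):
    # sims: individual similarities for one registration category.
    # weights: optional similarity weights; a plain mean when absent.
    if weights is None:
        return sum(sims) / len(sims)
    return sum(s * w for s, w in zip(sims, weights)) / sum(weights)

print(category_similarity([0.8, 0.6]))          # plain mean, ~0.7
print(category_similarity([0.8, 0.6], [3, 1]))  # weighted mean, ~0.75
```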
5. The image classification method according to claim 1, wherein the acquiring the image to be classified comprises:
acquiring an original image, and computing quality parameters of the original image;
when the quality parameters are in a preset interval, obtaining a weight coefficient, and calculating an evaluation score corresponding to the original image according to the weight coefficient;
when the evaluation score is greater than a preset evaluation threshold, preprocessing the original image to obtain the image to be classified; the preprocessing is any one of, or a combination of several of, median filtering, mean filtering, histogram equalization, bilateral filtering, and Gaussian filtering.
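A minimal sketch of claim 5's quality gate and of one named preprocessing option (median filtering), shown on a 1-D signal for brevity; the choice of quality parameters (brightness, contrast) and the weight coefficients are assumptions, since the claim leaves them open, and a real image would use a 2-D filter window:

```python
import statistics

def quality_score(pixels, weights=(0.5, 0.5)):
    # Hypothetical quality parameters, brightness and contrast, combined
    # with weight coefficients into a single evaluation score.
    brightness = statistics.mean(pixels)
    contrast = statistics.pstdev(pixels)
    return weights[0] * brightness + weights[1] * contrast

def median_filter_1d(pixels, k=3):
    # Median filtering: each sample is replaced by the median of its
    # k-wide neighbourhood, which suppresses impulse noise.
    half = k // 2
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - half):i + half + 1]
        out.append(statistics.median(window))
    return out

raw = [10, 10, 200, 10, 10]      # lone spike = impulse noise
print(median_filter_1d(raw))      # spike removed, all values 10
print(round(quality_score([0, 10]), 2))  # -> 5.0
```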
6. The image classification method according to claim 1, wherein before the matching processing is performed on the difference features by using the classifier, the method further comprises:
obtaining a plurality of training sample features corresponding to each registration category;
forming a positive sample data pair by using any two training sample features of the same category, and applying a positive label to the positive sample data pair;
forming a negative sample data pair by using any two training sample features of different categories, and applying a negative label to the negative sample data pair;
and training an initial classifier by using the positive sample data pair and the negative sample data pair to obtain the classifier.
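The pair construction of claim 6, in which every same-category pair of training sample features is labeled positive and every cross-category pair is labeled negative, can be sketched as follows (the function name is hypothetical):

```python
from itertools import combinations

def build_training_pairs(features_by_category):
    # features_by_category: {category: [training sample features]}
    # Same-category pairs get label 1 (positive); cross-category pairs
    # get label 0 (negative), as the claim specifies.
    positives, negatives = [], []
    cats = list(features_by_category)
    for cat in cats:
        for a, b in combinations(features_by_category[cat], 2):
            positives.append(((a, b), 1))
    for ca, cb in combinations(cats, 2):
        for a in features_by_category[ca]:
            for b in features_by_category[cb]:
                negatives.append(((a, b), 0))
    return positives, negatives

pos, neg = build_training_pairs({"A": [[1], [2]], "B": [[9]]})
print(len(pos), len(neg))  # -> 1 2
```

The initial classifier is then trained on the concatenation of both pair sets; any binary classifier over difference features fits this scheme.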
7. The image classification method according to claim 1, further comprising:
sending the registration sample features to a cloud, so that the cloud integrates the registration sample features with training sample features and then performs classifier training;
and obtaining classifier parameters sent by the cloud, and updating the classifier by using the classifier parameters.
8. An image classification apparatus, comprising:
the feature extraction module is used for acquiring an image to be classified and performing feature extraction processing on the image to be classified to obtain a feature to be classified;
the difference feature acquisition module is used for determining a plurality of registration sample features corresponding to a plurality of registration categories, and performing feature calculation using each registration sample feature together with the feature to be classified to obtain a plurality of difference features; each of the registration categories corresponds to at least one of the registration sample features;
a similarity determining module, configured to perform matching processing on the difference features by using a classifier to obtain similarities corresponding to the difference features;
and the classification module is used for determining a target similarity by using the similarities and determining the category of the image to be classified as a target registration category corresponding to the target similarity.
9. An image classification device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor for executing the computer program to implement the image classification method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the image classification method according to any one of claims 1 to 7.
CN202010476801.0A 2020-05-29 2020-05-29 Image classification method, device, equipment and readable storage medium Active CN111626371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010476801.0A CN111626371B (en) 2020-05-29 2020-05-29 Image classification method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111626371A true CN111626371A (en) 2020-09-04
CN111626371B CN111626371B (en) 2023-10-31

Family

ID=72271956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010476801.0A Active CN111626371B (en) 2020-05-29 2020-05-29 Image classification method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111626371B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017024963A1 (en) * 2015-08-11 2017-02-16 阿里巴巴集团控股有限公司 Image recognition method, measure learning method and image source recognition method and device
CN106446754A (en) * 2015-08-11 2017-02-22 阿里巴巴集团控股有限公司 Image identification method, metric learning method, image source identification method and devices
TW201828156A (en) * 2017-01-19 2018-08-01 阿里巴巴集團服務有限公司 Image identification method, measurement learning method, and image source identification method and device capable of effectively dealing with the problem of asymmetric object image identification so as to possess better robustness and higher accuracy
CN108156519A (en) * 2017-12-25 2018-06-12 深圳Tcl新技术有限公司 Image classification method, television equipment and computer readable storage medium
CN108985159A (en) * 2018-06-08 2018-12-11 平安科技(深圳)有限公司 Human-eye model training method, eye recognition method, apparatus, equipment and medium
CN109522942A (en) * 2018-10-29 2019-03-26 中国科学院深圳先进技术研究院 A kind of image classification method, device, terminal device and storage medium
CN111191067A (en) * 2019-12-25 2020-05-22 深圳市优必选科技股份有限公司 Picture book identification method, terminal device and computer readable storage medium

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200011B (en) * 2020-09-15 2023-08-18 深圳市水务科技有限公司 Aeration tank state detection method, system, electronic equipment and storage medium
CN112200011A (en) * 2020-09-15 2021-01-08 深圳市水务科技有限公司 Aeration tank state detection method and system, electronic equipment and storage medium
CN112148907A (en) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 Image database updating method and device, electronic equipment and medium
CN112614109B (en) * 2020-12-24 2024-06-07 四川云从天府人工智能科技有限公司 Image quality evaluation method, apparatus and computer readable storage medium
CN112614109A (en) * 2020-12-24 2021-04-06 四川云从天府人工智能科技有限公司 Image quality evaluation method, device and computer readable storage medium
CN112668488A (en) * 2020-12-30 2021-04-16 湖北工程学院 Method and system for automatically identifying seeds and electronic equipment
CN112463972B (en) * 2021-01-28 2021-05-18 成都数联铭品科技有限公司 Text sample classification method based on class imbalance
CN112463972A (en) * 2021-01-28 2021-03-09 成都数联铭品科技有限公司 Sample classification method based on class imbalance
CN112966724A (en) * 2021-02-07 2021-06-15 惠州市博实结科技有限公司 Method and device for classifying image single categories
CN112966724B (en) * 2021-02-07 2024-04-09 惠州市博实结科技有限公司 Method and device for classifying image single categories
CN113178248A (en) * 2021-04-28 2021-07-27 联仁健康医疗大数据科技股份有限公司 Medical image database establishing method, device, equipment and storage medium
CN113963197A (en) * 2021-09-29 2022-01-21 北京百度网讯科技有限公司 Image recognition method and device, electronic equipment and readable storage medium
CN114064738A (en) * 2022-01-14 2022-02-18 杭州捷配信息科技有限公司 Electronic component substitute material searching method and device and application
CN114064738B (en) * 2022-01-14 2022-04-29 杭州捷配信息科技有限公司 Electronic component substitute material searching method and device and application
CN114972883A (en) * 2022-06-17 2022-08-30 平安科技(深圳)有限公司 Target detection sample generation method based on artificial intelligence and related equipment
CN114972883B (en) * 2022-06-17 2024-05-10 平安科技(深圳)有限公司 Target detection sample generation method based on artificial intelligence and related equipment

Also Published As

Publication number Publication date
CN111626371B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN111626371B (en) Image classification method, device, equipment and readable storage medium
US10402627B2 (en) Method and apparatus for determining identity identifier of face in face image, and terminal
CN109117803B (en) Face image clustering method and device, server and storage medium
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
WO2019033525A1 (en) Au feature recognition method, device and storage medium
WO2016149944A1 (en) Face recognition method and system, and computer program product
CN109858375B (en) Living body face detection method, terminal and computer readable storage medium
US11126827B2 (en) Method and system for image identification
WO2020244071A1 (en) Neural network-based gesture recognition method and apparatus, storage medium, and device
CN109376604B (en) Age identification method and device based on human body posture
CN110569731A (en) face recognition method and device and electronic equipment
CN110741377A (en) Face image processing method and device, storage medium and electronic equipment
JP7089045B2 (en) Media processing methods, related equipment and computer programs
US10423817B2 (en) Latent fingerprint ridge flow map improvement
CN109934077B (en) Image identification method and electronic equipment
CN111626240B (en) Face image recognition method, device and equipment and readable storage medium
WO2023273616A1 (en) Image recognition method and apparatus, electronic device, storage medium
CN109711287B (en) Face acquisition method and related product
Lahiani et al. Hand pose estimation system based on Viola-Jones algorithm for android devices
CN110688872A (en) Lip-based person identification method, device, program, medium, and electronic apparatus
CN110659631A (en) License plate recognition method and terminal equipment
CN117237757A (en) Face recognition model training method and device, electronic equipment and medium
CN115457595A (en) Method for associating human face with human body, electronic device and storage medium
CN114067394A (en) Face living body detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant