CN111488919B - Target recognition method and device, electronic equipment and computer readable storage medium
- Publication number
- CN111488919B (application CN202010212063.9A)
- Authority
- CN
- China
- Prior art keywords
- similarity
- target
- threshold
- target data
- features
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The disclosure provides a target recognition method and apparatus. The target recognition method includes: acquiring target data to be recognized; performing feature extraction on the target data through a target recognition model to obtain a target feature of the target data; comparing the target feature with each base library feature in a base library to obtain base library similarities; determining a first similarity, which is the maximum of the base library similarities, and the most similar base library feature corresponding to the first similarity; if the first similarity is smaller than a first threshold and larger than a second threshold, comparing the target feature with each interference feature in an interference library to obtain interference similarities, where the interference library contains features of the same type as the base library features but corresponding to different targets; determining a second similarity, which is the maximum of the interference similarities; and determining the recognition result based on the first similarity and the second similarity. When the similarity obtained by comparison with the base library falls into an undetermined range, the target feature is further compared with the interference library and judged according to that comparison, which avoids missed or false recognition and improves the accuracy of target recognition.
Description
Technical Field
The present disclosure relates generally to the field of intelligent recognition, and more particularly, to a target recognition method, a target recognition apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of society and technology, targets need to be recognized in many scenarios, and further operations are performed according to the recognition result. For example, a target is compared against a base library to judge whether it is an object recorded in the base library: in whitelist recognition, operations such as granting access or unlocking are performed according to the result; in blacklist recognition, operations such as raising an alarm can be performed according to the result.
In the recognition process, whether a target is one that exists in the base library is generally judged by comparing a feature-comparison score with a threshold. When the score falls near the threshold, the final judgment may be wrong, leading to inaccurate results such as false recognition or missed recognition.
Disclosure of Invention
In order to solve the above problems in the prior art, a first aspect of the present disclosure provides a target recognition method, wherein the method includes: acquiring target data to be recognized; performing feature extraction on the target data through a target recognition model to obtain a target feature of the target data; comparing the target feature with each base library feature in a base library to obtain base library similarities; determining a first similarity and the most similar base library feature corresponding to the first similarity, wherein the first similarity is the maximum of the base library similarities; if the first similarity is smaller than a first threshold and larger than a second threshold, comparing the target feature with each interference feature in an interference library to obtain interference similarities, wherein the interference library includes a plurality of interference features that are of the same type as the base library features but correspond to different targets; determining a second similarity, wherein the second similarity is the maximum of the interference similarities; and determining a recognition result of the target data based on the first similarity and the second similarity.
In one example, determining the recognition result of the target data based on the first similarity and the second similarity includes: if the second similarity is greater than a third threshold, the recognition result of the target data is a miss; and if the second similarity is smaller than or equal to the third threshold, obtaining the recognition result of the target data according to the most similar base library feature.
In one example, determining the recognition result of the target data based on the first similarity and the second similarity includes: obtaining a similarity difference between the first similarity and the second similarity; and determining the recognition result of the target data based on the similarity difference and the first similarity.
In one example, determining the recognition result of the target data based on the similarity difference and the first similarity includes: if the similarity difference is larger than a fourth threshold, obtaining the recognition result of the target data according to the most similar base library feature; and if the similarity difference is smaller than or equal to the fourth threshold, the recognition result of the target data is a miss.
In one example, determining the recognition result of the target data based on the similarity difference and the first similarity includes: if the first similarity is greater than a fifth threshold and smaller than the first threshold (the fifth threshold being smaller than the first threshold and greater than the second threshold), judging whether the similarity difference is smaller than a sixth threshold: if the similarity difference is smaller than the sixth threshold, the recognition result of the target data is a miss; if the similarity difference is greater than or equal to the sixth threshold, obtaining the recognition result of the target data according to the most similar base library feature; and if the first similarity is smaller than or equal to the fifth threshold and larger than the second threshold, judging whether the similarity difference is smaller than a seventh threshold: if the similarity difference is smaller than the seventh threshold, the recognition result of the target data is a miss; and if the similarity difference is greater than or equal to the seventh threshold, obtaining the recognition result of the target data according to the most similar base library feature.
In one example, the method further includes: obtaining the base library features, wherein the base library features are obtained by performing feature extraction on base library data in the base library through the target recognition model.
In one example, the method further includes: obtaining the interference features, wherein the interference features are obtained by performing feature extraction on interference data in the interference library through the target recognition model.
In one example, the similarity between any interference feature and any base library feature is less than an eighth threshold.
In one example, the similarity between any two interference features is less than a ninth threshold.
In one example, the method further includes: if the first similarity is greater than or equal to the first threshold, obtaining the recognition result of the target data according to the most similar base library feature; and if the first similarity is smaller than or equal to the second threshold, the recognition result of the target data is a miss.
A second aspect of the present disclosure provides a target recognition apparatus, the apparatus comprising: an acquisition module for acquiring target data to be recognized; a feature extraction module for performing feature extraction on the target data through a target recognition model to obtain a target feature of the target data; a comparison module for comparing the target feature with each base library feature in a base library to obtain base library similarities; a first confirming module for confirming a first similarity and the most similar base library feature corresponding to the first similarity, wherein the first similarity is the maximum of the base library similarities; when the first similarity is smaller than a first threshold and larger than a second threshold, the comparison module compares the target feature with each interference feature in an interference library to obtain interference similarities, wherein the interference library includes a plurality of interference features that are of the same type as the base library features but correspond to different targets; a second confirming module for confirming a second similarity, wherein the second similarity is the maximum of the interference similarities; and a processing module for determining a recognition result of the target data based on the first similarity and the second similarity.
A third aspect of the present disclosure provides an electronic device, comprising: a memory for storing instructions; and a processor for invoking the instructions stored in the memory to perform the target recognition method of the first aspect.
A fourth aspect of the present disclosure provides a computer-readable storage medium having instructions stored therein which, when executed by a processor, perform the target recognition method of the first aspect.
According to the target recognition method, target recognition apparatus, electronic device and computer-readable storage medium of the present disclosure, an interference library is provided: when the similarity obtained by comparing the target feature with the base library falls into an undetermined range, the target feature is further compared with the interference library, and the judgment is made according to that comparison, thereby avoiding missed or false recognition and improving the accuracy of target recognition.
Drawings
The above, as well as additional purposes, features, and advantages of embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
FIG. 1 shows a flow diagram of a target recognition method according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a target recognition method according to another embodiment of the present disclosure;
FIG. 3 illustrates a flow diagram of a target recognition method according to another embodiment of the present disclosure;
FIG. 4 shows a flow diagram of a target recognition method according to another embodiment of the present disclosure;
FIG. 5 illustrates a flow diagram of a target recognition method according to another embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a target recognition apparatus according to an embodiment of the present disclosure; and
FIG. 7 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present disclosure and are not intended to limit the scope of the present disclosure in any way.
It should be noted that, although the terms "first", "second", etc. are used herein to describe various modules, steps, data, etc. of the embodiments of the present disclosure, the terms "first", "second", etc. are merely for distinguishing between different modules, steps, data, etc. and not to indicate a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably.
In some related technologies, feature extraction is performed on the target data to be recognized, and the extracted feature is compared with the features in a base library to obtain similarities. If the highest similarity is larger than a threshold, recognition is judged to be successful, i.e., the target is taken to be the same as the target of the base library feature corresponding to the highest similarity; otherwise a miss is reported, i.e., the target is taken not to exist in the base library. In some cases the highest similarity lies near the threshold. For example, it may exceed the threshold by only a small margin and be judged a successful recognition, while in reality it is a false recognition, i.e., the target is not actually present in the base library; or it may fall just below the threshold and be judged a miss, while in reality it is a missed recognition, i.e., the target of the base library feature with the highest similarity actually is the target.
The above problems may be caused by quality fluctuations of the target data itself, by the data quality in the base library, or by other factors in the comparison process, such as lighting and shooting angle in a captured image, the size of the object or the orientation of a face in the image, or irrelevant content in a text affecting its overall features. In such cases, when the target feature of the target to be recognized is compared with the base library features and the interference features, the similarity values may fluctuate as a whole due to the quality of the target data.
To solve the above problems and obtain a more accurate and reliable recognition result, FIG. 1 shows a target recognition method 10 provided in an embodiment of the present disclosure. The method may be used for image recognition, such as recognizing a target person based on a face image, and may also be used for recognizing targets in text or other data types. A recognized target is taken to belong to the target of the base library feature closest to the target feature; in some classification scenarios, the target to be recognized may also be considered to belong to the same category as the target of that closest base library feature. The method can be used both in whitelist systems and in blacklist systems and improves accuracy in either case; since a blacklist system primarily needs to avoid missed recognition and can tolerate false recognition to some extent, the target recognition method 10 is relatively more suitable for whitelist systems. As shown in FIG. 1, the target recognition method 10 may include steps S11 to S17, which are described in detail below:
In step S11, the target data to be recognized is acquired.
According to the object to be recognized and the application scenario, the target data may be a face image, a human body image, sound, text, or other data. It may be collected in real time, received from the cloud or from other devices, or read from locally stored data.
In step S12, feature extraction is performed on the target data through the target recognition model to obtain the target feature of the target data.
The target recognition model is used to extract features from the target data, for example a convolutional neural network (Convolutional Neural Network, CNN) that extracts features from face images, or a bag-of-words model that extracts features from text. The obtained target feature is then compared with the base library features in the base library to obtain similarities, and a recognition result is obtained based on those similarities.
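A minimal sketch of this feature-extraction step, assuming a CNN backbone such as ResNet-18 stands in for the target recognition model; the model choice, input size and normalization are illustrative assumptions and not the patent's actual network.

```python
# Hypothetical feature extractor for step S12: an untrained ResNet-18 used only
# to illustrate the mapping from an image to a fixed-length target feature.
import torch
import torchvision

backbone = torchvision.models.resnet18()   # stand-in for the target recognition model
backbone.fc = torch.nn.Identity()          # drop the classifier, keep the 512-d embedding
backbone.eval()

def extract_feature(image: torch.Tensor) -> torch.Tensor:
    """Map a (3, 224, 224) image tensor to an L2-normalized feature vector."""
    with torch.no_grad():
        feat = backbone(image.unsqueeze(0)).squeeze(0)
    return feat / feat.norm()              # unit length so a dot product equals cosine similarity

target_feature = extract_feature(torch.rand(3, 224, 224))
print(target_feature.shape)                # torch.Size([512])
```

The same extractor would be used both to enroll base library data and to process targets at recognition time, as the description emphasizes below.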
In step S13, the target feature is compared with each base library feature in the base library to obtain the base library similarities.
The target feature is compared with every base library feature, yielding the base library similarity corresponding to each base library feature. In this disclosure, similarity is a measure of how alike two features are: the higher the similarity, the more similar the features. In some embodiments, the comparison may be performed on vectors and expressed as a vector distance, where a smaller distance means the features are more similar and a larger distance means they differ more; the principle is the same, and a mapping between vector distance and similarity can be established to convert between the two.
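A minimal sketch of this comparison step, assuming unit-length feature vectors and using cosine similarity scaled to a 0-100 score; the 0-100 range is borrowed from the illustrative example given later in this description, and the mapping itself is an assumption.

```python
# Hypothetical scoring for step S13 and the selection of the first similarity (step S14).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def to_score(sim: float) -> float:
    """Map cosine similarity in [-1, 1] to a score in [0, 100]; an assumed mapping."""
    return (sim + 1.0) / 2.0 * 100.0

target = np.random.rand(512)
base_library = {"person_a": np.random.rand(512),   # hypothetical enrolled base library features
                "person_b": np.random.rand(512)}

base_scores = {name: to_score(cosine_similarity(target, feat))
               for name, feat in base_library.items()}
best_name, first_similarity = max(base_scores.items(), key=lambda kv: kv[1])
print(best_name, first_similarity)   # most similar base library feature and the first similarity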
In one embodiment, as shown in FIG. 2, the target recognition method 10 may further include step S131: obtaining the base library features, wherein the base library features are obtained by performing feature extraction on the base library data in the base library through the target recognition model. In this embodiment, the base library features may be enrolled in advance: the base library is built from the targets to be matched, the target recognition model extracts features from the base library data, and the extracted features are stored as base library features. For example, in the face recognition field, the base library data may be face images of target persons; the face features of these images are extracted by the target recognition model and stored, and are used during recognition to judge whether a target is one of the persons in the base library. The model used to enroll the base library features is the same model used in actual recognition, so that features are extracted in exactly the same way and no result deviation is introduced by using different models.
In step S14, the first similarity and the most similar base library feature corresponding to it are confirmed, where the first similarity is the maximum of the base library similarities.
After the target feature has been compared with each base library feature, the base library similarity with the largest value is confirmed as the first similarity, and the base library feature corresponding to it, the most similar base library feature, is the base library feature closest to the target feature in the base library.
In step S15, if the first similarity is smaller than the first threshold and larger than the second threshold, the target feature is compared with each interference feature in the interference library to obtain the interference similarities, where the interference library includes a plurality of interference features that are of the same type as the base library features but correspond to different targets.
In the present disclosure, a first threshold and a second threshold are provided, where the first threshold is greater than the second threshold, so that the two form a numerical interval. When the first similarity is smaller than the first threshold and larger than the second threshold, the recognition of the target carries some uncertainty, and a further judgment is needed to increase accuracy. The values of the two thresholds may be set around the value of the original single threshold. For example, in some related schemes the similarity ranges from 0 to 100 and the single threshold is 60: a first similarity above 60 is judged to mean that the target and the base library feature corresponding to the first similarity belong to the same object, and if the first similarity lies near 60 the missed or false recognition described above may occur. In this scheme the first threshold may be set to 70 and the second threshold to 50; if the first similarity falls within the interval of 50 to 70, the subsequent judgment is made through the interference library to improve accuracy. The interference library contains a plurality of interference features of the same type as the base library features, extracted by the same target recognition model, but corresponding to different targets. For example, if the base library features are human-body features obtained by running the target recognition model on the body images in a set A, the interference features are human-body features obtained by running the same model on the body images in a set B, and the sets A and B have no intersection.
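A minimal sketch of this gating, using the illustrative values from the paragraph above (scores in 0-100, first threshold 70, second threshold 50); the function and label names are assumptions for illustration only.

```python
# Hypothetical gating for step S15: decide whether the interference library is consulted.
FIRST_THRESHOLD = 70.0
SECOND_THRESHOLD = 50.0

def gate(first_similarity: float) -> str:
    if first_similarity >= FIRST_THRESHOLD:
        return "hit"                  # confident match with the most similar base library feature
    if first_similarity <= SECOND_THRESHOLD:
        return "miss"                 # confidently not a target in the base library
    return "consult_interference"     # undetermined range: compare against the interference library

print(gate(72.3))   # hit
print(gate(63.0))   # consult_interference
print(gate(41.8))   # miss
```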
In one embodiment, as shown in FIG. 3, the target recognition method 10 may further include: step S18, if the first similarity is greater than or equal to the first threshold, obtaining the recognition result of the target data according to the most similar base library feature; and step S19, if the first similarity is smaller than or equal to the second threshold, the recognition result of the target data is a miss. In this embodiment, when the first similarity is greater than or equal to the first threshold, i.e., the target feature and the most similar base library feature are very similar, the target to be recognized and the target of that base library feature can be considered the same target, and the recognition result is obtained on that basis. For example, in a face recognition scenario, when the similarity exceeds the first threshold the target is considered to be the person in the base library, and the corresponding subsequent operation can be performed, such as unlocking in a whitelist system or raising an alarm in a blacklist system. Conversely, when the first similarity is smaller than or equal to the second threshold, i.e., the target feature and the most similar base library feature are only weakly similar, the target to be recognized can be considered not to correspond to any feature in the base library, and a miss is returned. Again taking face recognition as an example, if the similarity between the target's face feature and the most similar face feature in the base library is below the second threshold, it can be determined that the target does not belong to the base library, and the corresponding operation can be performed, such as denying passage or not unlocking in a whitelist system, or not alarming in a blacklist system.
In one embodiment, as shown in FIG. 4, the target recognition method 10 may further include step S151: obtaining the interference features, wherein the interference features are obtained by performing feature extraction on the interference data in the interference library through the target recognition model. In this embodiment, the interference features may be enrolled in advance, or the target features of targets that failed to hit the base library during actual recognition may be recorded as candidate interference features. The data type of the interference data is consistent with that of the target data and the base library data, and the number of interference features may be close to the number of base library features; however, the targets in the interference library must be different from the targets in the base library, i.e., a target in the base library must not also appear in the interference library, otherwise judgment errors would result. Features are extracted from the interference data by the same target recognition model used for recognition and for enrolling the base library, and the extracted features are stored as interference features. Taking face recognition as an example, a number of face images may be collected outside the base library as interference data, and their face features extracted by the target recognition model and stored. Interference features obtained this way help ensure the accuracy of the target recognition method 10 of the present disclosure.
In step S16, the second similarity is confirmed, where the second similarity is the maximum of the interference similarities.
In the same way, after the target feature has been compared with each interference feature to obtain the interference similarities, the interference similarity with the largest value is confirmed as the second similarity.
In step S17, the recognition result of the target data is determined based on the first similarity and the second similarity.
Since the targets in the interference library are not in the base library, a target feature that is very similar to an interference feature suggests that the target to be recognized is closer to a non-base-library target; therefore, when the first similarity is smaller than the first threshold and larger than the second threshold, the target may well not be in the base library and should be judged a miss. Conversely, if the target feature is not similar to any interference feature, i.e., the second similarity is low, the confidence that the first similarity correctly points to the most similar base library feature is relatively high. Therefore, when the first similarity lies between the second and first thresholds, a more accurate and reliable recognition result can be obtained from the first similarity (comparison with the base library) together with the second similarity (comparison with the interference library).
In an embodiment, step S17 may further include: if the second similarity is greater than a third threshold, the recognition result of the target data is a miss; and if the second similarity is smaller than or equal to the third threshold, obtaining the recognition result of the target data according to the most similar base library feature.
In this embodiment, the second similarity is compared with a preset third threshold. If it is greater, the target feature is similar to the interference feature of some target in the interference library while the first similarity is below the first threshold, so the target feature and the most similar base library feature are unlikely to belong to the same target, and the recognition result of the target data is a miss. Conversely, if the second similarity is smaller than or equal to the third threshold, the target feature is not similar to any interference feature while the first similarity is above the second threshold, so it can be determined that the target feature and the most similar base library feature belong to the same target or the same class. Setting the third threshold in this way conveniently improves recognition accuracy.
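A minimal sketch of this first variant of step S17, deciding from the second similarity alone; the third-threshold value of 55 and the names are purely illustrative assumptions.

```python
# Hypothetical decision rule using the third threshold.
THIRD_THRESHOLD = 55.0

def decide_by_third_threshold(second_similarity: float, best_name: str) -> str:
    if second_similarity > THIRD_THRESHOLD:
        return "miss"                              # too close to a known non-base-library target
    return f"hit: {best_name}"                     # trust the most similar base library feature

print(decide_by_third_threshold(62.0, "person_a"))  # miss
print(decide_by_third_threshold(40.0, "person_a"))  # hit: person_a
```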
In other embodiments, as shown in FIG. 5, step S17 may further include: step S171, obtaining a similarity difference between the first similarity and the second similarity; and step S172, determining the recognition result of the target data based on the similarity difference and the first similarity.
Compared with the previous embodiment, judging on the similarity difference, i.e., the value obtained by subtracting the second similarity from the first similarity, allows a more accurate determination, because it avoids result deviations caused by the absolute similarity values drifting as a whole in some cases.
In an embodiment, step S172 may include: if the similarity difference is larger than a fourth threshold, obtaining the recognition result of the target data according to the most similar base library feature; and if the similarity difference is smaller than or equal to the fourth threshold, the recognition result of the target data is a miss.
In this embodiment, a fourth threshold is preset and used to judge the similarity difference between the first similarity and the second similarity. If the difference is greater than the fourth threshold, the first similarity is clearly higher than the second similarity, so the target feature and the most similar base library feature are likely to belong to the same target or the same category, and the recognition result is obtained accordingly. Conversely, if the difference is smaller than or equal to the fourth threshold, the first similarity is close to, or even lower than, the second similarity; the match between the target feature and the most similar base library feature is then not reliable, and the recognition result is a miss. Judging by the similarity difference makes the decision more accurate and reliable, further ensuring the accuracy of the target recognition result.
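A minimal sketch of this similarity-difference rule (steps S171 and S172); the fourth threshold of 10 score points is an assumed illustrative value on the 0-100 scale used earlier.

```python
# Hypothetical decision rule using the similarity difference and a fourth threshold.
FOURTH_THRESHOLD = 10.0

def decide_by_difference(first_similarity: float, second_similarity: float,
                         best_name: str) -> str:
    difference = first_similarity - second_similarity
    if difference > FOURTH_THRESHOLD:
        return f"hit: {best_name}"   # base library match clearly stronger than any interference match
    return "miss"                    # too close to the interference library to trust the match

print(decide_by_difference(65.0, 48.0, "person_a"))  # hit: person_a
print(decide_by_difference(62.0, 58.0, "person_a"))  # miss
```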
In an embodiment, step S172 may further include: if the first similarity is greater than a fifth threshold and smaller than the first threshold (the fifth threshold being smaller than the first threshold and greater than the second threshold), judging whether the similarity difference is smaller than a sixth threshold: if the similarity difference is smaller than the sixth threshold, the recognition result of the target data is a miss; if the similarity difference is greater than or equal to the sixth threshold, obtaining the recognition result of the target data according to the most similar base library feature. If the first similarity is smaller than or equal to the fifth threshold and larger than the second threshold, judging whether the similarity difference is smaller than a seventh threshold: if the similarity difference is smaller than the seventh threshold, the recognition result of the target data is a miss; and if the similarity difference is greater than or equal to the seventh threshold, obtaining the recognition result of the target data according to the most similar base library feature.
In this embodiment, a fifth threshold is preset that is smaller than the first threshold and larger than the second threshold. Following the earlier example, the fifth threshold may be the original single threshold used in some related technologies to decide a hit or a miss: if the original similarity threshold is 60, the first threshold is set to 70 and the second threshold to 50, then the fifth threshold may be set to 60. The fifth threshold roughly splits the undetermined interval into a part that leans toward a hit, where the main concern is false recognition, and a part that leans toward a miss, where the main concern is missed recognition.
On the one hand, when the first similarity is greater than the fifth threshold and smaller than the first threshold, it is further judged whether the similarity difference is smaller than the sixth threshold in order to detect possible false recognition. The sixth threshold plays the same role as the fourth threshold in the previous embodiment, except that no separate fourth threshold is set here; since the sixth threshold only needs to catch false recognition when the first similarity is already fairly high, its value may be slightly lower than the fourth threshold. If the similarity difference is smaller than the sixth threshold, the first similarity is close to, or even lower than, the second similarity, so a match with the most similar base library feature is likely a false recognition and the result is a miss. If the similarity difference is greater than or equal to the sixth threshold, the first similarity is clearly higher than the second similarity, so the target feature and the most similar base library feature likely belong to the same target or category, and the recognition result is obtained accordingly. On the other hand, when the first similarity is smaller than or equal to the fifth threshold and larger than the second threshold, it is further judged whether the similarity difference is smaller than the seventh threshold in order to detect possible missed recognition. The seventh threshold follows the same principle as the fourth and sixth thresholds; since it only needs to guard against missed recognition when the first similarity is relatively low, its value may be slightly higher than the fourth threshold and higher than the sixth threshold. If the similarity difference is smaller than the seventh threshold, the recognition result is a miss; if it is greater than or equal to the seventh threshold, the recognition result is obtained according to the most similar base library feature, following the same reasoning as in the previous embodiment, which is not repeated here.
By judging each case separately in this way, the embodiment further ensures the accuracy of the final target recognition result.
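A minimal sketch of this case-by-case rule. The first, second and fifth thresholds (70, 50, 60) follow the 0-100 example used in this description; the sixth and seventh values are assumptions chosen only to respect the stated ordering (sixth lower than the fourth, seventh higher than both).

```python
# Hypothetical decision rule combining the fifth, sixth and seventh thresholds.
FIRST, SECOND, FIFTH = 70.0, 50.0, 60.0
SIXTH, SEVENTH = 8.0, 12.0

def decide_in_undetermined_range(first_sim: float, second_sim: float, best_name: str) -> str:
    """Assumes SECOND < first_sim < FIRST, i.e. the undetermined interval of step S15."""
    difference = first_sim - second_sim
    if FIFTH < first_sim < FIRST:
        # leaning toward a hit: use the stricter sixth threshold to catch false recognition
        return f"hit: {best_name}" if difference >= SIXTH else "miss"
    # SECOND < first_sim <= FIFTH, leaning toward a miss: require the larger seventh margin
    return f"hit: {best_name}" if difference >= SEVENTH else "miss"

print(decide_in_undetermined_range(66.0, 52.0, "person_a"))  # difference 14 >= 8  -> hit: person_a
print(decide_in_undetermined_range(66.0, 60.0, "person_a"))  # difference 6 < 8    -> miss
print(decide_in_undetermined_range(57.0, 46.0, "person_a"))  # difference 11 < 12  -> miss
```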
In an embodiment, the similarity between any interference feature and any base library feature is less than an eighth threshold. If an interference feature X is too close to a base library feature Y, then a target feature whose most similar base library feature is Y will also be very similar to X when judged by steps S15 to S17 of any of the foregoing embodiments, so the second similarity will be very high and the final judgment will be adversely affected. Therefore, to further improve the accuracy of the judgment, when enrolling interference features it must be ensured that the similarity between any interference feature and any base library feature is smaller than the eighth threshold, i.e., no interference feature is too close to one or more base library features.
In an embodiment, the similarity between any two interference features is less than a ninth threshold. To keep the features in the interference library diverse and avoid the adverse effects of features that are too close to each other, a ninth threshold may be preset; it may be equal to or different from the eighth threshold. When the interference library is enrolled, the ninth threshold ensures that the similarity between any two interference features stays below it: if the similarity between a candidate interference feature and an already-enrolled interference feature is greater than or equal to the ninth threshold, the candidate is discarded. This preserves the diversity of the interference features and improves the effectiveness of the target recognition method 10 of the present disclosure.
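A minimal sketch of enrolling an interference feature under the two constraints described above; the eighth and ninth thresholds (both 80 here), the 0-100 scoring convention and all names are assumed for illustration only.

```python
# Hypothetical enrollment check for the interference library (eighth and ninth thresholds).
import numpy as np

EIGHTH_THRESHOLD = 80.0
NINTH_THRESHOLD = 80.0

def score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity mapped to a 0-100 score; an assumed scoring convention."""
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return (cos + 1.0) / 2.0 * 100.0

def try_enroll_interference(candidate, base_feats, interference_feats) -> bool:
    """Enroll candidate only if it is far enough from every base library feature
    (eighth threshold) and from every enrolled interference feature (ninth threshold)."""
    if any(score(candidate, f) >= EIGHTH_THRESHOLD for f in base_feats):
        return False
    if any(score(candidate, f) >= NINTH_THRESHOLD for f in interference_feats):
        return False
    interference_feats.append(candidate)
    return True

base_feats = [np.random.rand(512) for _ in range(3)]
interference_feats: list = []
print(try_enroll_interference(np.random.rand(512), base_feats, interference_feats))
```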
Based on the same inventive concept, the present disclosure further provides a target recognition apparatus 100. As shown in FIG. 6, the target recognition apparatus 100 includes: an acquisition module 110 for acquiring target data to be recognized; a feature extraction module 120 for performing feature extraction on the target data through the target recognition model to obtain the target feature of the target data; a comparison module 130 for comparing the target feature with each base library feature in the base library to obtain base library similarities; a first confirming module 140 for confirming the first similarity and the most similar base library feature corresponding to the first similarity, wherein the first similarity is the maximum of the base library similarities; when the first similarity is smaller than the first threshold and larger than the second threshold, the comparison module 130 compares the target feature with each interference feature in the interference library to obtain interference similarities, wherein the interference library includes a plurality of interference features that are of the same type as the base library features but correspond to different targets; a second confirming module 150 for confirming the second similarity, wherein the second similarity is the maximum of the interference similarities; and a processing module 160 for determining the recognition result of the target data based on the first similarity and the second similarity.
In one example, the processing module 160 is configured to: if the second similarity is greater than a third threshold, determine that the recognition result of the target data is a miss; and if the second similarity is smaller than or equal to the third threshold, obtain the recognition result of the target data according to the most similar base library feature.
In one example, the processing module 160 is configured to: obtain a similarity difference between the first similarity and the second similarity; and determine the recognition result of the target data based on the similarity difference and the first similarity.
In one example, the processing module 160 is further configured to: if the similarity difference is larger than a fourth threshold, obtain the recognition result of the target data according to the most similar base library feature; and if the similarity difference is smaller than or equal to the fourth threshold, determine that the recognition result of the target data is a miss.
In one example, the processing module 160 is further configured to: if the first similarity is greater than a fifth threshold and smaller than the first threshold (the fifth threshold being smaller than the first threshold and greater than the second threshold), judge whether the similarity difference is smaller than a sixth threshold: if the similarity difference is smaller than the sixth threshold, the recognition result of the target data is a miss; if the similarity difference is greater than or equal to the sixth threshold, the recognition result of the target data is obtained according to the most similar base library feature; and if the first similarity is smaller than or equal to the fifth threshold and larger than the second threshold, judge whether the similarity difference is smaller than a seventh threshold: if the similarity difference is smaller than the seventh threshold, the recognition result of the target data is a miss; and if the similarity difference is greater than or equal to the seventh threshold, the recognition result of the target data is obtained according to the most similar base library feature.
In one example, the acquisition module 110 is further configured to obtain the base library features, wherein the base library features are obtained by performing feature extraction on the base library data in the base library through the target recognition model.
In one example, the acquisition module 110 is further configured to obtain the interference features, wherein the interference features are obtained by performing feature extraction on the interference data in the interference library through the target recognition model.
In one example, the similarity between any interference feature and any base library feature is less than an eighth threshold.
In one example, the similarity between any two interference features is less than a ninth threshold.
In one example, the processing module 160 is further configured to: if the first similarity is greater than or equal to the first threshold, obtain the recognition result of the target data according to the most similar base library feature; and if the first similarity is smaller than or equal to the second threshold, determine that the recognition result of the target data is a miss.
With respect to the object recognition apparatus 100 in the above-described embodiment, the specific manner in which the respective modules perform the operations has been described in detail in the embodiment regarding the method, and will not be described in detail here.
As shown in FIG. 7, one embodiment of the present disclosure provides an electronic device 200. The electronic device 200 includes a memory 201, a processor 202, and an Input/Output (I/O) interface 203. The memory 201 is used to store instructions, and the processor 202 is used to call the instructions stored in the memory 201 to perform the target recognition method of the disclosed embodiments. The processor 202 is coupled to the memory 201 and the I/O interface 203, for example via a bus system and/or another connection mechanism (not shown). The memory 201 may store programs and data, including the program of the target recognition method involved in the embodiments of the present disclosure; the processor 202 performs the functional applications and data processing of the electronic device 200 by running the programs stored in the memory 201.
The processor 202 in the disclosed embodiments may be implemented in at least one hardware form such as a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA), and may be one or a combination of a central processing unit (CPU) and other processing units with data processing and/or instruction execution capabilities.
The memory 201 in embodiments of the present disclosure may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (Random Access Memory, RAM) and/or cache memory (cache), etc. The nonvolatile Memory may include, for example, a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk (HDD), a Solid State Drive (SSD), or the like.
In the embodiments of the present disclosure, the I/O interface 203 may be used to receive input instructions (e.g., numeric or character information, or key-signal input related to user settings and function control of the electronic device 200) and to output various information (e.g., images or sounds). The I/O interface 203 may include one or more of a physical keyboard, function keys (e.g., volume control keys, a power switch), a mouse, a joystick, a trackball, a microphone, a speaker, and a touch panel.
It will be appreciated that although operations are described in a particular order in the figures, this should not be construed as requiring that these operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
The methods and apparatus of the embodiments of the present disclosure can be implemented using standard programming techniques, with rule-based logic or other logic used to carry out the various method steps. It should also be noted that the words "apparatus" and "module" as used herein and in the claims are intended to cover implementations using one or more lines of software code, and/or hardware implementations, and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code capable of being executed by a computer processor for performing any or all of the described steps, operations, or programs.
The foregoing description of implementations of the present disclosure has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure. The embodiments were chosen and described in order to explain the principles of the present disclosure and its practical application to enable one skilled in the art to utilize the present disclosure in various embodiments and with various modifications as are suited to the particular use contemplated.
Claims (12)
1. A method of target identification, wherein the method comprises:
acquiring target data to be identified;
extracting features of the target data through a target identification model to obtain target features of the target data;
comparing the target features with each base library feature in a base library respectively to obtain base library similarities;
confirming a first similarity and the most similar base library feature corresponding to the first similarity, wherein the first similarity is the maximum of the base library similarities;
if the first similarity is smaller than a first threshold and larger than a second threshold, comparing the target features with each interference feature in an interference library respectively to obtain interference similarities, wherein the interference library comprises a plurality of interference features which are of the same type as the base library features and correspond to different targets;
confirming a second similarity, wherein the second similarity is the maximum of the interference similarities;
determining a recognition result of the target data based on the first similarity and the second similarity;
wherein the determining the recognition result of the target data based on the first similarity and the second similarity includes:
if the second similarity is greater than a third threshold, determining that the recognition result of the target data is a miss;
and if the second similarity is smaller than or equal to the third threshold, obtaining the recognition result of the target data according to the most similar base library feature.
2. The method of claim 1, wherein the determining the recognition result of the target data based on the first similarity and the second similarity comprises:
obtaining a similarity difference between the first similarity and the second similarity;
and determining the recognition result of the target data based on the similarity difference and the first similarity.
3. The method of claim 2, wherein the determining the recognition result of the target data based on the similarity difference and the first similarity comprises:
if the similarity difference is larger than a fourth threshold, obtaining the recognition result of the target data according to the most similar base library feature;
and if the similarity difference is smaller than or equal to the fourth threshold, determining that the recognition result of the target data is a miss.
4. The method of claim 2, wherein the determining the recognition result of the target data based on the similarity difference and the first similarity comprises:
if the first similarity is greater than a fifth threshold and less than the first threshold, wherein the fifth threshold is less than the first threshold and greater than the second threshold,
judging whether the similarity difference is smaller than a sixth threshold: if the similarity difference is smaller than the sixth threshold, the recognition result of the target data is a miss; if the similarity difference is larger than or equal to the sixth threshold, obtaining the recognition result of the target data according to the most similar base library feature;
if the first similarity is less than or equal to the fifth threshold and greater than the second threshold,
judging whether the similarity difference is smaller than a seventh threshold: if the similarity difference is smaller than the seventh threshold, the recognition result of the target data is a miss; and if the similarity difference is larger than or equal to the seventh threshold, obtaining the recognition result of the target data according to the most similar base library feature.
5. The method of claim 1, wherein the method further comprises:
and acquiring the characteristics of the bottom library, wherein the characteristics of the bottom library are obtained by extracting characteristics of bottom library data in the bottom library through the target identification model.
6. The method of claim 5, wherein the method further comprises:
and obtaining the interference features, wherein the interference features are obtained by extracting features of interference data in the interference library through the target identification model.
7. The method of claim 6, wherein a similarity of any of the interference features to any of the bottom library features is less than an eighth threshold.
8. The method of claim 7, wherein a similarity between any two of the interference features is less than a ninth threshold.
9. The method of claim 1, wherein the method further comprises:
if the first similarity is greater than or equal to the first threshold, obtaining the recognition result of the target data according to the most similar bottom library feature;
and if the first similarity is less than or equal to the second threshold, determining that the recognition result of the target data is a miss.
10. A target recognition apparatus, wherein the apparatus comprises:
an acquisition module, configured to acquire target data to be recognized;
a feature extraction module, configured to perform feature extraction on the target data through a target recognition model to obtain a target feature of the target data;
a comparison module, configured to compare the target feature with each bottom library feature in a bottom library to obtain bottom library similarities;
a first confirmation module, configured to confirm a first similarity and the most similar bottom library feature corresponding to the first similarity, wherein the first similarity is the maximum value of the bottom library similarities;
wherein, when the first similarity is less than a first threshold and greater than a second threshold, the comparison module further compares the target feature with each interference feature in an interference library to obtain interference similarities;
a second confirmation module, configured to confirm a second similarity, wherein the second similarity is the maximum value of the interference similarities; and
a processing module, configured to determine the recognition result of the target data based on the first similarity and the second similarity,
wherein the processing module is further configured to: if the second similarity is greater than a third threshold, determine that the recognition result of the target data is a miss; and if the second similarity is less than or equal to the third threshold, obtain the recognition result of the target data according to the most similar bottom library feature.
11. An electronic device, wherein the electronic device comprises:
a memory, configured to store instructions; and
a processor, configured to invoke the instructions stored in the memory to perform the target recognition method of any one of claims 1 to 9.
12. A computer-readable storage medium having stored therein instructions which, when executed by a processor, perform the target recognition method of any one of claims 1 to 9.
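The decision flow recited in claims 1 and 9 amounts to a two-stage, threshold-gated comparison against the bottom library (the enrolled gallery) and the interference library. The Python sketch below only illustrates that flow under stated assumptions: the cosine similarity metric, the `base_ids` list, and the concrete threshold values `t1`–`t3` are illustrative choices, since the claims fix the ordering of the thresholds but not their values or the similarity measure.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors (assumed metric).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(target_feat, base_feats, base_ids, interference_feats,
              t1=0.80, t2=0.50, t3=0.60):
    """Return the matched identity, or None for a miss (illustrative only)."""
    # Compare the target feature with every bottom library feature.
    base_sims = [cosine_sim(target_feat, f) for f in base_feats]
    best = int(np.argmax(base_sims))
    first_sim = base_sims[best]                 # "first similarity"

    if first_sim >= t1:                         # claim 9: confident hit
        return base_ids[best]
    if first_sim <= t2:                         # claim 9: confident miss
        return None

    # Ambiguous band (t2, t1): consult the interference library (claim 1).
    second_sim = max((cosine_sim(target_feat, f) for f in interference_feats),
                     default=0.0)               # "second similarity"
    if second_sim > t3:                         # closer to an interferer: miss
        return None
    return base_ids[best]                       # otherwise report the best match
```

In practice the threshold values would be tuned on validation data; only their relative ordering (second threshold below first threshold) is fixed by the claims.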
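Claims 2 to 4 replace the single third-threshold test with a margin test on the difference between the first and second similarities. The sketch below illustrates both variants; the threshold names `t4`–`t7` and their values are assumptions for illustration only, the sole constraint taken from claim 4 being that the fifth threshold lies between the second and first thresholds.

```python
def decide_by_margin(first_sim, second_sim, best_id, t4=0.15):
    # Claim 3: hit only if the best bottom library match beats the best
    # interference match by more than the fourth threshold.
    diff = first_sim - second_sim
    return best_id if diff > t4 else None

def decide_by_banded_margin(first_sim, second_sim, best_id,
                            t1=0.80, t2=0.50, t5=0.65, t6=0.10, t7=0.20):
    # Claim 4: the required margin depends on which sub-band of (t2, t1)
    # the first similarity falls into, with t2 < t5 < t1.
    diff = first_sim - second_sim
    if t5 < first_sim < t1:
        return best_id if diff >= t6 else None   # upper band: margin >= t6
    if t2 < first_sim <= t5:
        return best_id if diff >= t7 else None   # lower band: margin >= t7
    return None  # outside the ambiguous band these dependent claims do not apply
```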
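Claims 5 to 8 describe how the two galleries are prepared: both the bottom library and the interference library hold features extracted by the same target recognition model, and every interference feature must be sufficiently dissimilar to all bottom library features (eighth threshold) and to all other interference features (ninth threshold). The sketch below shows one way such a screening could look; the `extract` callable and the threshold values `t8`, `t9` are assumptions, not the patent's implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors (assumed metric).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_interference_library(extract, candidates, base_feats, t8=0.40, t9=0.40):
    """Keep only candidates dissimilar to the bottom library and to each other,
    per the constraints of claims 7 and 8 (illustrative only)."""
    kept = []
    for sample in candidates:
        feat = extract(sample)  # same target recognition model as the bottom library
        far_from_base = all(cosine_sim(feat, b) < t8 for b in base_feats)
        far_from_kept = all(cosine_sim(feat, k) < t9 for k in kept)
        if far_from_base and far_from_kept:
            kept.append(feat)
    return kept
```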
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010212063.9A CN111488919B (en) | 2020-03-24 | 2020-03-24 | Target recognition method and device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111488919A CN111488919A (en) | 2020-08-04 |
CN111488919B true CN111488919B (en) | 2023-12-22 |
Family
ID=71798241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010212063.9A Active CN111488919B (en) | 2020-03-24 | 2020-03-24 | Target recognition method and device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111488919B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112631896B (en) * | 2020-12-02 | 2024-04-05 | 武汉旷视金智科技有限公司 | Equipment performance test method and device, storage medium and electronic equipment |
CN113282677A (en) * | 2020-12-09 | 2021-08-20 | 苏州律点信息科技有限公司 | Intelligent traffic data processing method, device and system based on big data |
CN113393145B (en) * | 2021-06-25 | 2023-06-30 | 广东利元亨智能装备股份有限公司 | Model similarity obtaining method and device, electronic equipment and storage medium |
CN114647826A (en) * | 2022-01-30 | 2022-06-21 | 北京旷视科技有限公司 | Identity verification method, electronic device, storage medium, and computer program product |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101677559B1 (en) * | 2013-03-22 | 2016-11-18 | 한국전자통신연구원 | Image registration device and operation method thereof |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103310207A (en) * | 2013-07-06 | 2013-09-18 | 中国科学技术大学 | Moped detection method based on multiple Gaussian models |
CN104598900A (en) * | 2015-02-26 | 2015-05-06 | 张耀 | Human body recognition method and device |
CN106372572A (en) * | 2016-08-19 | 2017-02-01 | 北京旷视科技有限公司 | Monitoring method and apparatus |
CN110490026A (en) * | 2018-05-14 | 2019-11-22 | 阿里巴巴集团控股有限公司 | The methods, devices and systems of identifying object |
WO2020015075A1 (en) * | 2018-07-18 | 2020-01-23 | 平安科技(深圳)有限公司 | Facial image comparison method and apparatus, computer device, and storage medium |
WO2020038136A1 (en) * | 2018-08-24 | 2020-02-27 | 深圳前海达闼云端智能科技有限公司 | Facial recognition method and apparatus, electronic device and computer-readable medium |
CN110334688A (en) * | 2019-07-16 | 2019-10-15 | 重庆紫光华山智安科技有限公司 | Image-recognizing method, device and computer readable storage medium based on human face photo library |
Non-Patent Citations (2)
Title |
---|
Wu Yunlong; Shao Li; Zhang Kai; Li Feng; Sun Xiaoquan. Scale analysis of interference images based on wavelet energy and spot size. Acta Photonica Sinica, Issue 07, full text. *
Scale analysis of interference images based on wavelet energy and spot size; Wu Yunlong; Shao Li; Zhang Kai; Li Feng; Sun Xiaoquan; Acta Photonica Sinica, Issue 07 *
Also Published As
Publication number | Publication date |
---|---|
CN111488919A (en) | 2020-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111488919B (en) | Target recognition method and device, electronic equipment and computer readable storage medium | |
US9672409B2 (en) | Apparatus and computer-implemented method for fingerprint based authentication | |
US6738519B1 (en) | Character recognition apparatus | |
EP2907082B1 (en) | Using a probabilistic model for detecting an object in visual data | |
CN109740633B (en) | Image similarity calculation method and device and storage medium | |
CN111915437A (en) | RNN-based anti-money laundering model training method, device, equipment and medium | |
CN106372564A (en) | Gesture identification method and apparatus | |
CN109302410A (en) | A kind of internal user anomaly detection method, system and computer storage medium | |
CN110413815B (en) | Portrait clustering cleaning method and device | |
CN111738351A (en) | Model training method and device, storage medium and electronic equipment | |
CN111931548B (en) | Face recognition system, method for establishing face recognition data and face recognition method | |
CN112560971A (en) | Image classification method and system for active learning self-iteration | |
CN111783812A (en) | Method and device for identifying forbidden images and computer readable storage medium | |
CN112270204A (en) | Target identification method and device, storage medium and electronic equipment | |
CN111639517A (en) | Face image screening method and device | |
CN110175500B (en) | Finger vein comparison method, device, computer equipment and storage medium | |
CN109961103B (en) | Training method of feature extraction model, and image feature extraction method and device | |
CN109413595B (en) | Spam short message identification method, device and storage medium | |
JPWO2005069221A1 (en) | Pattern identification system, pattern identification method, and pattern identification program | |
CN111241314B (en) | Fingerprint base input method and device, electronic equipment and storage medium | |
Mondéjar-Guerra et al. | Keypoint descriptor fusion with Dempster–Shafer theory | |
Prasad et al. | Methods for ellipse detection from edge maps of real images | |
CN110084157B (en) | Data processing method and device for image re-recognition | |
KR20230156823A (en) | Fingerprint identification method, device, electronic apparatus and storage medium | |
CN109670520B (en) | Target posture recognition method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||