CN115578765A - Target identification method, device, system and computer-readable storage medium


Info

Publication number: CN115578765A
Authority: CN (China)
Prior art keywords: target, feature, comparison result, reference feature, storage space
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202211126060.9A
Other languages: Chinese (zh)
Inventors: 彭文强, 黄鹏, 殷俊, 张小锋
Current and original assignee: Zhejiang Dahua Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202211126060.9A
Publication of CN115578765A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/754 Matching involving a deformation of the sample pattern or of the reference pattern; Elastic matching
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/178 Estimating age from face image; using age information for improving recognition

Abstract

The application discloses a target identification method, device, system and computer-readable storage medium. The method comprises the following steps: receiving target data to be identified sent by a terminal device, where the target data to be identified comprises a target feature to be identified and a target attribute; determining, from a plurality of storage spaces, a target storage space corresponding to the target attribute, where at least one first reference feature corresponding to an attribute is stored in each storage space in advance; comparing the target feature to be identified with the at least one first reference feature in the target storage space to obtain at least one target comparison result; and determining a target identification result according to the at least one target comparison result and sending the target identification result to the terminal device. In this way, comparison efficiency can be improved.

Description

Target identification method, device, system and computer readable storage medium
Technical Field
The present application relates to the field of object recognition technologies, and in particular, to an object recognition method, apparatus, system, and computer-readable storage medium.
Background
Currently, object recognition technologies such as face recognition generally use a neural network model to extract facial features and then compare those features to recognize a face. With a facial feature extraction algorithm, face comparison can then be performed.
The disadvantage is that, during comparison, the feature to be compared is typically matched against all reference features, the output being the similarity between each pair of features; when the number of reference features is large, this exhaustive comparison is slow.
Disclosure of Invention
The present application provides a target identification method, device, system and computer-readable storage medium that can improve comparison efficiency.
One technical solution adopted by the present application is to provide a target identification method, including: receiving target data to be identified sent by a terminal device, where the target data to be identified comprises a target feature to be identified and a target attribute; determining, from a plurality of storage spaces, a target storage space corresponding to the target attribute, where at least one first reference feature corresponding to an attribute is stored in each storage space in advance; comparing the target feature to be identified with the at least one first reference feature in the target storage space to obtain at least one target comparison result; and determining a target identification result according to the at least one target comparison result and sending the target identification result to the terminal device.
The comparing of the target feature to be identified with the at least one first reference feature in the target storage space to obtain at least one target comparison result includes: judging whether a second reference feature exists in the target storage space, where each second reference feature is associated with one first reference feature; if so, comparing the target feature to be identified with the at least one first reference feature in the target storage space to obtain at least one first comparison result; comparing the target feature to be identified with the second reference feature to obtain at least one second comparison result; and obtaining the at least one target comparison result according to the association between the at least one first comparison result and the at least one second comparison result.
Wherein obtaining the at least one target comparison result according to the association between the at least one first comparison result and the at least one second comparison result comprises: taking a weighted average of each first comparison result and its associated second comparison result to obtain the corresponding target comparison result.
The target identification result comprises recognition success or recognition failure. After determining the target recognition result according to the at least one target comparison result, the method comprises: if the target recognition result is recognition success, determining the optimal target comparison result among the at least one target comparison result; judging whether the first reference feature corresponding to the optimal target comparison result has an associated second reference feature; if so, updating the first reference feature or the second reference feature with the target feature to be identified; and if not, taking the target feature to be identified as a second reference feature.
The updating of the first reference feature or the second reference feature by using the target feature to be recognized includes: comparing the first comparison result with the second comparison result; if the first comparison result is higher than the second comparison result, replacing the second reference feature with the target feature to be identified; and if the first comparison result is lower than the second comparison result, replacing the first reference feature with the target feature to be identified.
After determining the target identification result according to at least one target comparison result, the method further comprises the following steps: and if the target recognition result is recognition failure, storing the target feature to be recognized in the target storage space as a first reference feature.
Wherein, the method also comprises: and if the target storage spaces corresponding to the target attributes do not exist in the plurality of storage spaces, comparing the target features to be identified with at least one first reference feature in each storage space to obtain at least one target comparison result.
Before receiving the target data to be identified sent by the terminal device, the method comprises: receiving an image to be processed sent by the terminal device; performing feature extraction on the image to be processed to obtain a feature to be processed, where the feature to be processed is a 128-dimensional feature; and storing each feature to be processed that meets the image-quality requirement in the corresponding storage space as a first reference feature.
Another technical solution adopted by the present application is to provide a target recognition apparatus, including: a processor; a memory connected with the processor and used for storing a computer program, where the memory comprises a plurality of storage spaces and each storage space pre-stores at least one first reference feature corresponding to an attribute; and a communication module connected with the processor and used for communicating with the terminal device. The processor executes the computer program to control the memory and the communication module so as to implement the method provided by the above technical solution.
Another technical solution adopted by the present application is to provide a target recognition system, including: a terminal device; and a target recognition apparatus in communication connection with the terminal device, where the target recognition apparatus is the one provided by the above technical solution.
Another technical solution adopted by the present application is to provide a computer-readable storage medium for storing a computer program which, when executed by a processor, implements the method provided by the above technical solution.
The beneficial effects of the present application are as follows. Unlike the prior art, the target identification method stores at least one first reference feature corresponding to an attribute in each storage space in advance. When target data to be identified sent by the terminal device is received, the target storage space corresponding to the target attribute is determined from the plurality of storage spaces, and the target feature to be identified is compared only with the at least one first reference feature in that target storage space to obtain at least one target comparison result. This reduces the number of comparisons between the target feature to be identified and the first reference features, improves comparison efficiency, and allows the target identification result to be obtained quickly.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort. In the drawings:
FIG. 1 is a schematic flow chart diagram of a first embodiment of a target identification method provided in the present application;
FIG. 2 is a schematic diagram of an application scenario of the object recognition method provided in the present application;
FIG. 3 is a schematic flow chart diagram illustrating a second embodiment of a target identification method provided in the present application;
FIG. 4 is a schematic flow chart diagram illustrating a third embodiment of a target identification method provided in the present application;
FIG. 5 is a schematic flowchart of an embodiment of step 43 provided herein;
FIG. 6 is a schematic flowchart of a fourth embodiment of a target identification method provided in the present application;
FIG. 7 is a schematic block diagram of an embodiment of an object recognition device;
FIG. 8 is a schematic diagram of an embodiment of a target recognition system provided herein;
FIG. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. It should be understood that the specific embodiments described here merely illustrate the application and do not limit it. For convenience of description, the drawings show only the structures related to the present application rather than all structures. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of a target identification method provided in the present application. The method comprises the following steps:
step 11: receiving target data to be identified sent by terminal equipment; the target data to be recognized comprises target features to be recognized and target attributes.
In some embodiments, the target data to be recognized may be generated based on a human face image or a human eye image.
Step 12: determining a target storage space corresponding to the target attribute from the plurality of storage spaces; at least one first reference feature corresponding to the attribute is stored in each storage space in advance.
In some embodiments, because the identification process essentially compares the target feature to be identified against a large number of first reference features, comparing them one by one takes a long time when many first reference features are stored. The first reference features are therefore classified by their corresponding attributes, so that first reference features of different classes are stored in separate storage spaces.
For example, the attribute may be age; that is, the first reference features are divided into different age groups, with each age group corresponding to one storage space. For example, if ages 0-15 form a first age group, 15-35 a second age group, 35-60 a third age group, and 60 and above a fourth age group, then there are four storage spaces.
As another example, the attribute may combine age with whether glasses are worn: the first reference features are first divided into age groups, each corresponding to one storage space (for example, ages 0-15, 15-35, 35-60, and 60 and above give four storage spaces), and within each storage space the first reference features are further divided into those corresponding to faces wearing glasses and those corresponding to faces not wearing glasses.
In some embodiments, each storage space may correspond to a database.
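To make the routing in step 12 concrete, below is a minimal sketch of attribute-based storage-space selection using the age-group example above. The function name and the half-open boundary handling are illustrative assumptions, not specified by the patent.

```python
# Illustrative sketch: route a target attribute (age) to a storage-space index.
# The age-group boundaries follow the example in the text; treating each group
# as half-open [low, high) is an assumption to resolve the shared bounds.

AGE_GROUPS = [(0, 15), (15, 35), (35, 60), (60, float("inf"))]

def age_to_space_index(age: float) -> int:
    """Return the index of the storage space whose age group contains `age`."""
    for i, (low, high) in enumerate(AGE_GROUPS):
        if low <= age < high:
            return i
    raise ValueError(f"age {age} is outside all configured age groups")
```

A lookup such as `age_to_space_index(42)` selects the third age group (index 2), so only the first reference features in that one storage space need to be compared.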
Step 13: and comparing the target features to be identified with at least one first reference feature in the target storage space to obtain at least one target comparison result.
In some embodiments, a comparison result may be the similarity between the target feature to be identified and a first reference feature. Specifically, the target feature to be identified may be compared, by traversal, with each first reference feature in the target storage space to obtain the corresponding similarity.
In some embodiments, if there is no target storage space corresponding to the target attribute in the plurality of storage spaces, the target feature to be identified is compared with the at least one first reference feature in each storage space to obtain at least one target comparison result.
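The traversal comparison of step 13 can be sketched as follows. The patent does not name a similarity metric, so cosine similarity is an assumed stand-in; in practice the 128-dimensional features mentioned later in the disclosure would be the vectors compared.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors (assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def compare_with_space(target_feature, first_reference_features):
    """Traverse every first reference feature in the target storage space and
    return one similarity per reference (one 'target comparison result' each)."""
    return [cosine_similarity(target_feature, ref)
            for ref in first_reference_features]
```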
Step 14: and determining a target identification result according to the at least one target comparison result, and sending the target identification result to the terminal equipment.
In some embodiments, the target comparison result with the highest similarity among the at least one target comparison result is determined. If that result meets a preset condition, the recognition is judged successful; recognition success is taken as the target recognition result and sent to the terminal device.
If the target comparison result with the highest similarity does not meet the preset condition, the recognition is judged to have failed; recognition failure is taken as the target recognition result and sent to the terminal device.
In some embodiments, when a recognition-success result is sent to the terminal device, the terminal device may prompt that recognition succeeded; when a recognition-failure result is sent, the terminal device may prompt that recognition failed and collect data again.
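The decision in step 14 — take the highest-similarity result and check it against a preset condition — could look like the sketch below. The similarity threshold is an assumed example of such a condition; the patent does not fix its form or value.

```python
def determine_recognition_result(target_results, threshold=0.8):
    """Pick the highest-similarity comparison result; recognition succeeds only
    if it meets the preset condition (here, an assumed similarity threshold)."""
    if not target_results:
        return ("failure", None)
    best_index = max(range(len(target_results)), key=target_results.__getitem__)
    if target_results[best_index] >= threshold:
        return ("success", best_index)
    return ("failure", best_index)
```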
In an application scenario, the following description is made with reference to fig. 2:
The terminal device collects a target image to be recognized and performs feature extraction on it to obtain the target data to be identified. The terminal device then sends the target data to be identified to the target identification device.
The target identification device receives target data to be identified sent by terminal equipment; the target data to be recognized comprises target features to be recognized and target attributes.
The target identification device determines a target storage space corresponding to the target attribute from a plurality of storage spaces; at least one first reference feature corresponding to the attribute is stored in each storage space in advance; comparing the target features to be identified with at least one first reference feature in the target storage space to obtain at least one target comparison result; and determining a target identification result according to at least one target comparison result.
And the target identification device sends the target identification result to the terminal equipment.
In this embodiment, at least one first reference feature corresponding to an attribute is stored in each storage space in advance. When target data to be recognized sent by a terminal device is received, the target storage space corresponding to the target attribute is determined from the plurality of storage spaces, and the target feature to be recognized is compared with the at least one first reference feature in that target storage space to obtain at least one target comparison result. This reduces the number of comparisons between the target feature to be recognized and the first reference features, improving comparison efficiency so that the target recognition result is obtained quickly, the terminal device receives it promptly, and user experience is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of a second embodiment of the target identification method provided in the present application. The method comprises the following steps:
step 31: receiving target data to be identified sent by terminal equipment; the target data to be recognized comprises target features to be recognized and target attributes.
Step 32: determining a target storage space corresponding to the target attribute from the plurality of storage spaces; at least one first reference feature corresponding to the attribute is stored in each storage space in advance.
Step 31 and step 32 have the same or similar technical solutions as those of the above embodiments, and are not described herein again.
Step 33: judging whether a second reference characteristic exists in the target storage space or not; wherein each second reference feature is associated with one first reference feature.
The second reference feature may be obtained when a target recognition result indicates successful recognition. Specifically, the first reference feature corresponding to the successful recognition is determined, and the target feature to be recognized is then stored in the target storage space as a second reference feature associated with that first reference feature.
If yes, step 34 is executed. If no, this indicates that the target feature to be recognized has no corresponding first reference feature in the target storage space, or that the original image of the target feature did not meet the image-quality requirement so the feature carries little feature information; in either case the comparison cannot proceed.
Step 34: and comparing the target features to be identified with at least one first reference feature in the target storage space to obtain at least one first comparison result.
In some embodiments, the first comparison result may be a similarity of the target feature to be identified and the first reference feature. Specifically, a traversal mode may be adopted to compare the target feature to be identified with each first reference feature in the target storage space, so as to obtain a corresponding similarity.
Step 35: and comparing the target feature to be identified with the second reference feature to obtain at least one second comparison result.
In some embodiments, the second comparison result may be a similarity between the target feature to be recognized and the second reference feature. Specifically, the target feature to be identified may be compared with each second reference feature in the target storage space in a traversal manner, so as to obtain a corresponding similarity.
Step 36: and obtaining at least one target comparison result according to the correlation between the at least one first comparison result and the at least one second comparison result.
Each second reference feature is associated with one first reference feature, so each first comparison result has an associated second comparison result. The associated first and second comparison results therefore need to be combined to obtain the corresponding target comparison result. For example, a weighted average of the associated first and second comparison results gives the target comparison result. The weighting coefficients may all be 1, or the coefficients may be assigned according to the image quality underlying the first and second reference features, with higher image quality receiving a higher coefficient and lower image quality a lower one.
If a first reference feature has no associated second reference feature, the first comparison result corresponding to that first reference feature is taken directly as the target comparison result.
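Steps 34-36 can be sketched as below: each first comparison result is fused with its associated second comparison result by weighted average, and first reference features with no associated second reference pass their first comparison result through unchanged. Equal weights are used here (the patent also allows weighting by image quality); the function name and the index-keyed data layout are illustrative assumptions.

```python
def fuse_comparison_results(first_results, second_results,
                            w_first=1.0, w_second=1.0):
    """Combine first and second comparison results into target comparison
    results. `second_results` maps a first-reference index to the associated
    second comparison result (an assumed data layout)."""
    target_results = []
    for i, first in enumerate(first_results):
        second = second_results.get(i)
        if second is None:
            # No associated second reference feature: use the first result as-is.
            target_results.append(first)
        else:
            # Weighted average of the associated first and second results.
            target_results.append(
                (w_first * first + w_second * second) / (w_first + w_second))
    return target_results
```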
Step 37: and determining a target identification result according to the at least one target comparison result, and sending the target identification result to the terminal equipment.
In some embodiments, the target comparison result with the highest similarity among the at least one target comparison result is determined. If that result meets a preset condition, the recognition is judged successful; recognition success is taken as the target recognition result and sent to the terminal device.
If the target comparison result with the highest similarity does not meet the preset condition, the recognition is judged to have failed; recognition failure is taken as the target recognition result and sent to the terminal device.
In some embodiments, when a recognition-success result is sent to the terminal device, the terminal device may prompt that recognition succeeded; when a recognition-failure result is sent, the terminal device may prompt that recognition failed and collect data again.
In some embodiments, after the target recognition result is determined according to the at least one target comparison result, the following process, shown in fig. 4, may be performed:
step 41: and if the target identification result is successful, determining the optimal target comparison result in the at least one target comparison result.
The optimal target comparison result is the target comparison result with the highest similarity in the at least one target comparison result.
Step 42: judging whether the first reference feature corresponding to the optimal target comparison result has an associated second reference feature.
The optimal target comparison result may have been obtained from the first reference feature alone, or from the first reference feature together with a second reference feature. It is therefore necessary to determine whether the first reference feature corresponding to the optimal target comparison result has an associated second reference feature.
If the first reference feature corresponding to the best target comparison result has the associated second reference feature, step 43 is executed, and if the first reference feature corresponding to the best target comparison result does not have the associated second reference feature, step 44 is executed.
Step 43: and updating the first reference feature or the second reference feature by using the target feature to be recognized.
When the first reference feature corresponding to the optimal target comparison result has an associated second reference feature, the optimal target comparison result was obtained from both the first and second reference features. The first reference feature or the second reference feature corresponding to the optimal result can therefore be updated using the target feature to be identified.
In some embodiments, referring to fig. 5, step 43 may be the following flow:
step 51: comparing the first comparison result with the second comparison result.
The first comparison result is obtained by comparing the target feature to be recognized with the first reference feature, and the second comparison result is obtained by comparing the target feature to be recognized with the second reference feature corresponding to the first reference feature.
Step 52: and if the first comparison result is higher than the second comparison result, replacing the second reference feature with the target feature to be identified.
If the first comparison result is higher than the second comparison result, the target feature to be recognized carries more feature information than the second reference feature, so the second reference feature is replaced with the target feature to be recognized.
Step 53: and if the first comparison result is lower than the second comparison result, replacing the first reference feature with the target feature to be identified.
And if the first comparison result is lower than the second comparison result, which indicates that the target feature to be recognized has more feature information than the first reference feature, replacing the first reference feature with the target feature to be recognized.
Step 44: and taking the target feature to be recognized as a second reference feature.
If the first reference feature corresponding to the optimal target comparison result has no associated second reference feature, the target feature to be recognized can be stored as a second reference feature associated with that first reference feature.
In other embodiments, if the target recognition result is recognition failure, the target feature to be recognized is stored in the target storage space as the first reference feature.
In this embodiment, each storage space pre-stores at least one first reference feature corresponding to an attribute. When target data to be recognized is received from a terminal device, the target storage space corresponding to the target attribute is determined from the plurality of storage spaces, and the target feature to be recognized is compared only with the first reference features in that target storage space to obtain the target comparison results. This reduces the number of comparisons between the target feature to be recognized and the first reference features and improves comparison efficiency, so the target recognition result can be obtained and returned to the terminal device quickly, improving the user experience.
Furthermore, by combining the first comparison result and the second comparison result, either the first reference feature or the second reference feature can be selected for dynamic updating, so that the storage space always holds the latest reference features; this real-time update scheme improves the accuracy of subsequent comparisons.
In some embodiments, the following steps are performed before the target data to be identified is received from the terminal device. Referring to fig. 6, fig. 6 is a schematic flowchart of a fourth embodiment of the target identification method provided in the present application. The method comprises the following steps:
Step 61: receive the image to be processed sent by the terminal device.
In some embodiments, the target recognition device needs to store first reference features in advance; it therefore collects images to be processed beforehand and extracts the corresponding first reference features from them.
Step 62: perform feature extraction on the image to be processed to obtain a feature to be processed; the feature to be processed is a 128-dimensional feature.
In some embodiments, feature extraction may be performed with a corresponding feature extraction network to obtain the feature to be processed. The feature extraction network may be built on a convolutional neural network, a residual network, or a generative adversarial network.
Specifically, the feature extraction network extracts a floating-point 512-dimensional feature, which is then reduced to a 128-dimensional integer feature by a quantization technique.
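The description fixes only the endpoints of this step: a floating-point 512-dimensional feature in, a 128-dimensional integer feature out. One plausible sketch is a fixed random projection followed by symmetric int8 quantization; both choices are assumptions, since the application does not name the actual scheme:

```python
import numpy as np

def compress_feature(feat512: np.ndarray, seed: int = 0) -> np.ndarray:
    """Reduce a float 512-d feature to an int8 128-d feature (illustrative)."""
    rng = np.random.default_rng(seed)
    # A fixed projection matrix; enrollment and query must share the seed
    # so both sides land in the same 128-d space.
    proj = rng.standard_normal((512, 128)) / np.sqrt(128)
    reduced = feat512 @ proj                      # still floating point, 128-d
    scale = float(np.abs(reduced).max()) or 1.0   # avoid division by zero
    return np.round(reduced / scale * 127).astype(np.int8)
```

Each stored feature then costs 128 bytes instead of 2048 (512 x float32), which matches the storage-saving motivation in the text.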
Step 63: store the features to be processed that meet the image quality requirement as first reference features in the corresponding storage spaces.
When a feature to be processed is obtained, the attribute corresponding to it is generated; if the feature meets the image quality requirement, it is stored as a first reference feature in the storage space corresponding to that attribute.
In this embodiment, the target identification device extracts a 128-dimensional feature from the image using a quantization technique and stores only the feature data, never the original image, which improves the privacy of the original data. The 128-dimensional feature also occupies less storage space, reducing storage requirements, and shortens model inference time. In addition, using image quality as a condition for storage improves the quality of the stored features and thus the accuracy of subsequent identification.
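The application leaves the image-quality criterion itself unspecified. As an illustrative stand-in, a common sharpness proxy, the variance of a Laplacian response, could serve as the storage gate; both the metric and the threshold below are assumptions:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian response, a common sharpness proxy."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def passes_quality(gray: np.ndarray, min_sharpness: float = 100.0) -> bool:
    """Gate applied before a feature is stored as a first reference feature."""
    return laplacian_variance(gray) >= min_sharpness
```

A blurred or featureless image yields a near-zero Laplacian variance and is rejected, which would trigger the re-upload prompt described later in the application scenario.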
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of the target recognition device 70 provided in the present application, which includes: a processor 71, a memory 72 and a communication module 73.
The memory 72 is connected to the processor 71, the memory 72 is used for storing a computer program, and the memory 72 includes a plurality of storage spaces, each of which stores at least one first reference feature corresponding to an attribute in advance.
The communication module 73 is connected to the processor 71 for communicating with the terminal device.
The processor 71 is configured to execute a computer program to control the memory 72 and the communication module 73, so as to implement the following method:
receiving target data to be identified sent by terminal equipment; the target data to be recognized comprises target features to be recognized and target attributes; determining a target storage space corresponding to the target attribute from the plurality of storage spaces; each storage space is pre-stored with at least one first reference feature corresponding to the attribute; comparing the target features to be identified with at least one first reference feature in the target storage space to obtain at least one target comparison result; and determining a target identification result according to the at least one target comparison result, and sending the target identification result to the terminal equipment.
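The flow above can be sketched as follows. The cosine-similarity metric, the 0.6 threshold, and the dictionary-of-lists layout of the storage spaces are illustrative assumptions; the fallback branch mirrors the behavior described for the case where no storage space matches the target attribute:

```python
import numpy as np

THRESHOLD = 0.6  # illustrative acceptance threshold

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(query: np.ndarray, attribute: str, spaces: dict):
    """Compare the query against the storage space matching its attribute.

    `spaces` maps an attribute key to a list of first reference features.
    Returns (success, best_score).
    """
    refs = spaces.get(attribute)
    if refs is None:
        # No storage space matches the target attribute: fall back to
        # comparing against every storage space.
        refs = [r for space in spaces.values() for r in space]
    scores = [cosine(query, r) for r in refs]
    best = max(scores, default=0.0)
    return best > THRESHOLD, best
```

When the attribute does match a storage space, only that space's reference features are scored, which is the source of the efficiency gain claimed above.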
It can be understood that, when the processor 71 is used for executing the computer program, it is also used for implementing the technical solution of any embodiment in the present application, and details are not described here.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a target recognition system provided in the present application, where the target recognition system 80 includes: a terminal device 81 and a target recognition device 70.
Wherein the object recognition device 70 is in communication connection with the terminal device 81, the object recognition device 70 being the object recognition device 70 in the above described embodiment.
In an application scenario, the terminal device 81 may be a face comparison terminal and the target recognition device 70 a server, with the face comparison terminal remotely connected to the server. The server stores a plurality of comparison features.
First, an image such as an identification photo or a live photo is enrolled as a first sample and sent to the server. The extracted floating-point 512-dimensional feature is reduced to a 128-dimensional integer feature by a quantization technique, which shortens the stored feature and effectively reduces the storage overhead of the database in the server. Meanwhile, the image quality of the feature to be stored is compared with a preset image quality to determine whether the storage requirement is met; if not, the user is prompted to upload the photo again, and nothing is stored until the server judges the photo qualified. Age information or specific identity information of the image is saved at the same time. The first samples are divided into different attributes by age group, such as 0-15, 15-35, 35-60 and over 60, to distinguish the characteristics of different groups, and the 128-dimensional features of the first samples are stored in the server databases corresponding to these attributes.
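The age-group partition above amounts to a bucketing function that selects a storage-space attribute for each sample. A sketch follows, with the boundary handling (half-open intervals) as an assumption, since the text does not say which bucket boundary ages fall into:

```python
def age_attribute(age: int) -> str:
    """Map an age to the attribute key of its storage space (illustrative)."""
    if age < 15:
        return "0-15"
    if age < 35:
        return "15-35"
    if age < 60:
        return "35-60"
    return "60+"
```

Both enrollment and query use the same function, so a query is only ever compared against references from its own age group (unless the fallback traversal applies).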
Then, the face comparison terminal collects an external face feature and attaches an attribute identifier to it, with age serving as the attribute identifier. The face comparison terminal sends a request to the server, which decides, according to the attribute identifier, whether all databases of stored features must be traversed.
Specifically, the server looks up the database corresponding to the attribute identifier. If no such database is found, the reference features in all databases are compared with the face feature; if it is found, only the reference features in the corresponding database are compared with the face feature.
The server compares the received face feature sent by the face comparison terminal with the reference features in the corresponding database to generate the comparison results, and takes the result with the highest similarity in that database. If this highest similarity is greater than a set threshold, recognition succeeds and the face feature is stored in the corresponding database as second sample data. If the comparison fails, the face feature is not saved.
Meanwhile, once both a first reference feature uploaded by the user and a second reference feature collected through the face comparison terminal exist in the server, the current face feature is compared against both.
The feature collected by the face comparison terminal is sent to the server, which retrieves the first reference feature and the second reference feature in the corresponding database and computes the similarity of the current face feature to each. The weighted average of the two similarities gives the final similarity; if it exceeds the set threshold, the comparison succeeds. If the similarity to the first reference feature is higher than the similarity to the second reference feature, the second reference feature is replaced by the current face feature value and the original second reference feature is deleted, thereby updating the database.
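The final-similarity computation described here can be sketched directly; the equal weights and the threshold value are assumptions, since the scenario fixes neither:

```python
THRESHOLD = 0.6  # illustrative acceptance threshold

def final_similarity(sim_first: float, sim_second: float,
                     w_first: float = 0.5) -> float:
    """Weighted average of the similarities to the first and second
    reference features, as described in the scenario."""
    return w_first * sim_first + (1.0 - w_first) * sim_second

def comparison_succeeds(sim_first: float, sim_second: float) -> bool:
    return final_similarity(sim_first, sim_second) > THRESHOLD
```

Because the average blends the enrolled photo's feature with the most recently captured one, gradual appearance changes (ageing, lighting) degrade the final similarity more slowly than a comparison against the enrolled photo alone.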
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided by the present application. The computer-readable storage medium 90 stores a computer program 91 which, when executed by a processor, implements the following method steps:
receiving target data to be identified sent by terminal equipment; the target data to be recognized comprises target characteristics to be recognized and target attributes; determining a target storage space corresponding to the target attribute from the plurality of storage spaces; each storage space is pre-stored with at least one first reference feature corresponding to the attribute; comparing the target features to be identified with at least one first reference feature in the target storage space to obtain at least one target comparison result; and determining a target identification result according to the at least one target comparison result, and sending the target identification result to the terminal equipment.
It is understood that the computer program 91, when being executed by a processor, is also adapted to implement the solution of any of the embodiments of the present application.
The technical solution of the present application can be applied to scenarios such as security monitoring, intelligent building access control systems, and identity authentication.
In summary, the technical solution of the present application involves only feature data, not the image information of the original data, during identity authentication, which improves the privacy of the original data. First sample data is uploaded through the terminal device to extract feature values; a quantization technique reduces the storage space, and attribute identifiers distinguish the databases that store different types of features. When a current face feature value sampled by the terminal device is compared, the range of comparison databases is selected according to the attribute identifier, allowing an accurate comparison with improved efficiency. Once the first reference feature and the second reference feature both exist, either can be updated in real time according to the comparison results, improving feature comparison accuracy.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit described above may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied in whole or in part as a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (11)

1. A method of object recognition, the method comprising:
receiving target data to be identified sent by terminal equipment; the target data to be recognized comprises target features to be recognized and target attributes;
determining a target storage space corresponding to the target attribute from a plurality of storage spaces; at least one first reference feature corresponding to the attribute is stored in each storage space in advance;
comparing the target feature to be identified with the at least one first reference feature in the target storage space to obtain at least one target comparison result;
and determining a target identification result according to the at least one target comparison result, and sending the target identification result to the terminal equipment.
2. The method according to claim 1, wherein the comparing the target feature to be identified with the at least one first reference feature in the target storage space to obtain at least one target comparison result comprises:
judging whether a second reference feature exists in the target storage space or not; wherein each said second reference feature is associated with one said first reference feature;
if so, comparing the target feature to be identified with the at least one first reference feature in the target storage space to obtain at least one first comparison result; comparing the target feature to be identified with the second reference feature to obtain at least one second comparison result;
and obtaining at least one target comparison result according to the correlation between the at least one first comparison result and the at least one second comparison result.
3. The method of claim 2, wherein obtaining at least one target alignment result according to the correlation between the at least one first alignment result and the at least one second alignment result comprises:
and performing a weighted average of the first comparison result and the second comparison result having the correlation to obtain the target comparison result.
4. The method of claim 3, wherein the target recognition result comprises a recognition success or a recognition failure; after determining the target identification result according to the at least one target comparison result, the method includes:
if the target identification result is successful, determining the optimal target comparison result in at least one target comparison result;
judging whether the first reference feature corresponding to the optimal target comparison result has the associated second reference feature or not;
if so, updating the first reference feature or the second reference feature by using the target feature to be recognized;
and if not, taking the target feature to be recognized as the second reference feature.
5. The method according to claim 4, wherein the updating the first reference feature or the second reference feature by using the target feature to be recognized comprises:
comparing the first alignment result with the second alignment result;
if the first comparison result is higher than the second comparison result, replacing the second reference feature with the target feature to be identified;
and if the first comparison result is lower than the second comparison result, replacing the first reference feature with the target feature to be identified.
6. The method of claim 3, wherein after determining the target identification result according to the at least one target comparison result, further comprising:
and if the target recognition result is recognition failure, storing the target feature to be recognized as the first reference feature in the target storage space.
7. The method of claim 1, further comprising:
if the target storage space corresponding to the target attribute does not exist in the plurality of storage spaces, comparing the target feature to be identified with the at least one first reference feature in each storage space to obtain at least one target comparison result.
8. The method of claim 1, wherein before the receiving the target data to be identified sent by the terminal device, the method comprises:
receiving an image to be processed sent by terminal equipment;
extracting the features of the image to be processed to obtain the features to be processed; wherein the feature to be processed is a 128-dimensional feature;
and storing the to-be-processed features meeting the image quality as the first reference features in the corresponding storage space.
9. An object recognition apparatus, characterized in that the object recognition apparatus comprises:
a processor;
the memory is connected with the processor and used for storing a computer program, and the memory comprises a plurality of storage spaces, and each storage space is stored with at least one first reference feature corresponding to the attribute in advance;
the communication module is connected with the processor and is used for communicating with the terminal equipment;
wherein the processor is configured to execute the computer program to control the memory and the communication module to implement the method according to any one of claims 1 to 8.
10. An object recognition system, characterized in that the object recognition system comprises:
a terminal device;
object recognition means, communicatively connected to said terminal device, said object recognition means being as claimed in claim 9.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium is used for storing a computer program which, when being executed by a processor, is used for carrying out the method according to any one of the claims 1-8.
CN202211126060.9A 2022-09-15 2022-09-15 Target identification method, device, system and computer readable storage medium Pending CN115578765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211126060.9A CN115578765A (en) 2022-09-15 2022-09-15 Target identification method, device, system and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115578765A true CN115578765A (en) 2023-01-06

Family

ID=84581685



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116910695A (en) * 2023-09-11 2023-10-20 哈尔滨工程大学三亚南海创新发展基地 Marking method of equipment monitoring result and checking method of equipment monitoring data
CN116910695B (en) * 2023-09-11 2024-01-05 哈尔滨工程大学三亚南海创新发展基地 Marking method of equipment monitoring result and checking method of equipment monitoring data

Similar Documents

Publication Publication Date Title
TWI756687B (en) Coding model training method and device for preventing privacy data leakage
US10402627B2 (en) Method and apparatus for determining identity identifier of face in face image, and terminal
CN108491794B (en) Face recognition method and device
CN110188829B (en) Neural network training method, target recognition method and related products
CN110348362B (en) Label generation method, video processing method, device, electronic equipment and storage medium
CN113705425B (en) Training method of living body detection model, and method, device and equipment for living body detection
CN110096996B (en) Biological information identification method, device, terminal, system and storage medium
CN111368133B (en) Method and device for establishing index table of video library, server and storage medium
US11335127B2 (en) Media processing method, related apparatus, and storage medium
KR20220076398A (en) Object recognition processing apparatus and method for ar device
CN116110100B (en) Face recognition method, device, computer equipment and storage medium
CN111401193B (en) Method and device for acquiring expression recognition model, and expression recognition method and device
CN115578765A (en) Target identification method, device, system and computer readable storage medium
CN111368867A (en) Archive classification method and system and computer readable storage medium
CN113128526A (en) Image recognition method and device, electronic equipment and computer-readable storage medium
CN113656927B (en) Data processing method, related device and computer storage medium
CN115830342A (en) Method and device for determining detection frame, storage medium and electronic device
CN114220045A (en) Object recognition method, device and computer-readable storage medium
CN114140822A (en) Pedestrian re-identification method and device
CN113392867A (en) Image identification method and device, computer equipment and storage medium
CN111414952A (en) Noise sample identification method, device, equipment and storage medium for pedestrian re-identification
CN113971422A (en) Sample data labeling system, method and related equipment
CN116633809B (en) Detection method and system based on artificial intelligence
CN116434313B (en) Face recognition method based on multiple face recognition modules
CN113011301A (en) Living body identification method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination