CN111198963A - Target retrieval method and device based on average characteristics and related equipment thereof - Google Patents


Info

Publication number
CN111198963A
CN111198963A (application CN201911281962.8A)
Authority
CN
China
Prior art keywords
average
feature
target image
target
dimension
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911281962.8A
Other languages
Chinese (zh)
Inventor
丁建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Athena Eyes Co Ltd
Original Assignee
Athena Eyes Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Athena Eyes Co Ltd filed Critical Athena Eyes Co Ltd
Priority to CN201911281962.8A priority Critical patent/CN111198963A/en
Publication of CN111198963A publication Critical patent/CN111198963A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of face matching, and provides a target retrieval method and apparatus based on average features, and related devices. The target retrieval method comprises the following steps: extracting a target image from a plurality of acquired images of the same target; processing the target image to obtain first normalized features of the target image in all dimensions; calculating first average features of the target image in each dimension from the first normalized features; acquiring pre-stored average values of the features in the corresponding dimensions from a database, and encoding the first average features according to the acquired averages to obtain feature codes; and comparing the feature codes of the target image in each dimension with the feature codes of different users in the corresponding dimensions in the database, calculating the similarity between the target and the different users in the database, and determining the user with the maximum similarity as the target user. By implementing the invention, the problem of low recognition efficiency of existing dynamic face recognition methods can be solved.

Description

Target retrieval method and device based on average characteristics and related equipment thereof
Technical Field
The invention relates to the technical field of face matching, in particular to a target retrieval method and device based on average features and related equipment.
Background
With the rapid development of computer vision and deep learning, more and more vision algorithms are being deployed in practice, among which face algorithms are the most widely applied. In the field of public security, for example at subway stations, high-speed railway stations, airports and customs ports, face scanning at station entrances is used both for identity verification (1:1 verification) and for blacklist monitoring (fare evaders, smugglers, pickpockets and the like); the latter is known as dynamic face recognition.
At present, several high-quality images are generally selected from the captured face images, and each of them is compared against the images in a base library; whether the average score exceeds a set threshold then determines whether there is a hit. However, the base library is large: various blacklist and whitelist libraries contain hundreds of thousands of entries, and in some application scenarios the library is a rolling one that accumulates the people passing through every day, growing to millions or even tens of millions of entries. Meanwhile, the number of people captured by the cameras is also in the millions, so the number of comparisons is enormous and the recognition process is slow.
In summary, although the above method can be used for dynamic face recognition, comparing such an excessive number of images introduces large delays; the existing dynamic face recognition method therefore suffers from low recognition efficiency.
Disclosure of Invention
The invention provides a target retrieval method, a target retrieval device and related equipment based on average characteristics, and aims to solve the problem of low recognition efficiency of the existing face dynamic recognition method.
A first embodiment of the present invention provides an average feature-based target retrieval method, including:
extracting a target image from a plurality of acquired images of the same target;
processing the target image to obtain first normalized features of the target image on all dimensions;
calculating first average characteristics of the target image on each dimension through the first normalized characteristics;
acquiring a pre-stored average value of the features on the corresponding dimension from a database, and coding the first average feature according to the acquired average value to obtain a feature code of the first average feature of the target image on each dimension;
comparing the feature codes of the target image in each dimension with the feature codes of different users in the corresponding dimension in the database, calculating the similarity between the target and different users in the database according to the comparison result, and determining the user with the maximum similarity as the target user.
A second embodiment of the present invention provides an average feature-based target retrieval apparatus, including:
the target image acquisition module is used for extracting a target image from a plurality of acquired images of the same target;
the first normalized feature acquisition module is used for processing the target image to obtain first normalized features of the target image in each dimension;
the first average characteristic acquisition module is used for calculating first average characteristics of the target image on each dimension through the first normalized characteristics;
the characteristic code acquisition module is used for acquiring the average value of the pre-stored characteristics on the corresponding dimension from the database, and encoding the first average characteristic according to the acquired average value to obtain the characteristic code of the first average characteristic of the target image on each dimension;
and the target user determining module is used for comparing the feature codes of the target image in each dimension with the feature codes of different users in the corresponding dimension in the database, calculating the similarity between the target and the different users in the database according to the comparison result, and determining the user with the maximum similarity as the target user.
A third embodiment of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the steps of the target retrieval method based on average characteristics provided by the first embodiment of the present invention.
A fourth embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the target retrieval method based on average features provided by the first embodiment of the present invention.
In the average feature-based target retrieval method and apparatus and the related devices, a target image is first extracted from a plurality of acquired images of the same target; the target image is processed to obtain first normalized features of the target image in each dimension; the first average features of the target image in each dimension are then calculated from the first normalized features; the pre-stored average values of the features in the corresponding dimensions are acquired from a database, and the first average features are encoded according to the acquired averages to obtain the feature codes of the first average features of the target image in each dimension; finally, the feature codes of the target image in each dimension are compared with the feature codes of different users in the corresponding dimensions in the database, the similarity between the target and the different users in the database is calculated from the comparison result, and the user with the maximum similarity is determined to be the target user. Both the acquired images and the images in the database are thus represented by average features, and the average features are encoded, so retrieval can be carried out on compact feature codes rather than on every image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a target retrieval method based on average features according to a first embodiment of the present invention;
FIG. 2 is a flowchart of an average feature based object retrieval method according to a first embodiment of the present invention;
FIG. 3 is a flowchart of step 11 of the target retrieval method based on average features according to the first embodiment of the present invention;
FIG. 4 is a flowchart of step 14 of the average feature-based object retrieval method according to the first embodiment of the present invention;
FIG. 5 is a flowchart of step 15 of the average feature-based object retrieval method according to the first embodiment of the present invention;
FIG. 6 is a flowchart of a target retrieval method based on average features according to the first embodiment of the present invention;
FIG. 7 is a flowchart of a target retrieval method based on average features according to the first embodiment of the present invention;
FIG. 8 is a block diagram of an apparatus of a target retrieval method based on average features according to a second embodiment of the present invention;
FIG. 9 is a schematic block diagram of an apparatus of a target retrieval method based on average features according to a second embodiment of the present invention;
fig. 10 is a block diagram of a computer device according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The target retrieval method based on average features provided by the first embodiment of the present invention can be applied to the application environment shown in fig. 1, in which a client (computer device) communicates with a server through a network. The server extracts a target image from a plurality of images of the same target input by the client, processes the target image to obtain first normalized features of the target image in all dimensions, and calculates the first average features of the target image in all dimensions from the first normalized features. It then acquires the pre-stored average values of the features in the corresponding dimensions from a database and encodes the first average features according to the acquired averages to obtain the feature codes of the first average features of the target image in each dimension. Finally, it compares the feature codes of the target image in each dimension with the feature codes of different users in the corresponding dimensions in the database, calculates the similarity between the target and the different users in the database from the comparison result, determines the user with the maximum similarity as the target user, and sends the target user to the client. The client (computer device) may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device. The server can be implemented as an independent server or as a server cluster composed of multiple servers.
It should be noted that fig. 1 only shows an application environment schematic diagram of the present embodiment, and the present embodiment may also be applied to other environments without any limitation to the implementation environment of the present embodiment. For example, the camera may acquire a target image from a plurality of images of the same target, and send the acquired target image to the server for processing to acquire the target user.
In a first embodiment of the present invention, as shown in fig. 2, an object retrieval method based on average features is provided, which is described by taking the application of the method to the server side in fig. 1 as an example, and includes the following steps 11 to 15.
Step 11: and extracting a target image from the acquired multiple images of the same target.
The target is an object to be searched, and when the number of acquired images is multiple, the target image should be extracted from each image. The target image is a face region in the image, that is, the image should include a face. In addition, when one image includes a plurality of face regions, a plurality of target images can be extracted from one image, and the method is not particularly limited herein.
Further, as an embodiment of the present embodiment, as shown in fig. 3, the step 11 specifically includes the following steps 111 to 113.
Step 111: multiple images of the same target are acquired.
Specifically, a plurality of images of the same target may be acquired in a plurality of ways. For example, the plurality of images including the face of the target may be obtained from consecutive frame images of a video captured by a camera, the plurality of images including the face of the target may be obtained from pre-stored images, or the plurality of images including the face of the target may be obtained by searching in a network, which is not limited in this respect.
Step 112: and respectively identifying the face areas of the plurality of images.
The face position in the image can be identified by various recognition tools according to skin color, facial features, contours and the like in the image, so that the face region in the image is obtained by segmentation. The recognition tool may be, for example, a Single Shot Detector (SSD) model, OpenCV (the Open Source Computer Vision Library), a Python-based toolkit, or the like.
Step 113: and when the face area of the image meets the preset requirement, taking the face area as a target image.
Specifically, when the face region of the image meets the quality evaluation requirement, the face region is intercepted, and the intercepted image is used as a target image. Further, when the quality of the face region of the image is evaluated, the angle value, the face size, the image ambiguity and the face shielding condition of the face region can be evaluated, and when the angle value, the face size, the image ambiguity and the face shielding condition of the face region all meet the evaluation requirements, the face region of the image meets the preset requirements.
Through the implementation of the steps 111 to 113, the target image can be extracted from the acquired image, the definition of the target image meets the requirement of subsequent processing on the target image, and the accuracy of obtaining the target user is improved.
Step 12: and processing the target image to obtain first normalized features of the target image in each dimension.
Specifically, the features of the target image in each dimension are extracted, and normalization processing is performed on the features of the target image in each dimension, so that first normalized features of the target image in each dimension are obtained. It should be noted that each dimension described in the present embodiment may be a predefined category for summarizing descriptions of the target image from various aspects, and a corresponding description can be obtained in each category.
Further, in this embodiment, the step 12 includes: first obtaining the pixel information of the target image, then obtaining the red, green and blue three-channel information of each pixel in the target image, analyzing the three-channel information to obtain the features of the target image in each dimension, and performing normalization processing on the features of the target image in each dimension to obtain the first normalized features of the target image in each dimension. Specifically, L2 normalization is performed on the features of the target image in each dimension to obtain specific quantized values of the target image in each dimension, and these quantized values are used as the first normalized features of the target image in each dimension.
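As an illustration, the L2 normalization described above can be sketched as follows (a minimal example using NumPy; the upstream feature extraction, e.g. by a convolutional network, is assumed and not shown, and the raw values are hypothetical):

```python
import numpy as np

def l2_normalize(features):
    """Scale a feature vector so that its Euclidean (L2) norm equals 1."""
    features = np.asarray(features, dtype=float)
    norm = np.linalg.norm(features)
    if norm == 0.0:
        return features  # degenerate all-zero vector is left unchanged
    return features / norm

# Hypothetical raw features of a target image in two dimensions.
raw = [3.0, 4.0]
normalized = l2_normalize(raw)
print(normalized)  # [0.6 0.8]; the squared values sum to 1
```

After this step the dot product of two such vectors is directly their cosine similarity, which is what the comparison steps later in the method rely on.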
Step 13: and calculating a first average characteristic of the target image in each dimension through the first normalized characteristic.
The method specifically comprises the steps of firstly summing first normalized features of a plurality of target images in each dimension, then calculating an average value of the first normalized features of the plurality of target images in each dimension, and taking the average value of the first normalized features in each dimension as the first average feature of the target images in each dimension.
To understand step 13 more clearly, consider an example. Three target images are denoted A, B and C, and each has first normalized features in four dimensions, denoted a, b, c and d. The first normalized features of target image A in dimensions a, b, c and d are 0.1, 0.5, 0.7 and 0.6 respectively; those of target image B are 0.2, 0.3, 0.8 and 0.5; and those of target image C are 0.8, 0.4, 0.4 and 0.6. Summing the first normalized features of the three target images gives a sum of 1.1 in dimension a (0.1 + 0.2 + 0.8 = 1.1), a sum of 1.2 in dimension b (0.5 + 0.3 + 0.4 = 1.2), a sum of 1.9 in dimension c (0.7 + 0.8 + 0.4 = 1.9), and a sum of 1.7 in dimension d (0.6 + 0.5 + 0.6 = 1.7). Dividing each sum by the number of images, three, gives average values of approximately 0.367, 0.4, 0.633 and 0.567 in dimensions a, b, c and d, and these averages are taken as the first average features of the target images in the four dimensions.
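The averaging in step 13 can be reproduced in a few lines (a sketch with NumPy; the feature values are illustrative ones consistent with the per-dimension sums stated in the example, not real extracted features):

```python
import numpy as np

# First normalized features of target images A, B and C in dimensions a, b, c, d.
img_a = np.array([0.1, 0.5, 0.7, 0.6])
img_b = np.array([0.2, 0.3, 0.8, 0.5])
img_c = np.array([0.8, 0.4, 0.4, 0.6])

stack = np.stack([img_a, img_b, img_c])
sums = stack.sum(axis=0)           # per-dimension sums: [1.1, 1.2, 1.9, 1.7]
first_average = sums / len(stack)  # divide by the number of images, three
print(np.round(first_average, 3))  # approximately [0.367, 0.4, 0.633, 0.567]
```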
Step 14: and acquiring the average value of the pre-stored features on the corresponding dimension from the database, and coding the first average feature according to the acquired average value to obtain the feature code of the first average feature of the target image on each dimension.
The average value of the features on the corresponding dimension, which is pre-stored in the database, is the average value of the features of all the images in the database on each dimension.
Further, as an implementation manner of this embodiment, as shown in fig. 4, the step 14 specifically includes the following steps 141 to 143.
Step 141: and acquiring the pre-stored average value of the features in the corresponding dimension from the database.
Wherein, the obtained average value of the features in the corresponding dimension should be in one-to-one correspondence with the first average feature of the target image in each dimension.
Step 142: and comparing the characteristic value of each dimension of the first average characteristic with the average value of the corresponding dimension.
Specifically, the magnitude of the feature value of each dimension of the first average feature is compared with the magnitude of the average value of the corresponding dimension, so that a comparison result of the feature value of each dimension of the first average feature and the average value of the corresponding dimension is obtained.
Step 143: and respectively coding each dimension of the first average feature according to the comparison result of the feature value of each dimension of the first average feature and the average value of the corresponding dimension so as to obtain the feature code of the first average feature of the target image in each dimension.
Specifically, 0/1 encoding is performed on the first average feature, so as to obtain feature encoding of the first average feature of the target image in each dimension. When the feature value of the first average feature in a certain dimension is larger than the average value in the dimension, the first average feature in the dimension is coded as '1', and when the feature value of the first average feature in the certain dimension is smaller than the average value in the dimension, the first average feature in the dimension is coded as '0', and feature codes of the first average feature of the target image in each dimension are obtained through one-to-one comparison. It should be noted that the 0/1 encoding method can improve the calculation speed.
Through the implementation of the above steps 141 to 143, the features of the target image in each dimension are represented in the same code form as the features of the images stored in the database, so the two can be compared directly dimension by dimension. This improves the correlation between the target image and the images in the database, and increases both the speed of querying for the target user and the accuracy of the result.
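The 0/1 encoding of step 143 can be sketched as follows (the feature values and database means are hypothetical; ties, which the text does not define, are encoded here as 0):

```python
import numpy as np

def encode(avg_feature, db_means):
    """Encode each dimension as 1 if the first average feature exceeds the
    database-wide mean for that dimension, and 0 otherwise."""
    avg_feature = np.asarray(avg_feature, dtype=float)
    db_means = np.asarray(db_means, dtype=float)
    return (avg_feature > db_means).astype(np.uint8)

avg_feature = [0.37, 0.40, 0.63, 0.57]  # first average feature of the target
db_means = [0.50, 0.30, 0.70, 0.40]     # pre-stored per-dimension averages
code = encode(avg_feature, db_means)
print(code)  # [0 1 0 1]
```

Because the code is binary, it can be packed into machine words and compared with bitwise XOR, which is what makes the later Hamming-distance stage fast.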
Step 15: comparing the feature codes of the target image in each dimension with the feature codes of different users in the corresponding dimension in the database, calculating the similarity between the target and different users in the database according to the comparison result, and determining the user with the maximum similarity as the target user.
Here, the target user is the user that the search query aims to identify. For each user, the feature codes in the respective dimensions are stored in the database; that is, each user can be represented by the feature codes in the respective dimensions.
Further, as an implementation manner of this embodiment, as shown in fig. 5, the step 15 specifically includes the following steps 151 to 153.
Step 151: and calculating the Hamming distance between the feature code of the target image and the feature code of each user pre-stored in the database.
The Hamming distance measures the difference between the feature code of the target image and the feature code of each user pre-stored in the database. Generally, the smaller the Hamming distance, the closer the feature code of the target image is to the feature code of the user.
Step 152: and when the Hamming distance between the feature code of the calculated target image and the feature code of the user is smaller than a preset threshold value, calculating the cosine distance between the feature code of the target image and the feature code of the user, and taking the cosine distance as the similarity.
Step 153: and determining the user with the maximum cosine distance as the target user.
Through the implementation of the above steps 151 to 153, the Hamming distance is calculated first, and the cosine distance between the feature code of the target image and the feature code of the user is then calculated only for users whose Hamming distance is small enough, so as to obtain the target user. Because the Hamming distance is very fast to compute, calculating it first and the cosine distance afterwards applies both similarity measures in this embodiment, making the calculation result more reasonable while accelerating the computation, and thereby increasing both the speed and the accuracy of querying for the target user.
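The two-stage comparison of steps 151 to 153 can be sketched as follows (a minimal illustration: the database contents, codes and threshold are hypothetical, and cosine similarity is used as the "cosine distance" score to be maximized, as in step 153):

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Number of differing bits between two 0/1 feature codes."""
    return int(np.count_nonzero(np.asarray(code_a) != np.asarray(code_b)))

def cosine_similarity(f1, f2):
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def retrieve(target_code, target_feature, db, threshold):
    """db maps user -> (feature code, average feature). The cheap Hamming
    distance prunes candidates; cosine similarity ranks the survivors."""
    best_user, best_sim = None, -1.0
    for user, (code, feature) in db.items():
        if hamming_distance(target_code, code) < threshold:
            sim = cosine_similarity(target_feature, feature)
            if sim > best_sim:
                best_user, best_sim = user, sim
    return best_user, best_sim

# Hypothetical two-user database of (code, average feature) pairs.
db = {
    "alice": (np.array([0, 1, 0, 1]), np.array([0.37, 0.40, 0.63, 0.57])),
    "bob":   (np.array([1, 0, 1, 0]), np.array([0.80, 0.10, 0.90, 0.20])),
}
user, sim = retrieve([0, 1, 0, 1], [0.36, 0.41, 0.64, 0.56], db, threshold=2)
print(user)  # alice; bob is pruned by the Hamming stage
```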
Through the implementation of the steps 11 to 15, the acquired images and the images in the database can be represented by average features, the average features are encoded, and the retrieval is performed through the feature codes, so that the feature codes can be obtained from a plurality of images and then retrieved no matter how many images are input, which is equivalent to the retrieval performed by one image, the calculation amount is reduced, and the problem of low recognition efficiency of the existing face dynamic recognition method is effectively solved.
Further, as an implementation manner of this embodiment, as shown in fig. 6, the following steps 21 to 23 are further included between the above step 11 and step 12.
Step 21: and carrying out face detection on the target image to obtain the face position of the target image.
Specifically, the face contour in the target image is retrieved to obtain the position of the face contour in the target image.
Step 22: and obtaining a key positioning point of the face according to the face position.
The key positioning points of the face may be preset specific points of the face, such as eyes, nose, mouth, and the like. It should be noted that the number of the face key positioning points may be multiple, and when a certain face key positioning point cannot be found in the face position, other face key positioning points may be selected, that is, when there are multiple face key positioning points, an optimal face key positioning point may be predefined.
Step 23: and correcting the target image according to the key positioning points of the human face.
Specifically, the angle of the face is obtained according to the key positioning point of the face, and the target image is deflected according to the angle of the face, so that the face in the target image meets the specified requirements.
Through the implementation of the steps 21 to 23, the target image can be corrected, so that the subsequent extraction of the human face features is easier and more accurate, and the accuracy of inquiring and calculating the target user is improved.
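As one illustration of the correction in steps 21 to 23, the in-plane (roll) angle of the face can be estimated from two key points, the eye centers, and the face crop then rotated by the opposite angle; the pixel coordinates below are hypothetical:

```python
import math

def roll_angle_from_eyes(left_eye, right_eye):
    """Angle in degrees of the line through the two eye centers;
    0 means the eyes are level and no rotation is needed."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical key points: the right eye sits 20 px lower over a 100 px span.
angle = roll_angle_from_eyes((100, 120), (200, 140))
print(round(angle, 2))  # 11.31; rotating the crop by -11.31 degrees levels the eyes
```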
Further, as an implementation manner of the present embodiment, as shown in fig. 7, obtaining the average values of the features in the respective dimensions stored in advance in the database includes the following steps 31 to 33.
Step 31: all images stored in the database are obtained.
Step 32: respectively processing the stored images to obtain second normalized features of the stored images in each dimension;
Step 33: and respectively calculating the average value of the second normalized features of all the stored images in each dimension, and storing the average value in each dimension in the database as the average value of the features in that dimension.
It should be noted that, since the method for obtaining the average value of the features in each dimension in the database in the above steps 31 to 33 is similar to the method for obtaining the first average feature of the target image in each dimension in the above steps 11 to 13, the description is omitted here.
Through the implementation of the above steps 31 to 33, the average value of the features of all the images stored in the database in each dimension can be obtained, and at the same time, the image of each user can be represented by the feature code, so that the subsequent retrieval by the feature code can be realized, and the target user can be obtained.
In addition, the following demonstrates that calculating similarity based on the average feature in this embodiment retrieves the correct target user, achieving the same search effect as calculating similarity directly with the individual features.
Consider an example. Two target images of the same target, each with three dimensions, are normalized respectively to obtain a first normalized feature (x1, x2, x3) and a first normalized feature (y1, y2, y3) in the three dimensions, which satisfy x1·x1 + x2·x2 + x3·x3 = y1·y1 + y2·y2 + y3·y3 = 1. A second normalized feature (z1, z2, z3) of a user is obtained from the database, and it likewise satisfies z1·z1 + z2·z2 + z3·z3 = 1. Here x1, y1 and z1 represent the feature values in the first dimension, x2, y2 and z2 those in the second dimension, and x3, y3 and z3 those in the third dimension.
Calculate the cosine similarity between the first normalized feature (x1, x2, x3) and the second normalized feature (z1, z2, z3), and between the first normalized feature (y1, y2, y3) and (z1, z2, z3); sum the two cosine similarities and take the average, as shown in the following formula (1):

S1 = [(x1·z1 + x2·z2 + x3·z3) + (y1·z1 + y2·z2 + y3·z3)] / 2    (1)

As can be seen from the above formula (1), the average value of the feature similarity is S1 = [(x1 + y1)·z1 + (x2 + y2)·z2 + (x3 + y3)·z3] / 2.

Further, calculate the average feature of the first normalized features (x1, x2, x3) and (y1, y2, y3), namely ((x1 + y1)/2, (x2 + y2)/2, (x3 + y3)/2), and then the cosine similarity between this average feature and the second normalized feature (z1, z2, z3), as shown in the following formula (2):

S2 = [(x1 + y1)·z1 + (x2 + y2)·z2 + (x3 + y3)·z3] / (2·L)    (2)

where L = sqrt(((x1 + y1)/2)^2 + ((x2 + y2)/2)^2 + ((x3 + y3)/2)^2) is the module length of the average feature (the module length of (z1, z2, z3) is 1, so it does not appear in the denominator).

As can be seen from the above formula (2), the result of the similarity calculation based on the average feature is S2 = S1 / L. The difference between S1 obtained from formula (1) and S2 obtained from formula (2) is therefore exactly the module length L of the average feature. Since x1·x1 + x2·x2 + x3·x3 = y1·y1 + y2·y2 + y3·y3 = z1·z1 + z2·z2 + z3·z3 = 1, L depends only on the two target-image features and is the same for every user in the database, so in the present embodiment, calculating the similarity by the method based on the average feature allows the retrieval result to identify the correct target user and achieves the same retrieval result as computing the similarity directly from the features.
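The relationship between the two similarity calculations can be checked numerically. The following is a small sketch (assuming NumPy; the sample vectors are illustrative) showing that the average of the two cosine similarities equals the cosine similarity computed from the average feature multiplied by the module length of the average feature:

```python
import numpy as np

def unit(v):
    """Return the L2-normalized version of a vector."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

x = unit([1.0, 2.0, 2.0])  # first normalized feature of the first target image
y = unit([2.0, 1.0, 2.0])  # first normalized feature of the second target image
z = unit([2.0, 2.0, 1.0])  # second normalized feature of a user in the database

# Formula (1): average of the two individual cosine similarities
avg_of_sims = (np.dot(x, z) + np.dot(y, z)) / 2.0

# Formula (2): cosine similarity between the average feature and z
m = (x + y) / 2.0
sim_of_avg = np.dot(m, z) / (np.linalg.norm(m) * np.linalg.norm(z))

# The two results differ exactly by the module length of the average feature
difference_factor = np.linalg.norm(m)
```

Because `difference_factor` does not depend on the database vector `z`, dividing every similarity by it rescales all users equally and leaves the ranking unchanged.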
It should be noted that, when calculating the Hamming distance, the module length does not affect the sign of any bit of the feature code; when calculating the cosine similarity, the module length does not affect the relative ordering; and multiplying every cosine similarity by the module length yields exactly the same result as computing the similarity directly from the features.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
A second embodiment of the present invention provides an average-feature-based target retrieval device, which corresponds one-to-one with the average-feature-based target retrieval method provided in the first embodiment.
Further, as shown in fig. 8, the average-feature-based target retrieval device includes a target image obtaining module 41, a first normalized feature obtaining module 42, a first average feature obtaining module 43, a feature code obtaining module 44, and a target user determining module 45. The functional modules are described in detail as follows:
a target image obtaining module 41, configured to extract a target image from a plurality of obtained images of the same target;
the first normalized feature obtaining module 42 is configured to process the target image to obtain first normalized features of the target image in each dimension;
a first average feature obtaining module 43, configured to calculate a first average feature of the target image in each dimension through the first normalized feature;
the feature code obtaining module 44 is configured to obtain a pre-stored average value of the features in the corresponding dimension from the database, and code the first average feature according to the obtained average value to obtain a feature code of the first average feature of the target image in each dimension;
and the target user determining module 45 is configured to compare the feature codes of the target image in each dimension with the feature codes of different users in the database in the corresponding dimension, calculate the similarity between the target and the different users in the database according to the comparison result, and determine the user with the largest similarity as the target user.
Further, as an implementation of this embodiment, as shown in fig. 9, the target image acquiring module 41 includes an image acquiring unit 411, a recognizing unit 412, and a target image acquiring unit 413. The detailed function of each functional unit is described as follows:
an image acquisition unit 411 configured to acquire a plurality of images of the same target;
an identifying unit 412, configured to identify face regions of the multiple images respectively;
and the target image acquiring unit 413 is used for taking the face area of the image as the target image when the face area meets the preset requirement.
Further, as an implementation of this embodiment, the average-feature-based target retrieval device further includes a face position obtaining module, a face key positioning point obtaining module, and a correction module. The detailed functions of the functional modules are as follows:
the face position acquisition module is used for carrying out face detection on the target image to obtain the face position of the target image;
the face key positioning point acquisition module is used for obtaining a face key positioning point according to the face position;
and the correction module is used for correcting the target image according to the key positioning points of the human face.
Further, as an implementation of this embodiment, the feature code obtaining module 44 includes an average value obtaining unit, a comparison unit, and a feature code obtaining unit. The detailed functions of the functional units are as follows:
the average value acquisition unit is used for acquiring the average value of the pre-stored characteristics on the corresponding dimension from the database;
the comparison unit is used for comparing the characteristic value of each dimension of the first average characteristic with the average value of the corresponding dimension;
and the feature code acquisition unit is used for respectively coding each dimension of the first average feature according to the comparison result of the feature value of each dimension of the first average feature and the average value of the corresponding dimension so as to obtain the feature code of the first average feature of the target image in each dimension.
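The comparison and encoding performed by these units can be sketched as follows. This is a hedged illustration assuming NumPy; the 1/0 encoding convention is an assumption, since the patent specifies only that each dimension is encoded from the comparison result:

```python
import numpy as np

def encode_feature(avg_feature, dim_averages):
    """Compare each dimension of the first average feature with the stored
    per-dimension average, and encode the result as one bit per dimension
    (1 if the feature value exceeds the stored average, 0 otherwise)."""
    avg_feature = np.asarray(avg_feature, dtype=float)
    dim_averages = np.asarray(dim_averages, dtype=float)
    return (avg_feature > dim_averages).astype(np.uint8)

# Illustrative 3-dimensional feature and stored per-dimension averages
code = encode_feature([0.9, 0.1, 0.5], [0.5, 0.5, 0.5])
```

The resulting binary code has one bit per dimension, which is what makes the later Hamming-distance comparison cheap.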
Further, as an implementation of this embodiment, the target user determining module 45 includes a Hamming distance calculating unit, a similarity obtaining unit, and a target user determining unit. The detailed functions of the functional units are as follows:
the Hamming distance calculating unit is used for calculating the Hamming distance between the feature code of the target image and the feature code of each user pre-stored in the database;
the similarity obtaining unit is used for calculating the cosine distance between the feature code of the target image and the feature code of the user when the Hamming distance between the feature code of the target image and the feature code of the user is smaller than a preset threshold value, and taking the cosine distance as the similarity;
and the target user determining unit is used for determining the user with the maximum cosine distance as the target user.
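The Hamming-then-cosine retrieval described by these units can be sketched as follows (a minimal illustration assuming NumPy; the threshold value and user data are illustrative):

```python
import numpy as np

def hamming_distance(a, b):
    """Number of differing bits between two binary feature codes."""
    return int(np.count_nonzero(np.asarray(a) != np.asarray(b)))

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(target_code, user_codes, threshold):
    """For every user whose code is within the Hamming threshold, compute the
    cosine similarity of the codes; return the user with the largest value."""
    best_user, best_sim = None, -1.0
    for user, code in user_codes.items():
        if hamming_distance(target_code, code) >= threshold:
            continue  # prune: skip users whose codes differ too much
        sim = cosine_similarity(target_code, code)
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user, best_sim

users = {"user_a": [1, 0, 1, 1], "user_b": [0, 1, 0, 0]}
match, sim = retrieve([1, 0, 1, 0], users, threshold=3)
```

The cheap Hamming check filters out clearly different users before the costlier cosine computation, which is the design rationale stated in the method.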
Further, as an implementation of this embodiment, the average-feature-based target retrieval device further includes an image obtaining module, a second normalized feature obtaining module, and an average value obtaining module. The detailed functions of the functional modules are as follows:
the image acquisition module is used for acquiring all images stored in the database;
the second normalized feature acquisition module is used for respectively processing the stored images to obtain second normalized features of the stored images in all dimensions;
and the average value acquisition module is used for respectively calculating the average characteristic values of all the second normalized characteristics of all the stored images in all the dimensions, and storing the average characteristic values in all the dimensions as the average values of the characteristics in all the dimensions in the database.
For the specific definition of the average-feature-based target retrieval device, reference may be made to the definition of the average-feature-based target retrieval method above, which is not repeated here. The modules in the above device may be implemented wholly or partially in software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, the processor of the computer device, or stored in software in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
A third embodiment of the present invention provides a computer device, which may be a server, and the internal structure diagram of which may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data involved in the target retrieval method based on the average characteristics. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the average feature-based object retrieval method provided by the first embodiment of the present invention.
A fourth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing the steps of the average feature-based object retrieval method provided by the first embodiment of the present invention, such as steps 11 to 15 shown in fig. 2, steps 111 to 113 shown in fig. 3, steps 21 to 23 shown in fig. 4, steps 141 to 143 shown in fig. 5, steps 151 to 153 shown in fig. 6, and steps 31 to 33 shown in fig. 7. Alternatively, the computer program, when executed by a processor, implements the functions of the modules/units of the average feature-based object retrieval method provided by the first embodiment described above. To avoid repetition, further description is omitted here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An object retrieval method based on average features, the object retrieval method comprising:
extracting a target image from a plurality of acquired images of the same target;
processing the target image to obtain first normalized features of the target image on all dimensions;
calculating a first average characteristic of the target image in each dimension through the first normalized characteristic;
acquiring a pre-stored average value of the features on the corresponding dimension from a database, and coding the first average feature according to the acquired average value to obtain a feature code of the first average feature of the target image on each dimension;
comparing the feature codes of the target image in each dimension with the feature codes of different users in the database in the corresponding dimension, calculating the similarity between the target and the different users in the database according to the comparison result, and determining the user with the maximum similarity as the target user.
2. The object retrieval method according to claim 1, wherein the step of extracting the object image from the acquired plurality of images of the same object comprises:
acquiring a plurality of images of the same target;
respectively identifying the human face areas of the plurality of images;
and when the face area of the image meets a preset requirement, taking the face area as the target image.
3. The object retrieval method of claim 1, further comprising:
carrying out face detection on the target image to obtain the face position of the target image;
obtaining a key positioning point of the face according to the face position;
and correcting the target image according to the human face key positioning point.
4. The target retrieval method of claim 1, wherein the step of obtaining the pre-stored average value of the features in the corresponding dimension from the database, and coding the first average feature according to the obtained average value to obtain the feature code of the first average feature of the target image in each dimension comprises:
acquiring the average value of the pre-stored characteristics on the corresponding dimension from the database;
comparing the feature values of the first average feature in each dimension with the average values in the corresponding dimension;
and respectively coding each dimension of the first average feature according to the comparison result of the feature value of each dimension of the first average feature and the average value of the corresponding dimension to obtain the feature code of the first average feature of the target image in each dimension.
5. The target retrieval method of claim 1, wherein the step of comparing the feature codes of the target image in each dimension with feature codes of different users in a database in a corresponding dimension, calculating the similarity between the target and the different users in the database according to the comparison result, and determining the user with the maximum similarity as the target user comprises:
calculating the Hamming distance between the feature code of the target image and the feature code of each user pre-stored in the database;
when the hamming distance between the feature code of the target image and the feature code of the user is calculated to be smaller than a preset threshold value, calculating the cosine distance between the feature code of the target image and the feature code of the user, and taking the cosine distance as the similarity;
and determining the user with the maximum cosine distance as the target user.
6. The object retrieval method of claim 1, wherein obtaining the average values of the features in the respective dimensions pre-stored in the database comprises:
obtaining all images stored in the database;
respectively processing the stored images to obtain second normalized features of the stored images in each dimension;
and respectively calculating the average characteristic value of all the second normalized characteristics of all the stored images in each dimension, and storing the average characteristic value in each dimension as the average value of the characteristics in each dimension in the database.
7. An object retrieval device based on average features, characterized by comprising:
the target image acquisition module is used for extracting a target image from a plurality of acquired images of the same target;
the first normalized feature acquisition module is used for processing the target image to obtain first normalized features of the target image in each dimension;
the first average feature acquisition module is used for calculating first average features of the target image in all dimensions through the first normalized features;
the feature code acquisition module is used for acquiring a pre-stored average value of the features on the corresponding dimension from a database, and encoding the first average feature according to the acquired average value to obtain a feature code of the first average feature of the target image on each dimension;
and the target user determination module is used for comparing the feature codes of the target image in each dimension with the feature codes of different users in the corresponding dimension in the database, calculating the similarity between the target and the different users in the database according to the comparison result, and determining the user with the maximum similarity as the target user.
8. The object retrieval device according to claim 7, wherein the object image acquisition module includes:
the image acquisition unit is used for acquiring a plurality of images of the same target;
the recognition unit is used for respectively recognizing the face areas of the plurality of images;
and the target image acquisition unit is used for taking the face area of the image as the target image when the face area meets the preset requirement.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor when executing the computer program implements the steps of the target retrieval method based on average features according to any of claims 1 to 6.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the average feature based object retrieval method according to any one of claims 1 to 6.
CN201911281962.8A 2019-12-11 2019-12-11 Target retrieval method and device based on average characteristics and related equipment thereof Pending CN111198963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911281962.8A CN111198963A (en) 2019-12-11 2019-12-11 Target retrieval method and device based on average characteristics and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911281962.8A CN111198963A (en) 2019-12-11 2019-12-11 Target retrieval method and device based on average characteristics and related equipment thereof

Publications (1)

Publication Number Publication Date
CN111198963A true CN111198963A (en) 2020-05-26

Family

ID=70744250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911281962.8A Pending CN111198963A (en) 2019-12-11 2019-12-11 Target retrieval method and device based on average characteristics and related equipment thereof

Country Status (1)

Country Link
CN (1) CN111198963A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084939A (en) * 2020-09-08 2020-12-15 深圳市润腾智慧科技有限公司 Image feature data management method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985232A (en) * 2018-07-18 2018-12-11 平安科技(深圳)有限公司 Facial image comparison method, device, computer equipment and storage medium
CN110298249A (en) * 2019-05-29 2019-10-01 平安科技(深圳)有限公司 Face identification method, device, terminal and storage medium
WO2019192121A1 (en) * 2018-04-04 2019-10-10 平安科技(深圳)有限公司 Dual-channel neural network model training and human face comparison method, and terminal and medium
CN110399799A (en) * 2019-06-26 2019-11-01 北京迈格威科技有限公司 Image recognition and the training method of neural network model, device and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Pei et al., "Image retrieval algorithm based on dictionary statistics coupled with normalized multiple distances", Journal of Southwest China Normal University (Natural Science Edition) *


Similar Documents

Publication Publication Date Title
US10635890B2 (en) Facial recognition method and apparatus, electronic device, and storage medium
WO2021004112A1 (en) Anomalous face detection method, anomaly identification method, device, apparatus, and medium
CN109325964B (en) Face tracking method and device and terminal
CN105893920B (en) Face living body detection method and device
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
CN110489951B (en) Risk identification method and device, computer equipment and storage medium
CN109376604B (en) Age identification method and device based on human body posture
CN111144366A (en) Strange face clustering method based on joint face quality assessment
CN110866466A (en) Face recognition method, face recognition device, storage medium and server
CN110069989B (en) Face image processing method and device and computer readable storage medium
CN111159476B (en) Target object searching method and device, computer equipment and storage medium
CN113646806A (en) Image processing apparatus, image processing method, and recording medium storing program
CN111339897A (en) Living body identification method, living body identification device, computer equipment and storage medium
CN111079587B (en) Face recognition method and device, computer equipment and readable storage medium
CN114359553A (en) Signature positioning method and system based on Internet of things and storage medium
CN111985454A (en) Face recognition method, device, equipment and computer readable storage medium
CN114925348A (en) Security verification method and system based on fingerprint identification
KR100847142B1 (en) Preprocessing method for face recognition, face recognition method and apparatus using the same
CN111198963A (en) Target retrieval method and device based on average characteristics and related equipment thereof
CN112861742A (en) Face recognition method and device, electronic equipment and storage medium
CN107944429B (en) Face recognition method and device and mobile terminal used by same
CN115424001A (en) Scene similarity estimation method and device, computer equipment and storage medium
CN114973368A (en) Face recognition method, device, equipment and storage medium based on feature fusion
KR20160042646A (en) Method of Recognizing Faces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200526