CN110659541A - Image recognition method, device and storage medium

Image recognition method, device and storage medium

Info

Publication number
CN110659541A
CN110659541A (application CN201810700192.5A)
Authority
CN
China
Prior art keywords
image
images
feature vector
target
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810700192.5A
Other languages
Chinese (zh)
Inventor
吴伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201810700192.5A priority Critical patent/CN110659541A/en
Publication of CN110659541A publication Critical patent/CN110659541A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Abstract

The embodiments of the present application disclose an image recognition method, an image recognition apparatus and a storage medium. The method includes: acquiring a feature vector of each of N images to obtain N feature vectors, where the N images are different images of the same object and N is an integer greater than 1; synthesizing the N feature vectors to obtain a target feature vector; and determining the images in an image library that correspond to the N images according to the target feature vector and the feature vectors of the images in the image library. Implementing the embodiments of the present application can improve the efficiency and accuracy of image recognition.

Description

Image recognition method, device and storage medium
Technical Field
The present application relates to the field of information technology, and in particular, to an image recognition method, an image recognition apparatus, and a storage medium.
Background
In the field of face image recognition, an input face image to be recognized can be compared with the face images in an image library to find a matching face image in the library.
However, the face image to be recognized may differ from the stored face image of the same person in pose, illumination, age, occlusion, clothing and so on, so the image of that person in the library may not match the image to be recognized well, which reduces the accuracy of face image recognition. When recognition fails, a face image has to be input again for another round of recognition, which reduces the efficiency of image recognition.
Disclosure of Invention
Based on this, in order to improve efficiency and accuracy of image recognition, embodiments of the present application provide an image recognition method, an image recognition apparatus, and a storage medium.
In a first aspect, an embodiment of the present application provides an image recognition method, including:
acquiring a feature vector of each of N images to obtain N feature vectors, where the N images are different images of the same object and N is an integer greater than 1;
synthesizing the N feature vectors to obtain a target feature vector;
and determining the images in an image library that correspond to the N images according to the target feature vector and the feature vectors of the images in the image library.
In a second aspect, an embodiment of the present application provides an image recognition apparatus, including:
an acquiring unit, configured to acquire a feature vector of each of N images to obtain N feature vectors, where the N images are different images of the same object and N is an integer greater than 1;
a synthesizing unit, configured to synthesize the N feature vectors to obtain a target feature vector;
and a determining unit, configured to determine the images in an image library that correspond to the N images according to the target feature vector and the feature vectors of the images in the image library.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
In the embodiments of the present application, for the same object, multiple feature vectors corresponding to multiple different images can be synthesized into a target feature vector that carries the combined information of those images, and the images in the image library that correspond to the multiple images can be determined by comparing only this target feature vector with the feature vectors of the images in the library. A search covering the combined information of multiple images is thus completed with a single comparison against the library, without increasing the search time, which improves the efficiency of image recognition. In addition, because the target feature vector contains the combined information of multiple different images, the accuracy of image recognition is also improved.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image recognition method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of another image recognition apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the field of image recognition, multiple images of the same object can be used to improve recognition accuracy. For example, in face image recognition, the image library may be searched for candidate matches for each of several face images of the same person, and the intersection of the candidate matches found for those images is then taken as the face image recognition result.
However, in this multi-image approach, each face image has to be searched against the image library to obtain its candidate matches, so the search time grows linearly with the number of images, which reduces the efficiency of face image recognition. In addition, taking the intersection of the candidate matches found for the individual face images as the recognition result may discard library images that actually match, which reduces the accuracy of face image recognition.
The embodiment of the application provides an image identification method, an image identification device and a storage medium, which can improve the efficiency and accuracy of image identification.
The image recognition method provided by the embodiments of the present application can be applied to face image recognition scenarios such as face unlocking and face identity authentication, and can also be applied to image search scenarios. It should be understood that these scenarios are only used to explain the embodiments of the present application and should not be construed as limiting; the image recognition method provided by the embodiments of the present application may also be applied to other scenarios, which are not limited by the embodiments of the present application.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image recognition method according to an embodiment of the present disclosure, and as shown in fig. 1, the image recognition method includes, but is not limited to, steps S101 to S103.
S101, the image recognition device obtains the feature vector of each image in the N images to obtain N feature vectors.
The N images are different images of the same object, and N is an integer greater than 1. In the embodiments of the present application, the recognized object may be a human face or another object with a fixed form, such as an automobile, a bird or a table.
In the embodiments of the present application, a single image may be affected by factors such as the object's pose, occlusion, accessories and brightness, so N different images of the same object may be selected in order to capture more image information about the object. The N images may include images of the object at different poses, with different occlusions, from different angles, with different accessories and at different brightness levels.
For example, in a face image recognition scenario, N face images of the same person may be selected. To extract more facial feature information about that person, the N face images may include images taken from different shooting angles, images with different facial expressions, images in which the person wears different accessories and clothing, images taken under different illumination intensities, and images of the person at different ages, and so on. The N face images therefore contain more facial feature information about the person, which improves the accuracy of face image recognition.
As another example, in a face unlocking scenario, multiple face images of a user can be captured continuously by a camera over a period of time, and N images covering different poses, occlusions, angles, accessories and brightness levels can be selected so as to obtain more facial feature information.
In the embodiments of the present application, extracting the feature vector of an image may be implemented with a feature extraction model. The feature vector of an image can be used to represent that image and contains its image features. The feature extraction model may be a machine learning model or a traditional image recognition algorithm: the machine learning model may be, for example, a deep learning algorithm based on an artificial neural network, and the traditional algorithm may be, for example, the Deformable Part Model (DPM) algorithm. The embodiments of the present application do not limit the specific algorithm used to extract the feature vector of an image.
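As an illustration of step S101, the following minimal Python sketch stacks the per-image feature vectors into a single array; the callable `embedder` is a hypothetical stand-in for whichever feature extraction model is used, since the embodiments do not mandate a particular one.

```python
import numpy as np

def extract_features(embedder, images):
    """Step S101: map N images of the same object to an N x D array of feature vectors."""
    # `embedder` is any callable mapping one image to a D-dimensional vector,
    # e.g. a deep face-embedding network or a classical descriptor extractor.
    return np.stack([np.asarray(embedder(img), dtype=np.float32) for img in images])
```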
S102, synthesizing the N characteristic vectors by the image recognition device to obtain a target characteristic vector.
In the embodiment of the present application, the synthesized target feature vector includes all feature information of the N feature vectors.
The method for synthesizing the N feature vectors to obtain the target feature vector may include multiple methods, which are described below.
(1) The N feature vectors may be weighted and summed to obtain the target feature vector.
The weight of the i-th feature vector is determined by at least one of the following: the shooting angle of the object in image k, the definition (sharpness) of image k, and the brightness of image k, where the i-th feature vector is any one of the N feature vectors and image k is the image among the N images that corresponds to the i-th feature vector.
In this synthesis method, the weight w of the i-th feature vector is negatively correlated with the shooting angle of image k. The shooting angle of image k may be the angle between the shooting direction of image k and the shooting direction of a front view of the object; that is, the more the shot in image k deviates from a front view of the object, the smaller the weight w of the corresponding feature vector. For example, in a face recognition scenario, the more the shooting angle of the face in image k deviates from the front (i.e., the larger the shooting angle), the smaller the weight w of the corresponding feature vector. A more frontal shooting angle (a smaller angle) means the image contains more feature information about the object, while a more deviated angle means it contains less; therefore, giving larger weights to more frontal images and smaller weights to more deviated images increases the amount of feature information carried by the synthesized target feature vector and improves the accuracy of image recognition.
In this synthesis method, the weight w of the i-th feature vector may be positively correlated with the definition of image k; that is, the sharper image k is, the larger the weight w of the corresponding feature vector. A sharper image contains more feature information about the object and a blurrier image contains less, so increasing the weight of sharper images and decreasing the weight of blurrier ones increases the amount of feature information carried by the synthesized target feature vector and improves the accuracy of image recognition.
In this synthesis method, the weight w of the i-th feature vector may be negatively correlated with the brightness deviation of image k, where the brightness deviation of image k is the deviation between the brightness of image k and a preset standard shooting brightness. The weight w of the i-th feature vector reaches its maximum when the brightness of image k equals the standard shooting brightness and decreases as the brightness rises above or falls below that value. An image shot at the standard brightness contains the most feature information about the object, so giving the largest weight to the feature vector of such an image increases the amount of feature information carried by the synthesized target feature vector and improves the accuracy of image recognition.
In the embodiments of the present application, the weight w of the feature vector of an image may be determined from at least one of the shooting angle of image k, the definition of image k and the brightness deviation of image k by fitting (for example, linear fitting or least-squares fitting) or by a mapping table, and it may also be determined from empirical values or in other ways. The embodiments of the present application do not limit the specific way of determining the weight w of the feature vector of an image.
The weights of the N feature vectors corresponding to the N images may be normalized so that they sum to one, that is,

w_1 + w_2 + ... + w_N = 1,

where w_l is the weight of the feature vector of the l-th image and l is an integer satisfying 1 ≤ l ≤ N.
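The following Python sketch illustrates synthesis method (1) under assumed weighting formulas: the embodiments only require that the weight decrease with shooting angle and brightness deviation and increase with definition, so the particular expression in `compute_weights`, the `standard_brightness` value of 128 and the function names are placeholders chosen for illustration.

```python
import numpy as np

def compute_weights(angles_deg, sharpness, brightness, standard_brightness=128.0):
    """Per-image weights: a smaller shooting angle, higher sharpness and brightness
    closer to the standard value all yield a larger weight (placeholder formula)."""
    angles = np.asarray(angles_deg, dtype=np.float32)
    sharp = np.asarray(sharpness, dtype=np.float32)
    bright_dev = np.abs(np.asarray(brightness, dtype=np.float32) - standard_brightness)
    w = sharp / ((1.0 + angles) * (1.0 + bright_dev))   # monotonic in each factor
    return w / w.sum()                                   # normalize so the weights sum to 1

def weighted_target_vector(features, weights):
    """Weighted sum of the N feature vectors (features: N x D array, weights sum to 1)."""
    return np.average(features, axis=0, weights=weights)
```

A fitted function or a mapping table, as described above, could replace the placeholder formula without changing the rest of the flow.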
(2) Taking the mean vector of the N feature vectors as a target feature vector
In the embodiments of the present application, the mean vector of the N feature vectors obtained from the N images may be used directly as the target feature vector. Because different images of the same object are highly similar, their feature vectors lie close together, distributed in the neighborhood of a single point on the surface of a D-dimensional hypersphere, where D is the dimension of the feature vectors. The mean vector of the N feature vectors is therefore approximately the center of that neighborhood and can be used to represent the image features of the multiple images of the object.
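A minimal sketch of synthesis method (2), assuming the N feature vectors are stacked into an N x D NumPy array as in the earlier sketch:

```python
import numpy as np

def mean_target_vector(features: np.ndarray) -> np.ndarray:
    """Method (2): use the mean of the N feature vectors (N x D array) as the target vector."""
    return features.mean(axis=0)
```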
It should be understood that the above examples of synthesizing the N feature vectors into the target feature vector are only intended to explain the embodiments of the present application and should not be construed as limiting; other methods of synthesizing the N feature vectors into the target feature vector may also be used, and the embodiments of the present application do not limit them.
S103, the image recognition device determines the corresponding images of the N images in the image library according to the target feature vector and the feature vectors of the images in the image library.
In the embodiments of the present application, before step S103, the target feature vector may also be normalized to obtain a normalized target feature vector, and the target feature vector used in step S103 is then this normalized vector. The N feature vectors obtained from the N images may likewise be normalized feature vectors. The embodiments of the present application do not limit the specific normalization algorithm, which may be, for example, L2 normalization, L1 normalization or another normalization.
Feature vectors corresponding to different features often have different scales and units, which affects the result of data analysis. Normalizing the feature vectors before synthesizing them eliminates the effect of these differences in scale and unit. Specifically, after normalization the feature vectors are on the same order of magnitude and become more comparable, so different feature vectors can be combined and evaluated together to obtain the target feature vector. This improves the utilization of the feature vectors and the accuracy of synthesizing them into the target feature vector, and thus the accuracy of image recognition.
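The following sketch shows L2 normalization, one of the normalization choices mentioned above; L1 or another norm could be substituted in the same place.

```python
import numpy as np

def l2_normalize(vec, eps=1e-12):
    """Scale a feature vector to unit Euclidean (L2) length."""
    vec = np.asarray(vec, dtype=np.float32)
    return vec / max(np.linalg.norm(vec), eps)
```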
In the embodiments of the present application, the image recognition apparatus may determine the images in the image library that correspond to the N images as follows: calculate the similarity between the target feature vector and the feature vector of each image in the image library, and determine the corresponding images according to those similarities.
The embodiments of the present application do not limit the specific algorithm used to calculate the similarity between two feature vectors. For example, the cosine distance between the target feature vector and the feature vector of each image in the library may be calculated: the larger the cosine distance between two feature vectors, the smaller their similarity, and the smaller the cosine distance, the larger the similarity. As another example, the Manhattan distance may be calculated, where a larger Manhattan distance between two feature vectors indicates a smaller similarity and a smaller one indicates a larger similarity.
In the embodiments of the present application, the similarity between the target feature vector and the feature vector of each image in the image library may be calculated, yielding as many similarity values as there are images in the library. These similarities may be ranked, and the image or images corresponding to the one or more feature vectors most similar to the target feature vector are selected as the images in the library that correspond to the N images of the same object. Whether one image or several are selected from the library can be set according to requirements, which the embodiments of the present application do not limit.
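As an illustration of step S103 with cosine similarity, the sketch below ranks library images against the (normalized) target feature vector and returns the top-k matches; the library layout (an M x D array of feature vectors with a parallel list of image identifiers) is an assumption made for illustration.

```python
import numpy as np

def top_k_matches(target, library_vecs, image_ids, k=1):
    """Rank library images by cosine similarity to the target feature vector
    and return the identifiers and similarities of the k best matches."""
    lib = library_vecs / np.linalg.norm(library_vecs, axis=1, keepdims=True)
    tgt = target / np.linalg.norm(target)
    sims = lib @ tgt                       # cosine similarity with each library image
    order = np.argsort(-sims)[:k]          # indices sorted by decreasing similarity
    return [(image_ids[i], float(sims[i])) for i in order]
```

Because the vectors are normalized first, the dot product equals the cosine similarity; for very large libraries an approximate nearest-neighbor index could replace this brute-force comparison.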
After the images in the image library that correspond to the N images of the same object have been identified, subsequent operations can be performed based on them, since the image recognition apparatus already knows the information associated with the images in the library. For example, in a face unlocking scenario, once the N input face images of the same person have been matched to an image in the image library, the person is determined to have unlocking authority and the image recognition apparatus performs the unlocking operation. As another example, in a face identity authentication scenario, once the N input face images of the same person have been matched to an image in the image library, the person's identity authentication passes, and the image recognition apparatus may then obtain the identity information associated with the matched image, such as a contact phone number or an ID card number.
In the image recognition method described above, for the same object, multiple feature vectors corresponding to multiple different images are synthesized into a target feature vector that carries the combined information of those images, and the images in the image library that correspond to the multiple images are determined by comparing only this target feature vector with the feature vectors of the images in the library. A search covering the combined information of multiple images is thus completed with a single comparison against the library, without increasing the search time, which improves the efficiency of image recognition. In addition, because the target feature vector contains the combined information of multiple different images, the accuracy of image recognition is also improved.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application. As shown in fig. 2, the image recognition apparatus includes an acquiring unit 201, a synthesizing unit 202 and a determining unit 203, where:
the acquiring unit 201 is configured to acquire a feature vector of each of N images to obtain N feature vectors, where the N images are different images of the same object and N is an integer greater than 1;
the synthesizing unit 202 is configured to synthesize the N feature vectors to obtain a target feature vector;
and the determining unit 203 is configured to determine the images in the image library that correspond to the N images according to the target feature vector and the feature vectors of the images in the image library.
In a possible implementation, the synthesizing unit 202 is specifically configured to weight and sum the N feature vectors to obtain the target feature vector;
where the weight of the i-th feature vector is determined by at least one of the following: the shooting angle of image k, the definition of image k and the brightness of image k; the i-th feature vector is any one of the N feature vectors; and image k is the image among the N images that corresponds to the i-th feature vector.
In a possible implementation, the weight of the i-th feature vector is negatively correlated with the shooting angle of image k, where the shooting angle of image k is the angle between the shooting direction of image k and the shooting direction of a front view of the object; the weight of the i-th feature vector is positively correlated with the definition of image k; and the weight of the i-th feature vector is negatively correlated with the brightness deviation of image k, where the brightness deviation of image k is the deviation between the brightness of image k and a preset standard shooting brightness.
In a possible implementation, the synthesis unit 202 is specifically configured to use a mean vector of the N feature vectors as the target feature vector.
In a possible implementation manner, the image recognition apparatus further includes a normalization unit 204, configured to normalize the target feature vector to obtain a normalized target feature vector;
the determining unit 203 is specifically configured to determine, according to the normalized target feature vector and the feature vectors of the images in the image library, corresponding images of the N images in the image library.
In a possible implementation, the determining unit 203 is specifically configured to calculate the similarity between the target feature vector and the feature vector of each image in the image library, and to determine the images in the image library that correspond to the N images according to those similarities.
For specific functions of each unit described in fig. 2, reference may be made to the embodiment described in fig. 1, which is not described herein again.
Referring to fig. 3, fig. 3 is a schematic structural diagram of another image recognition apparatus according to an embodiment of the present disclosure. As shown in fig. 3, the image recognition apparatus in the present embodiment may include: one or more processors 301; one or more input devices 302, one or more output devices 303, and memory 304. The processor 301, the input device 302, the output device 303, and the memory 304 are connected by a bus 305. Wherein:
the memory 304 includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), and the memory 304 is used for storing related instructions and data. The memory 304 may also be used to store an image library 10, which may contain a plurality of images. The memory 304 may also be used to store a feature extraction model 11 for the processor 301 to call to obtain a feature vector for each of the N images.
The processor 301 may be one or more Central Processing Units (CPUs), and in the case that the processor 301 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
An input device 302 for receiving N images of the same object.
A processor 301 for calling program instructions stored in the memory 304 to perform the following operations:
acquiring a feature vector of each of the N images to obtain N feature vectors, where the N images are different images of the same object and N is an integer greater than 1;
synthesizing the N feature vectors to obtain a target feature vector;
and determining the images in the image library that correspond to the N images according to the target feature vector and the feature vectors of the images in the image library.
As a possible implementation, the processor 301 is specifically configured to invoke program instructions stored in the memory 304 to perform the following operations:
weighting and summing the N feature vectors to obtain the target feature vector;
where the weight of the i-th feature vector is determined by at least one of the following: the shooting angle of image k, the definition of image k and the brightness of image k; the i-th feature vector is any one of the N feature vectors; and image k is the image among the N images that corresponds to the i-th feature vector.
As a possible implementation, the weight of the i-th feature vector is negatively correlated with the shooting angle of image k, where the shooting angle of image k is the angle between the shooting direction of image k and the shooting direction of a front view of the object; the weight of the i-th feature vector is positively correlated with the definition of image k; and the weight of the i-th feature vector is negatively correlated with the brightness deviation of image k, where the brightness deviation of image k is the deviation between the brightness of image k and a preset standard shooting brightness.
As a possible implementation, the processor 301 is specifically configured to invoke program instructions stored in the memory 304 to perform the following operation: taking the mean vector of the N feature vectors as the target feature vector.
As a possible implementation, the processor 301 is further configured to invoke program instructions stored in the memory 304 to perform the following operations:
normalizing the target feature vector to obtain a normalized target feature vector;
the processor 301 is specifically configured to invoke program instructions stored in the memory 304 to perform the following operations:
determining the images in the image library that correspond to the N images according to the normalized target feature vector and the feature vectors of the images in the image library.
As a possible implementation, the processor 301 is specifically configured to invoke program instructions stored in the memory 304 to perform the following operations:
calculating the similarity between the target feature vector and the feature vector of each image in the image library;
and determining the images in the image library that correspond to the N images according to the similarity between the target feature vector and the feature vector of each image in the image library.
In another embodiment of the present application, a computer-readable storage medium is provided, which stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the image recognition method described in fig. 1.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part of it contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image recognition method, comprising:
acquiring a feature vector of each of N images to obtain N feature vectors, where the N images are different images of the same object and N is an integer greater than 1;
synthesizing the N feature vectors to obtain a target feature vector;
and determining the images in an image library that correspond to the N images according to the target feature vector and the feature vectors of the images in the image library.
2. The method of claim 1, wherein the synthesizing the N feature vectors to obtain the target feature vector comprises:
weighting and summing the N feature vectors to obtain the target feature vector, wherein the weight of the i-th feature vector is determined by at least one of the following: the shooting angle of image k, the definition of image k and the brightness of image k; the i-th feature vector is any one of the N feature vectors; and image k is the image among the N images that corresponds to the i-th feature vector.
3. The method according to claim 2, wherein the weight of the i-th feature vector is negatively correlated with the shooting angle of image k, the shooting angle of image k being the angle between the shooting direction of image k and the shooting direction of a front view of the object;
the weight of the i-th feature vector is positively correlated with the definition of image k;
and the weight of the i-th feature vector is negatively correlated with the brightness deviation of image k, the brightness deviation of image k being the deviation between the brightness of image k and a preset standard shooting brightness.
4. The method according to any one of claims 1 to 3, wherein after synthesizing the N feature vectors to obtain the target feature vector, the method further comprises:
normalizing the target feature vector to obtain a normalized target feature vector;
determining the images corresponding to the N images in the image library according to the target feature vector and the feature vectors of the images in the image library, wherein the determining comprises the following steps:
and determining the corresponding images of the N images in the image library according to the normalized target characteristic vector and the characteristic vectors of the images in the image library.
5. The method according to any one of claims 1 to 3, wherein the determining the corresponding images of the N images in the image library according to the target feature vector and the feature vectors of the images in the image library comprises:
calculating the similarity between the target feature vector and the feature vector of each image in the image library;
and determining the images in the image library that correspond to the N images according to the similarity between the target feature vector and the feature vector of each image in the image library.
6. An image recognition apparatus, comprising:
an acquiring unit, configured to acquire a feature vector of each of N images to obtain N feature vectors, where the N images are different images of the same object and N is an integer greater than 1;
a synthesizing unit, configured to synthesize the N feature vectors to obtain a target feature vector;
and a determining unit, configured to determine the images in the image library that correspond to the N images according to the target feature vector and the feature vectors of the images in the image library.
7. The apparatus according to claim 6, wherein the synthesizing unit is specifically configured to weight and sum the N feature vectors to obtain the target feature vector; wherein the weight of the i-th feature vector is determined by at least one of the following: the shooting angle of image k, the definition of image k and the brightness of image k; the i-th feature vector is any one of the N feature vectors; and image k is the image among the N images that corresponds to the i-th feature vector.
8. The apparatus according to claim 7, wherein the weight of the i-th feature vector is negatively correlated with the shooting angle of image k, the shooting angle of image k being the angle between the shooting direction of image k and the shooting direction of a front view of the object;
the weight of the i-th feature vector is positively correlated with the definition of image k;
and the weight of the i-th feature vector is negatively correlated with the brightness deviation of image k, the brightness deviation of image k being the deviation between the brightness of image k and a preset standard shooting brightness.
9. The apparatus according to any one of claims 6 to 8, further comprising a normalization unit, configured to normalize the target feature vector to obtain a normalized target feature vector;
the determining unit is specifically configured to determine, according to the normalized target feature vector and feature vectors of images in an image library, corresponding images of the N images in the image library.
10. The apparatus according to any one of claims 6 to 8, wherein the determining unit is specifically configured to calculate the similarity between the target feature vector and the feature vector of each image in the image library, and to determine the images in the image library that correspond to the N images according to the similarity between the target feature vector and the feature vector of each image in the image library.
CN201810700192.5A 2018-06-29 2018-06-29 Image recognition method, device and storage medium Pending CN110659541A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810700192.5A CN110659541A (en) 2018-06-29 2018-06-29 Image recognition method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810700192.5A CN110659541A (en) 2018-06-29 2018-06-29 Image recognition method, device and storage medium

Publications (1)

Publication Number Publication Date
CN110659541A true CN110659541A (en) 2020-01-07

Family

ID=69026955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810700192.5A Pending CN110659541A (en) 2018-06-29 2018-06-29 Image recognition method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110659541A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460419A (en) * 2020-03-31 2020-07-28 周亚琴 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN112132060A (en) * 2020-09-25 2020-12-25 广州市派客朴食信息科技有限责任公司 Method for intelligently identifying and settling food

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426853A (en) * 2015-11-24 2016-03-23 成都四象联创科技有限公司 Human body characteristic identification method based on image
CN105590089A (en) * 2015-10-22 2016-05-18 广州视源电子科技股份有限公司 Face identification method and device
CN105678278A (en) * 2016-02-01 2016-06-15 国家电网公司 Scene recognition method based on single-hidden-layer neural network
CN106372666A (en) * 2016-08-31 2017-02-01 同观科技(深圳)有限公司 Target identification method and device
CN107506738A (en) * 2017-08-30 2017-12-22 深圳云天励飞技术有限公司 Feature extracting method, image-recognizing method, device and electronic equipment
CN107590474A (en) * 2017-09-21 2018-01-16 广东欧珀移动通信有限公司 Solve lock control method and Related product
CN107958244A (en) * 2018-01-12 2018-04-24 成都视观天下科技有限公司 A kind of face identification method and device based on the fusion of video multiframe face characteristic

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590089A (en) * 2015-10-22 2016-05-18 广州视源电子科技股份有限公司 Face identification method and device
CN105426853A (en) * 2015-11-24 2016-03-23 成都四象联创科技有限公司 Human body characteristic identification method based on image
CN105678278A (en) * 2016-02-01 2016-06-15 国家电网公司 Scene recognition method based on single-hidden-layer neural network
CN106372666A (en) * 2016-08-31 2017-02-01 同观科技(深圳)有限公司 Target identification method and device
CN107506738A (en) * 2017-08-30 2017-12-22 深圳云天励飞技术有限公司 Feature extracting method, image-recognizing method, device and electronic equipment
CN107590474A (en) * 2017-09-21 2018-01-16 广东欧珀移动通信有限公司 Solve lock control method and Related product
CN107958244A (en) * 2018-01-12 2018-04-24 成都视观天下科技有限公司 A kind of face identification method and device based on the fusion of video multiframe face characteristic

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiaolong Yang, et al.: "Neural Aggregation Network for Video Face Recognition", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460419A (en) * 2020-03-31 2020-07-28 周亚琴 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN111460419B (en) * 2020-03-31 2020-11-27 深圳市微网力合信息技术有限公司 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN112132060A (en) * 2020-09-25 2020-12-25 广州市派客朴食信息科技有限责任公司 Method for intelligently identifying and settling food

Similar Documents

Publication Publication Date Title
US10635890B2 (en) Facial recognition method and apparatus, electronic device, and storage medium
CN107330408B (en) Video processing method and device, electronic equipment and storage medium
US20180204094A1 (en) Image recognition method and apparatus
US8811726B2 (en) Method and system for localizing parts of an object in an image for computer vision applications
Shao et al. HPAT indexing for fast object/scene recognition based on local appearance
CN109284675B (en) User identification method, device and equipment
CN110428399B (en) Method, apparatus, device and storage medium for detecting image
CN109684969B (en) Gaze position estimation method, computer device, and storage medium
CN108875487B (en) Training of pedestrian re-recognition network and pedestrian re-recognition based on training
WO2015017439A1 (en) Method and system for searching images
JP2016062253A (en) Object identification apparatus, object identification method, and program
JP7107598B2 (en) Authentication face image candidate determination device, authentication face image candidate determination method, program, and recording medium
AU2019200711A1 (en) Biometric verification
Findling et al. Mobile match-on-card authentication using offline-simplified models with gait and face biometrics
KR101174048B1 (en) Apparatus for recognizing a subject and method using thereof
Houcine et al. Ear recognition based on multi-bags-of-features histogram
CN111626340A (en) Classification method, classification device, terminal and computer storage medium
CN110659541A (en) Image recognition method, device and storage medium
CN112329660A (en) Scene recognition method and device, intelligent equipment and storage medium
Grati et al. Learning local representations for scalable RGB-D face recognition
EP3371739A1 (en) High speed reference point independent database filtering for fingerprint identification
CN111444373B (en) Image retrieval method, device, medium and system thereof
US20060280344A1 (en) Illumination normalizing apparatus, method, and medium and face recognition apparatus, method, and medium using the illumination normalizing apparatus, method, and medium
EP3617993A1 (en) Collation device, collation method and collation program
CN111881789A (en) Skin color identification method and device, computing equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200107