CN111104823A - Face recognition method and device, storage medium and terminal equipment - Google Patents


Info

Publication number
CN111104823A
CN111104823A (application CN201811253432.8A)
Authority
CN
China
Prior art keywords: user, image, similar, feature vector, face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811253432.8A
Other languages
Chinese (zh)
Inventor
陈杰 (Chen Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201811253432.8A priority Critical patent/CN111104823A/en
Publication of CN111104823A publication Critical patent/CN111104823A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; Face representation
    • G06V40/172 — Classification, e.g. identification

Abstract

The application provides a face recognition method comprising the following steps: acquiring a target image that includes face information of a current user, and extracting a target feature vector from the target image; correcting the target feature vector according to a preset feature correction algorithm to obtain a feature vector to be compared; and determining a similar face image based on the feature vector to be compared. With this method, a face image similar to the user's can be obtained: the facial-feature contours and their relative positions are extracted from the face image so that the resulting image closely resembles the user's face, those contours and positions are then corrected, and the similar face image is generated from the corrected features. The image obtained in this way is more attractive, which further improves the user's confidence in it. The method can also expand the user's circle of friends and increase the user's interest in recommended users.

Description

Face recognition method and device, storage medium and terminal equipment
Technical Field
The present application relates to the field of software and image processing technologies, and in particular to a face recognition method and apparatus, a storage medium, and a terminal device.
Background
With the rapid development of the mobile internet, emerging social media platforms such as WeChat and Weibo have deeply influenced and changed the way people work and live. User numbers are growing rapidly and user data is generated in massive volumes, providing effective data sources and application scenarios for related research.
At present, people commonly upload their own photos to an application and use a preset makeup program in the application to touch up their appearance in the image. Conventionally, however, only the user's face is extracted and pasted directly onto a fixed position in another made-up image; the resulting image matches the user's face but is not attractive, which reduces the user's confidence in his or her own appearance. People also share their photos with friends through social media software, so photos are frequently posted in friend circles, yet users often do not notice that they bear a certain resemblance to friends in their circle; conventionally, similarity between friends can only be judged by individual subjective impressions. Meanwhile, people's friends usually come from their work circle and it is difficult to meet people outside it, so existing ways of socializing are limited. Conventional friend-recommendation methods recommend directly on the basis of a user's personal information, but this too has a drawback: an introduction built only from personal information cannot arouse the user's interest in the recommended user, so the user's social contact remains limited.
Disclosure of Invention
To solve the above problems, the present application provides a face recognition method and apparatus, a storage medium, and a terminal device. By correcting the contours and relative positions of the user's facial features in the image, a face image similar to the user's is obtained from the corrected image; the image obtained in this way is more attractive, which further improves the user's confidence in it.
In order to achieve the above object, the following technical solutions are adopted in the present application:
The face recognition method comprises the following steps:
Acquiring a target image comprising face information of a current user, and extracting a target characteristic vector in the target image;
correcting the target feature vector according to a preset feature correction algorithm to obtain a feature vector to be compared;
and determining a similar face image based on the feature vector to be compared.
Optionally, after determining a similar face image based on the feature vectors to be compared, the method includes:
acquiring user information associated with the similar face image according to the similar face image;
and sending the similar face image and the associated user information to the user.
Optionally, the obtaining, according to the similar face image, user information associated with the similar face image includes:
acquiring a face image to be determined in a database, and determining the similarity between the face image to be determined and the similar face image based on an SVM (support vector machine) classifier;
determining the face image to be determined with the similarity within a preset similarity threshold as a similar user image;
the user information associated with the similar user image is obtained.
Optionally, the modifying the target feature vector according to a preset feature modification algorithm includes:
and correcting the target feature vector based on a convolutional neural network.
Optionally, the modifying the target feature vector according to a preset feature modification algorithm to obtain a feature vector to be compared includes:
acquiring a sample image of a preset age span of a sample user based on blockchain technology;
training the convolutional neural network by using the sample image of the same user, and determining the target feature vector correction coefficient;
and correcting the target characteristic vector according to the target characteristic vector correction coefficient to obtain the characteristic vector to be compared in the preset age span of the user.
Optionally, the determining a similar face image based on the feature vectors to be compared includes:
and generating the user image with the preset age span according to the feature vector to be compared, and taking the user image as the similar face image.
Preferably, the target feature vector comprises the contour shapes of the user's facial features and their corresponding positions.
The embodiment of the present application further provides a face recognition apparatus, including:
the target characteristic vector acquisition module is used for acquiring a target image comprising face information of a current user and extracting a target characteristic vector in the target image;
the to-be-compared feature vector obtaining module is used for correcting the target feature vector according to a preset feature correction algorithm to obtain a to-be-compared feature vector;
and the similar face image determining module is used for determining a similar face image based on the feature vector to be compared.
Optionally, the apparatus further comprises:
the user information acquisition module is used for acquiring user information associated with the similar face image according to the similar face image;
and the sending module is used for sending the similar face image and the associated user information to the user.
Optionally, the user information obtaining module includes:
the similarity determining unit is used for acquiring a face image to be determined in a database and determining the similarity between the face image to be determined and the similar face image based on an SVM classifier;
the similar user image determining unit is used for determining the face image to be determined with the similarity within a preset similarity threshold as a similar user image;
an acquiring unit, configured to acquire the user information associated with the similar user image.
Optionally, the to-be-compared feature vector obtaining module includes:
and the correction unit is used for correcting the target characteristic vector based on the convolutional neural network.
Optionally, the to-be-compared feature vector obtaining module includes:
the system comprises a sample image acquisition unit, a block chain technology acquisition unit and a block chain analysis unit, wherein the sample image acquisition unit is used for acquiring a sample image of a preset age span of a sample user based on the block chain technology;
a correction coefficient determining unit, configured to train the convolutional neural network using the sample images of the same user, and determine the target feature vector correction coefficient;
and the to-be-compared feature vector obtaining unit is used for correcting the target feature vector according to the target feature vector correction coefficient to obtain the to-be-compared feature vector within the preset age span of the user.
Optionally, the similar face image determination module includes:
and the similar face image determining unit is used for generating the user image with the preset age span according to the feature vector to be compared and taking the user image as the similar face image.
Preferably, the target feature vector comprises the contour shapes of the user's facial features and their corresponding positions.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored; when the computer program is executed by a processor, the face recognition method according to any of the above technical solutions is implemented.
An embodiment of the present application further provides a terminal device, including:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the steps of the face recognition method according to any aspect.
Compared with the prior art, the method has the following beneficial effects:
1. The application provides a face recognition method comprising the following steps: acquiring a target image that includes face information of a current user, and extracting a target feature vector from the target image; correcting the target feature vector according to a preset feature correction algorithm to obtain a feature vector to be compared; and determining a similar face image based on the feature vector to be compared. With this method a face image similar to the user's can be obtained: the facial-feature contours and their relative positions are extracted so that the resulting image closely resembles the user's face, those contours and positions are corrected, and the similar face image is generated from the corrected features. The image obtained in this way is more attractive, which improves the user's experience and satisfaction and further improves the user's confidence in the face image.
2. In the face recognition method of the application, the target feature information is corrected by a convolutional neural network, so that the obtained similar face image more closely resembles the user's face and is more attractive. Based on the foregoing process, user information associated with the similar face image is obtained from the similar face image. In this way the user's interest in the recommended user can be further stimulated, helping to expand the user's current friend circle.
3. The application provides a face recognition method in which correcting the target feature vector according to a preset feature correction algorithm to obtain a feature vector to be compared comprises: acquiring a sample image of a preset age span of a sample user based on blockchain technology; training the convolutional neural network with sample images of the same user and determining the target feature vector correction coefficient; and correcting the target feature vector according to the correction coefficient to obtain the feature vector to be compared within the preset age span of the user. Blockchain technology is used to obtain face images of sample users for training the convolutional neural network; it enlarges the available data volume, so the correction coefficient obtained by the network is more accurate and the resulting face image more closely resembles the user's face.
Drawings
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a face recognition apparatus according to another embodiment of the present application;
fig. 3 is a schematic diagram of a basic structure of a terminal device according to another embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the face recognition method of the application, users whose facial features resemble the user's are searched for by extracting the contour shapes of the user's facial features and their corresponding positions. The images of the similar users found in this way differ somewhat from the user, which raises the user's interest in them; and since similar-looking users usually share similar interests and hobbies, the face recognition method can expand the user's social network and help the user find friends with common topics.
A face recognition method disclosed in the following embodiments, as shown in fig. 1, includes:
s100: acquiring a target image comprising face information of a current user, and extracting a target characteristic vector in the target image;
s200: correcting the target feature vector according to a preset feature correction algorithm to obtain a feature vector to be compared;
s300: and determining a similar face image based on the feature vector to be compared.
The target image is mainly uploaded by the user and contains face information, from which a target feature vector of the face is extracted; this vector mainly comprises the contour shapes of the user's facial features and their corresponding positions. In other embodiments, an image submitted by the user in a social application can be acquired directly and the user's target image determined from the image's description; once the target image is acquired, the target feature vector is extracted from it. Depending on the number of extracted target features, the feature vector may be multi-dimensional, for example a 400-dimensional vector. In the method, the target feature vector can be extracted by a convolutional neural network; after extraction, the vector is corrected by the preset feature correction algorithm so that a face image similar to the user's face can be found conveniently and quickly. Specifically, for example, the contour shapes and corresponding positions of some or all of the user's facial features are finely adjusted to obtain a feature vector to be compared that is close to the original contours and positions, and a similar face image is then generated from this vector, letting the user preview images that resemble him or her from certain angles, for example images at different ages.
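The extract-correct-compare pipeline described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are hypothetical, and the "extractor" derives a deterministic 400-dimensional vector from pixel data in place of a real face-embedding CNN.

```python
import numpy as np

def extract_target_vector(image: np.ndarray, dim: int = 400) -> np.ndarray:
    """Stand-in for the CNN feature extractor described in the text.

    A real implementation would run a face-embedding network; here we
    derive a deterministic unit-norm vector from the pixel data so the
    pipeline can be exercised end to end.
    """
    flat = image.astype(np.float64).ravel()
    vec = np.resize(flat, dim)  # repeat/truncate to the target dimensionality
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def correct_vector(target: np.ndarray, coeff: np.ndarray) -> np.ndarray:
    """Apply a per-dimension correction coefficient (the 'preset feature
    correction algorithm') to obtain the feature vector to be compared."""
    return target * coeff

def most_similar(query: np.ndarray, database: dict[str, np.ndarray]) -> str:
    """Return the database entry whose vector has the highest cosine
    similarity to the query vector."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(database, key=lambda k: cos(query, database[k]))
```

With a correction coefficient of all ones (no adjustment), `most_similar` simply returns the nearest face in the database; in the patent's scheme the coefficients deliberately perturb the contours so the match is similar but not identical.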
Alternatively, a face image whose features match or resemble the feature vector to be compared can be retrieved by image search, finding users who differ from the current user yet resemble him or her from certain angles. The corresponding user information can then be obtained through the similar face image; in this way the user can make friends, find people with similar interests and hobbies, and enlarge his or her own circle of contacts.
Example one
Optionally, after determining a similar face image based on the feature vectors to be compared, the method includes:
acquiring user information associated with the similar face image according to the similar face image;
and sending the similar face image and the associated user information to the user.
In an embodiment of the present application, in combination with the foregoing description, people with similar faces generally have similar interests. When comparison determines that the database contains user images whose target feature vectors are similar to the feature vector to be compared, helping the user find users who are similar but not identical expands the user's social circle and helps the user find people with similar or identical interests. Conventionally, the face images in the database are associated with user information, so the user information associated with a similar face image can be obtained from that image in the database. The user information and the similar face image are sent to the user in association; the image raises the user's interest in other users and increases the chance of contact between the current user and other users in the database. Further, in one embodiment, the images of other users in the database can be sorted by the similarity between the feature vector to be compared and the target feature vectors of those images, for example in descending order of similarity, and the most similar users' information sent to the current user first. In one embodiment, the distance between feature vectors can be converted into a similarity, from which users with a certain similarity to the current user are determined.
For example: when Dx ≤ Dmin, take S = Smax, where Dx is the distance between the feature vector to be compared and the target feature vector of another user's face image in the database, Dmin is a preset minimum distance, S is the similarity score between the two vectors, and Smax is a preset maximum similarity score.
When Di < Dx ≤ D(i+1), S = Si + K·(Dx − Di), where K = (S(i+1) − Si)/(D(i+1) − Di); Di and D(i+1) are the distances between the feature vector to be compared and the feature vectors of the i-th and (i+1)-th face images in the database, and Si and S(i+1) are the corresponding similarity scores.
When Dx > Dmax, take S = Smin, where Dmax is a preset maximum distance and Smin is a preset minimum similarity score. Through this calculation, the user image whose target feature vector falls within the preset similarity threshold is determined as the similar face image.
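The three cases above describe a piecewise-linear mapping from distance to similarity: clamp to Smax below Dmin, clamp to Smin above Dmax, and interpolate linearly between adjacent breakpoints in between. A small sketch (function and parameter names are illustrative, not from the patent):

```python
def distance_to_similarity(dx: float,
                           d_points: list[float],
                           s_points: list[float]) -> float:
    """Piecewise-linear conversion of a feature-vector distance Dx into
    a similarity score S, following the three cases in the text.

    d_points must be ascending (d_points[0] = Dmin, d_points[-1] = Dmax);
    s_points are the corresponding scores (s_points[0] = Smax,
    s_points[-1] = Smin), decreasing as distance grows.
    """
    if dx <= d_points[0]:          # Dx <= Dmin  ->  S = Smax
        return s_points[0]
    if dx > d_points[-1]:          # Dx > Dmax   ->  S = Smin
        return s_points[-1]
    for i in range(len(d_points) - 1):
        di, dj = d_points[i], d_points[i + 1]
        if di < dx <= dj:          # Di < Dx <= D(i+1)
            k = (s_points[i + 1] - s_points[i]) / (dj - di)  # slope K
            return s_points[i] + k * (dx - di)
    return s_points[-1]            # unreachable for well-formed inputs
```

For instance, with breakpoints `[0.2, 0.5, 1.0]` and scores `[1.0, 0.6, 0.0]`, a distance of 0.35 lands halfway through the first segment and scores 0.8.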
Optionally, the obtaining, according to the similar face image, user information associated with the similar face image includes:
acquiring a face image to be determined in a database, and determining the similarity between the face image to be determined and the similar face image based on an SVM (support vector machine) classifier;
determining the face image to be determined with the similarity within a preset similarity threshold as a similar user image;
the user information associated with the similar user image is obtained.
In combination with the foregoing, in one embodiment, when the face recognition method of the application is implemented on a social application, after the feature vector to be compared is determined, other user images in the database are acquired, the target feature vector of each user image is extracted and matched against the feature vector to be compared, and the similarity between the two is calculated. When the user images are sufficiently numerous, a face image that matches or resembles the feature vector to be compared can be found; in this way a face image similar to, but not identical with, the user's can be located. The concrete similarity can be determined by computing the distance between another user's target feature vector and the feature vector to be compared, i.e. by converting that distance into a feature-vector similarity. The similarity determined by the SVM classifier is S(x), where:
[The formula for S(x) appears only as an image in the source; it maps the SVM decision value x to a similarity score.]
wherein x is a decision value output by the trained two-class SVM classifier.
Combining training of an SVM classifier, the similarity S(x) between the target feature vector of each face image to be determined and the feature vector to be compared is obtained, and the face images whose similarity lies within a preset similarity threshold are determined as similar user images. For example, with the preset similarity threshold set to the range M–N, where 0 < M < N, images whose S(x) falls within M–N are kept as user images, and images with S(x) smaller than M or larger than N are eliminated. Because the user images come from the database and are associated with user information, the user image and the user information are sent to the user in association, for example pushed in the form of a business card bearing the user image. When the user sees the image, the resemblance between the current user and the pictured user raises the current user's interest, helping the user find friends and expand the circle of friends; and since similarity in appearance tends to go with similarity in character and interests, this further helps the user find friends with similar interests and compatible characters.
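The paragraph above can be condensed into a short sketch. Note the caveats: the patent's exact S(x) formula survives only as an image, so a logistic (Platt-style) mapping from the SVM decision value to (0, 1) is assumed here, and the function names are hypothetical.

```python
import math

def svm_similarity(decision_value: float) -> float:
    """Map a two-class SVM decision value x to a similarity S(x) in (0, 1).

    ASSUMPTION: the source's S(x) formula is not legible, so a logistic
    (Platt-style) mapping is used as a plausible stand-in.
    """
    return 1.0 / (1.0 + math.exp(-decision_value))

def filter_similar_users(candidates: dict[str, float],
                         m: float, n: float) -> list[str]:
    """Keep candidates whose similarity falls inside the preset
    threshold range [M, N] (0 < M < N), discarding the rest.

    candidates maps a user id to that user's SVM decision value.
    """
    return [user for user, x in candidates.items()
            if m <= svm_similarity(x) <= n]
```

Filtering with an upper bound N as well as a lower bound M matches the text's goal of recommending users who are similar but not identical: near-perfect matches are excluded along with dissimilar ones.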
Optionally, the modifying the target feature vector according to a preset feature modification algorithm includes:
and correcting the target feature vector based on a convolutional neural network.
In the embodiments of the present application, the target feature vector is corrected by a convolutional neural network. The correction may be based on the features of sample users: for example, a number of images in the database most favored by other users are acquired, the target feature vector of each image is extracted, and the convolutional neural network is trained on these vectors to obtain the correction coefficient for the target feature vector. Specifically, since the image target feature vectors of the sample users are not completely identical, and there is a certain correlation among the contours, positions, and face shapes of different facial features, the network can, through training, classify faces by these contours, positions, and face shapes. After the network is trained on the user image data, the type of the target feature vector is first determined from the relationship among the contours, positions, and face shape of the facial features, and the vector is then corrected by the convolutional neural network using the correction coefficient for that type. This yields a target feature vector similar to, but not completely consistent with, the current user's, making it convenient to search the database for other users who resemble but are not identical to the current user.
Example two
Optionally, the modifying the target feature vector according to a preset feature modification algorithm to obtain a feature vector to be compared includes:
acquiring a sample image of a preset age span of a sample user based on blockchain technology;
training the convolutional neural network by using the sample image of the same user, and determining the target feature vector correction coefficient;
and correcting the target characteristic vector according to the target characteristic vector correction coefficient to obtain the characteristic vector to be compared in the preset age span of the user.
With the correction method of the first embodiment, an application implementing the method can offer the user a prediction of his or her future appearance, or help the user find other users of a certain age group who resemble the current user. The user can then plan his or her present life accordingly, adjust his or her state of mind to face a future appearance calmly, or develop a sense of empathy with a certain age group from the predicted face, guiding the user toward a positive and cheerful attitude. In the application, a sample image of a preset age span of a sample user can be acquired based on blockchain technology; the sample user is one who has publicly disclosed sample images of himself or herself across a certain age span, and these sample images contain the sample user's target feature vectors. The target feature vector correction coefficient is determined with the convolutional neural network training method of the first embodiment, and the target feature vector is corrected by the network using this coefficient to obtain the feature vector to be compared for the current user within the preset age span.
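The training step above — learning a correction coefficient from same-user image pairs across an age span, then applying it — can be sketched as follows. This is a deliberately simplified stand-in: the patent trains a convolutional neural network, whereas here a per-dimension least-squares ratio between the younger and older feature vectors plays the role of the learned coefficient, and all names are illustrative.

```python
import numpy as np

def learn_correction_coeff(young: np.ndarray, old: np.ndarray) -> np.ndarray:
    """Estimate a per-dimension correction coefficient from paired
    same-user feature vectors across the preset age span.

    young, old: arrays of shape (n_samples, dim). For each dimension d
    this fits old[:, d] ~= c[d] * young[:, d] by least squares; a CNN
    performs this role in the patent.
    """
    num = (young * old).sum(axis=0)
    den = (young * young).sum(axis=0)
    # Fall back to 1.0 (no correction) where a dimension is all zeros.
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), 1.0)

def apply_correction(target: np.ndarray, coeff: np.ndarray) -> np.ndarray:
    """Correct the current user's target feature vector into the
    to-be-compared vector for the preset age span."""
    return target * coeff
```

Given a few same-user pairs, the learned coefficients recover how each contour dimension shifts with age, and applying them to a new user's vector produces the age-shifted vector used for the similarity search.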
Optionally, the determining a similar face image based on the feature vectors to be compared includes:
and generating the user image with the preset age span according to the feature vector to be compared, and taking the user image as the similar face image.
Combining the feature vectors to be compared, a user image for the preset age span is generated. For example, when the user is currently 25 years old, the method can obtain the user's target feature vector at about age 55: the correction coefficient adjusts the contours of the user's facial features to droop more, so that they better match the state of a person around 55, and a similar face image of the user is then generated from the adjusted contours and their positions. Further, a user similar to the feature vector to be compared can be retrieved based on that vector, and the user information and the similar face image sent to the current user, which conveniently raises the current user's interest in the similar user. In other embodiments, combining the foregoing examples with the first embodiment, the user's target feature vector may be adjusted, via the correction coefficient and the convolutional neural network, into the feature vector to be compared of the user at about age 7, and a face image in the database resembling this vector is retrieved; the similar face image in the database need not show a 7-year-old and may show a user of any age. The user information and the similar face image are sent to the current user, raising the current user's interest in the similar user and further expanding the user's circle of friends.
The embodiment of the present application further provides a face recognition apparatus, in one implementation, the face recognition apparatus includes:
a target feature vector obtaining module 100, configured to obtain a target image including face information of a current user, and extract a target feature vector in the target image;
a to-be-compared feature vector obtaining module 200, configured to correct the target feature vector according to a preset feature correction algorithm, so as to obtain a to-be-compared feature vector;
a similar face image determining module 300, configured to determine a similar face image based on the feature vector to be compared.
The target image is mainly uploaded by the user and contains face information, from which a target feature vector of the face is extracted; this vector mainly encodes the contour shapes of the user's facial features and their corresponding positions. In other embodiments, an image submitted by the user in a social application program can be obtained directly, the user's target image identified from the image's description, and the target feature vector then extracted from it. Depending on how many target features are extracted, the feature vector may be multi-dimensional, for example a 400-dimensional vector. In this method, the target feature vector of the user can be extracted through a convolutional neural network. After extraction, the target feature vector is corrected with a preset feature-correction algorithm so that a face image similar to the user's face can be found quickly. Specifically, the contour shapes and corresponding positions of some or all of the user's facial features are finely adjusted to obtain a feature vector to be compared that is similar to the original, and a similar face image is then generated from this vector, allowing the user to preview images that resemble them from certain angles, for example images at different ages.
Alternatively, face images with the same or similar features as the feature vector to be compared can be retrieved by image search, finding users who differ from the current user yet resemble them in certain respects. The corresponding user information can be obtained through the similar face image, letting users make friends in this way, find people with similar interests and hobbies, and enlarge their social circle.
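The extract-then-correct pipeline described above can be sketched as follows. The block-averaging "extractor" is a toy stand-in for the convolutional neural network named in the text, and all names, dimensions, and coefficient values are assumptions:

```python
import numpy as np

def extract_features(image, dim=8):
    """Toy stand-in for the CNN feature extractor: block-average the
    grayscale image into a fixed-length, L2-normalised vector."""
    flat = np.asarray(image, dtype=float).ravel()
    vec = np.array([block.mean() for block in np.array_split(flat, dim)])
    return vec / (np.linalg.norm(vec) + 1e-9)

def correct_features(vec, coeffs):
    """Preset feature-correction step: elementwise fine adjustment."""
    return vec * (1.0 + np.asarray(coeffs, dtype=float))

rng = np.random.default_rng(0)
face = rng.random((32, 32))                   # stand-in for an uploaded face image
target = extract_features(face)               # target feature vector
to_compare = correct_features(target, np.full(8, 0.05))  # feature vector to be compared
print(to_compare.shape)  # (8,)
```

A real system would replace `extract_features` with a trained network producing a high-dimensional embedding (the text mentions, for example, 400 dimensions).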
With reference to the foregoing method, the present application further provides a face recognition apparatus corresponding to the first embodiment.
Optionally, the method further comprises:
the user information acquisition module is used for acquiring user information associated with the similar face image according to the similar face image;
and the sending module is used for sending the similar face image and the user information to a user in an association relationship.
In an embodiment of the present application, and in combination with the foregoing, people with similar faces generally have similar interests. When comparison shows that the database contains user images whose target feature vectors are similar to the feature vector to be compared, the method helps the user find users who are similar to, but not identical with, themselves, which expands the user's social circle and helps the user find people with similar or matching interests. Conventionally, each face image in the database is associated with user information, so the user information associated with a similar face image can be retrieved from the database through that image. Sending the user information together with the similar face image raises the user's interest in other users through the image and increases the chance of contact between the current user and other users in the database. Further, in one embodiment, the images of other users in the database can be ranked by the similarity between the feature vector to be compared and those images' target feature vectors, for example in descending order of similarity, and the most similar users' information sent to the current user first to raise their interest. In one embodiment, the distance between feature vectors can be converted into a similarity, which then determines which users have a certain similarity to the current user.
For example: when Dx ≤ Dmin, take S = Smax, where Dx is the distance between the feature vector to be compared and the target feature vector of another user's face image in the database, Dmin is a preset minimum distance, S is the similarity score between that face image's feature vector and the feature vector to be compared, and Smax is a preset maximum similarity score.
When Di < Dx ≤ D(i+1), S = Si + K·(Dx − Di), where K = (S(i+1) − Si) / (D(i+1) − Di), Dx is the distance between the feature vector to be compared and the target feature vector of another user's face image in the database, Di is the distance between the feature vector of the first face image in the database and the feature vector to be compared, D(i+1) is the corresponding distance for the second face image, Si is the similarity score between the feature vector of the first face image and the feature vector to be compared, and S(i+1) is the similarity score for the second face image.
When Dx > Dmax, take S = Smin, where Dx is the distance between the feature vector to be compared and the target feature vector of another user's face image in the database, Dmax is a preset maximum distance, S is the similarity score between that face image's feature vector and the feature vector to be compared, and Smin is a preset minimum similarity score. Through this calculation, user images whose scores fall within the preset similarity threshold are determined to be similar face images.
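The three distance-to-similarity cases above form a piecewise-linear mapping, which can be sketched as follows. The breakpoint table, distances, and score bounds below are illustrative assumptions, not values from the application:

```python
def distance_to_similarity(dx, breakpoints, scores, d_min, d_max, s_min, s_max):
    """Map a feature-vector distance Dx to a similarity score S using the
    three cases from the text:
      Dx <= Dmin         -> S = Smax
      Di < Dx <= D(i+1)  -> S = Si + K*(Dx - Di), K = (S(i+1)-Si)/(D(i+1)-Di)
      Dx >  Dmax         -> S = Smin
    `breakpoints` holds ascending distances Di; `scores` the matching Si."""
    if dx <= d_min:
        return s_max
    if dx > d_max:
        return s_min
    for i in range(len(breakpoints) - 1):
        if breakpoints[i] < dx <= breakpoints[i + 1]:
            k = (scores[i + 1] - scores[i]) / (breakpoints[i + 1] - breakpoints[i])
            return scores[i] + k * (dx - breakpoints[i])
    return s_min  # gap in the table: fall back to the minimum score

# Hypothetical breakpoint table and preset bounds
bp, sc = [0.0, 0.5, 1.0], [100.0, 60.0, 0.0]
print(distance_to_similarity(0.25, bp, sc, 0.0, 1.0, 0.0, 100.0))  # 80.0

# Ranking candidate users in descending order of similarity, as in the text
users = {"alice": 0.25, "bob": 0.7}  # hypothetical distances Dx
ranked = sorted(users, reverse=True,
                key=lambda u: distance_to_similarity(users[u], bp, sc,
                                                     0.0, 1.0, 0.0, 100.0))
print(ranked)  # ['alice', 'bob']
```

Smaller distances map to higher scores, so sorting by the mapped score in descending order yields the "most similar first" ordering the description mentions.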
Optionally, the user information obtaining module includes:
the similarity determining unit is used for acquiring a face image to be determined in a database and determining the similarity between the face image to be determined and the similar face image based on an SVM classifier;
the similar user image determining unit is used for determining the face image to be determined with the similarity within a preset similarity threshold as a similar user image;
a sending unit, configured to acquire the user information associated with the similar user image.
In combination with the foregoing, in one embodiment, when the face recognition of the present application is implemented in a social application program, other user images in the database are retrieved after the feature vector to be compared is determined, their target feature vectors extracted, and each matched against the feature vector to be compared to compute a similarity. Given enough user images, face images consistent with or similar to the feature vector to be compared can be found, so the method can locate face images that resemble, but do not exactly match, the user. The concrete similarity can be determined by computing the distance between another user's target feature vector and the feature vector to be compared, i.e., by converting the distance into a feature-vector similarity. The similarity of the feature vectors as determined by the SVM classifier is S(x), where:
[Formula image BDA0001842223240000141: defines S(x) as a function of the decision value x; the figure is not reproduced in this text.]
wherein x is the decision value output by the trained binary SVM classifier.
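Since the figure defining S(x) is not reproduced in this text, the exact mapping is unknown. As an illustrative sketch only, one common choice is assumed below: a logistic squashing of the SVM decision value x into the interval (0, 1). Both the function name and the mapping itself are assumptions, not the application's definition:

```python
import math

def svm_score_to_similarity(x):
    """Hypothetical S(x): squash a binary SVM's decision value x into
    (0, 1) with a logistic function. Large positive decision values
    (confidently 'same class') map near 1, large negative ones near 0."""
    return 1.0 / (1.0 + math.exp(-x))

print(round(svm_score_to_similarity(0.0), 2))  # 0.5
```

Any monotone mapping of the decision value would serve the same purpose of ordering candidates by similarity.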
Combining the training of the SVM classifier, the similarity S(x) between the target feature vector of each face image to be determined and the feature vector to be compared is obtained, and face images whose similarity falls within a preset similarity threshold are determined to be user images. For example, with the preset similarity threshold set to the range M to N, where 0 < M < N, images to be determined with S(x) in that range are taken as user images, while images with S(x) smaller than M or larger than N are eliminated. Because the user images come from the database and are associated with user information, the user image and the user information are sent to the user together, for example pushed in the form of a business card carrying the user image. When the user sees the image, the resemblance between the current user and the pictured user raises the current user's interest in that user, helping the user find friends and expand their circle; and since similar appearance often goes with a certain similarity in character and interests, it further helps the user find friends of like interests and temperament.
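The M-to-N filtering step described above can be sketched directly; the scores and bounds below are hypothetical:

```python
def select_similar_users(candidates, m, n):
    """Keep only face images whose similarity score S(x) lies in the
    preset range [M, N] with 0 < M < N; scores below M or above N
    (e.g. near-duplicates of the user) are eliminated, as in the text."""
    assert 0 < m < n
    return {user: s for user, s in candidates.items() if m <= s <= n}

scores = {"u1": 0.95, "u2": 0.72, "u3": 0.30}   # hypothetical S(x) values
print(select_similar_users(scores, m=0.6, n=0.9))  # {'u2': 0.72}
```

The upper bound N is what makes the result "similar but not identical": a score above N suggests the same face rather than a lookalike.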
Optionally, the to-be-compared feature vector obtaining module includes:
and the correction unit is used for correcting the target characteristic vector based on the convolutional neural network.
In the embodiments of the present application, the target feature vector is corrected through a convolutional neural network. The correction may be based on the features of sample users: for example, a number of images most favored by other users are obtained from the database, the target feature vector of each is extracted, and the convolutional neural network is trained on these vectors to obtain correction coefficients for the target feature vector. Specifically, because the sample users' target feature vectors are similar but not identical, and because the contours, positions, and face shapes of different facial features are correlated to some degree, training the convolutional neural network allows classification based on these contours, positions, and face shapes. After the network is trained on the user image data, the type of the target feature vector is first determined from the relationships among the contours, positions, and face shape; the network then applies the correction coefficient for that type, yielding a target feature vector that is similar to, but not exactly consistent with, the current user's. This makes it convenient to find other users in the database who resemble, but are not the same as, the current user.
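As a stand-in for training the convolutional neural network to obtain correction coefficients, a simple per-dimension fit over pairs of sample-user feature vectors can illustrate the idea. The linear form, the sample values, and the function name are assumptions; the application itself trains a CNN:

```python
import numpy as np

def fit_correction(source_vecs, target_vecs):
    """Learn per-dimension correction coefficients c so that
    source * (1 + c) approximates target, averaged over sample users.
    This linear fit is a toy substitute for the CNN training step."""
    source = np.asarray(source_vecs, dtype=float)
    target = np.asarray(target_vecs, dtype=float)
    # Per-dimension ratio averaged over the sample pairs
    return (target / source).mean(axis=0) - 1.0

# Hypothetical sample pairs: two users' vectors before and after the shift
before = np.array([[1.0, 0.5], [0.8, 0.4]])
after  = np.array([[1.2, 0.4], [0.96, 0.32]])
c = fit_correction(before, after)
print(np.round(c, 2))  # approximately [0.2, -0.2]
```

Applying `vec * (1 + c)` to a new user's target feature vector then produces the "similar but not identical" feature vector to be compared.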
By combining the foregoing method, the present application further provides a face recognition apparatus corresponding to the second embodiment.
Optionally, the to-be-compared feature vector obtaining module includes:
a sample image obtaining unit, configured to obtain, based on blockchain technology, sample images of a sample user over a preset age span;
a correction coefficient determining unit, configured to train the convolutional neural network using the sample images of the same user, and determine the target feature vector correction coefficient;
and the to-be-compared feature vector obtaining unit is used for correcting the target feature vector according to the target feature vector correction coefficient to obtain the to-be-compared feature vector within the preset age span of the user.
With the correction method of the first embodiment, an application program built on this method can show the user a prediction of their future appearance, or help the user find other users of a certain age group who resemble the current user. The user can then plan their present life accordingly and adjust their mindset to face their future appearance, or feel a kind of kinship with a certain age group through the face, guiding the user toward a positive and cheerful attitude. In the present application, sample images of a sample user over a preset age span can be obtained based on blockchain technology; a sample user is one whose publicly disclosed information includes images of themselves at certain age spans, each containing the sample user's target feature vector. The target feature vector correction coefficient is determined with the convolutional neural network training method of the first embodiment, and the network corrects the target feature vector with this coefficient to obtain the feature vector to be compared for the current user within the preset age span.
Optionally, the similar face image determination module includes:
a similar face image determining unit, configured to generate a user image of the preset age span from the feature vector to be compared, and to use that user image as the similar face image.
A user image of a preset age span is generated from the feature vector to be compared. For example, when the user is currently 25 years old, a target feature vector for the user at about 55 years old can be obtained by this method: a correction coefficient adjusts the contours of the user's facial features to sag more, so that they better match the appearance of a person of about 55, and a similar face image of the user is then generated from the adjusted contours and their positions. Further, users whose feature vectors resemble the feature vector to be compared can be retrieved, and their user information and similar face images sent to the current user, which helps raise the current user's interest in those similar users. In other implementations, combining the foregoing examples with the first embodiment, the correction coefficient and the convolutional neural network can adjust the user's target feature vector into a feature vector to be compared for the user at about 7 years old, and a face image in the database that resembles this feature vector can then be retrieved. The matched similar face image need not show a 7-year-old; it may show a user at any age. Sending that user's information and similar face image to the current user raises the current user's interest in the similar user and thereby expands the user's circle of friends.
The invention further provides a terminal device, as shown in fig. 3. For convenience of description, only the parts related to the embodiment of the invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments of the invention. The terminal may be any terminal device, including a desktop computer, a tablet computer, a PDA (Personal Digital Assistant), a mobile phone, a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:
fig. 3 is a block diagram illustrating a partial structure of a mobile phone related to a terminal according to an embodiment of the present invention. Referring to fig. 3, the cellular phone includes: radio Frequency (RF) circuitry 1510, a memory 1520, an input unit 1530, a display unit 1540, a sensor 1550, an audio circuit 1560, a wireless fidelity (Wi-Fi) module 1570, a processor 1580, and a power supply 1590. Those skilled in the art will appreciate that the handset configuration shown in fig. 3 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 3:
the RF circuit 1510 may be configured to receive and transmit signals during information transmission and reception or during a call, and in particular, receive downlink information of a base station and then process the received downlink information to the processor 1580; in addition, the data of the design uplink is transmitted to the base station. In general, RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 1510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. Further, the memory 1520 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 1530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 1531 using any suitable object or accessory such as a finger or a stylus) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects signals brought by touch operation and transmits the signals to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1580, and can receive and execute commands sent by the processor 1580. In addition, the touch panel 1531 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1540 may be used to display information entered by the user or information provided to the user, as well as application interfaces. The display unit 1540 may include a display panel 1541, and optionally, the display panel 1541 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1531 can cover the display panel 1541, and when the touch panel 1531 detects a touch operation on or near the touch panel 1531, the touch panel can transmit the touch operation to the processor 1580 to determine the type of the touch event, and then the processor 1580 can provide a corresponding visual output on the display panel 1541 according to the type of the touch event. Although in fig. 3, the touch panel 1531 and the display panel 1541 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1550, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1541 according to the brightness of ambient light and a proximity sensor that turns off the display panel 1541 and/or the backlight when the mobile phone is moved to the ear. As one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1560, speaker 1561, and microphone 1562 may provide an audio interface between the user and the mobile phone. The audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, which converts it into a sound signal for output; conversely, the microphone 1562 converts collected sound signals into electrical signals, which the audio circuit 1560 receives and converts into audio data. The audio data is processed by the processor 1580 and then sent through the RF circuit 1510 to, for example, another mobile phone, or output to the memory 1520 for further processing.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 1570, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 3 shows the Wi-Fi module 1570, it is understood that it is not an essential component of the mobile phone and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 1580 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1520 and calling data stored in the memory 1520, thereby integrally monitoring the mobile phone. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into the processor 1580.
The handset also includes a power supply 1590 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 1580 via a power management system to facilitate management of charging, discharging, and power consumption management functions via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment of the present invention, the processor 1580 included in the terminal device further has the following functions: acquiring a target image comprising face information of a current user, and extracting a target characteristic vector in the target image; correcting the target feature vector according to a preset feature correction algorithm to obtain a feature vector to be compared; and determining a similar face image based on the feature vector to be compared. That is, the processor 1580 has a function of executing the face recognition method according to any of the embodiments, which is not described herein again.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the face recognition method.
Those skilled in the art will appreciate that the invention includes apparatus for performing one or more of the operations described herein. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such computer programs may be stored in a device (e.g., computer) readable medium, including, but not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memory, magnetic cards, or optical cards, or any other type of medium suitable for storing electronic instructions, each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
It should be understood that, although the steps in the flowcharts of the figures are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times and in different orders, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications shall also fall within the protection scope of the present invention.

Claims (10)

1. A face recognition method, comprising:
acquiring a target image comprising face information of a current user, and extracting a target characteristic vector in the target image;
correcting the target feature vector according to a preset feature correction algorithm to obtain a feature vector to be compared;
and determining a similar face image based on the feature vector to be compared.
2. The method according to claim 1, wherein after determining the similar face images based on the feature vectors to be compared, the method comprises:
acquiring user information associated with the similar face image according to the similar face image;
and sending the similar face image and the user information to a user in an association relationship.
3. The method according to claim 2, wherein the obtaining user information associated with the similar face image according to the similar face image comprises:
acquiring a face image to be determined in a database, and determining the similarity between the face image to be determined and the similar face image based on an SVM (support vector machine) classifier;
determining the face image to be determined with the similarity within a preset similarity threshold as a similar user image;
the user information associated with the similar user image is obtained.
4. The face recognition method according to any one of claims 1 to 3, wherein the target feature vector comprises the outline shape of the facial features of the user and the corresponding position thereof.
5. A face recognition apparatus, comprising:
the target characteristic vector acquisition module is used for acquiring a target image comprising face information of a current user and extracting a target characteristic vector in the target image;
the to-be-compared feature vector obtaining module is used for correcting the target feature vector according to a preset feature correction algorithm to obtain a to-be-compared feature vector;
and the similar face image determining module is used for determining a similar face image based on the feature vector to be compared.
6. The face recognition apparatus of claim 5, further comprising:
the user information acquisition module is used for acquiring user information associated with the similar face image according to the similar face image;
and the sending module is used for sending the similar face image and the user information to a user in an association relationship.
7. The face recognition apparatus of claim 6, wherein the user information acquisition module comprises:
the similarity determining unit is used for acquiring a face image to be determined in a database and determining the similarity between the face image to be determined and the similar face image based on an SVM classifier;
the similar user image determining unit is used for determining the face image to be determined with the similarity within a preset similarity threshold as a similar user image;
a sending unit, configured to acquire the user information associated with the similar user image.
8. The face recognition device of any one of claims 5 to 7, wherein the target feature vector comprises the outline shape of the facial features of the user and the corresponding position.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the face recognition method of any one of claims 1 to 4.
10. A terminal device, comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs configured to perform the steps of the face recognition method according to any one of claims 1 to 4.
CN201811253432.8A 2018-10-25 2018-10-25 Face recognition method and device, storage medium and terminal equipment Pending CN111104823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811253432.8A CN111104823A (en) 2018-10-25 2018-10-25 Face recognition method and device, storage medium and terminal equipment

Publications (1)

Publication Number Publication Date
CN111104823A true CN111104823A (en) 2020-05-05

Family

ID=70418914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811253432.8A Pending CN111104823A (en) 2018-10-25 2018-10-25 Face recognition method and device, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN111104823A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780038A (en) * 2020-06-10 2021-12-10 深信服科技股份有限公司 Picture auditing method and device, computing equipment and storage medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130121584A1 (en) * 2009-09-18 2013-05-16 Lubomir D. Bourdev System and Method for Using Contextual Features to Improve Face Recognition in Digital Images
JP2014149677A (en) * 2013-02-01 2014-08-21 Panasonic Corp Makeup support apparatus, makeup support system, makeup support method and makeup support system
US20150332087A1 (en) * 2014-05-15 2015-11-19 Fuji Xerox Co., Ltd. Systems and Methods for Identifying a User's Demographic Characteristics Based on the User's Social Media Photographs
CN105488463A (en) * 2015-11-25 2016-04-13 康佳集团股份有限公司 Lineal relationship recognizing method and system based on face biological features
CN106250825A (en) * 2016-07-22 2016-12-21 厚普(北京)生物信息技术有限公司 A kind of at the medical insurance adaptive face identification system of applications fields scape
CN106599797A (en) * 2016-11-24 2017-04-26 北京航空航天大学 Infrared face identification method based on local parallel nerve network
CN106600640A (en) * 2016-12-12 2017-04-26 杭州视氪科技有限公司 RGB-D camera-based face recognition assisting eyeglass
CN107133576A (en) * 2017-04-17 2017-09-05 北京小米移动软件有限公司 Age of user recognition methods and device
CN107292528A (en) * 2017-06-30 2017-10-24 阿里巴巴集团控股有限公司 Vehicle insurance Risk Forecast Method, device and server
CN107423690A (en) * 2017-06-26 2017-12-01 广东工业大学 A kind of face identification method and device
CN107480575A (en) * 2016-06-07 2017-12-15 深圳市商汤科技有限公司 The training method of model, across age face identification method and corresponding device
CN107516069A (en) * 2017-07-27 2017-12-26 中国船舶重工集团公司第七二四研究所 Target identification method based on geometry reconstruction and multiscale analysis
CN107563319A (en) * 2017-08-24 2018-01-09 西安交通大学 Face similarity measurement computational methods between a kind of parent-offspring based on image
CN107748858A (en) * 2017-06-15 2018-03-02 华南理工大学 A kind of multi-pose eye locating method based on concatenated convolutional neutral net
CN107835367A (en) * 2017-11-14 2018-03-23 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN108090433A (en) * 2017-12-12 2018-05-29 厦门集微科技有限公司 Face identification method and device, storage medium, processor
CN108319944A (en) * 2018-05-03 2018-07-24 山东汇贸电子口岸有限公司 A kind of remote human face identification system and method
CN108446687A (en) * 2018-05-28 2018-08-24 深圳市街角电子商务有限公司 A kind of adaptive face vision authentication method based on mobile terminal and backstage interconnection

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
UNSANG PARK et al.: "Age-Invariant Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, 15 January 2010, pages 947-954 *
ZHOU Liangji et al.: "Face image recognition method based on NSCT and bionic patterns", 10 March 2015, pages 132-139 *
JIANG Tao: "Face recognition in outdoor dynamic scenes", China Master's Theses Full-text Database, Information Science & Technology, 15 March 2016, pages 138-5867 *
NIE Ling: "Simulation of an affine hull set recognition model for face recognition", Computer Simulation, 15 October 2016, pages 395-398 *

Similar Documents

Publication Publication Date Title
US10169639B2 (en) Method for fingerprint template update and terminal device
CN111985265B (en) Image processing method and device
CN109918975B (en) Augmented reality processing method, object identification method and terminal
CN107977674B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
US10353883B2 (en) Method, device and storage medium for providing data statistics
CN107948748B (en) Method, device, mobile terminal and computer storage medium for recommending videos
CN108494947B (en) Image sharing method and mobile terminal
CN110443769B (en) Image processing method, image processing device and terminal equipment
CN108022274B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN108427873B (en) Biological feature identification method and mobile terminal
US20180089520A1 (en) Method, device and computer-readable medium for updating sequence of fingerprint templates for matching
CN109409235B (en) Image recognition method and device, electronic equipment and computer readable storage medium
WO2016015471A1 (en) User churn prediction method and apparatus
CN108460817B (en) Jigsaw puzzle method and mobile terminal
WO2014180121A1 (en) Systems and methods for facial age identification
WO2017088434A1 (en) Human face model matrix training method and apparatus, and storage medium
CN110147742B (en) Key point positioning method, device and terminal
CN111738100A (en) Mouth shape-based voice recognition method and terminal equipment
CN110083742B (en) Video query method and device
CN107632985B (en) Webpage preloading method and device
CN113709385A (en) Video processing method and device, computer equipment and storage medium
CN111104823A (en) Face recognition method and device, storage medium and terminal equipment
CN115171196B (en) Face image processing method, related device and storage medium
CN107329547B (en) Temperature control method and device and mobile terminal
CN108108608B (en) Control method of mobile terminal and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination