CN111488943A - Face recognition method and device - Google Patents


Info

Publication number
CN111488943A
Authority
CN
China
Prior art keywords
face
image
face image
scene
comparison similarity
Prior art date
Legal status
Pending
Application number
CN202010301613.4A
Other languages
Chinese (zh)
Inventor
余铭华
Current Assignee
Shanghai Xinyi Intelligent Technology Co ltd
Original Assignee
Shanghai Xinyi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xinyi Intelligent Technology Co ltd
Priority to CN202010301613.4A
Publication of CN111488943A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification

Abstract

The application provides a face recognition method. A face image in an image containing a face is compared with a preset target face image to determine a face comparison similarity; scene information in the same image is compared with preset target scene information to determine a scene comparison similarity; and the identity of the person corresponding to the face image is determined from the two similarities. Face recognition can therefore draw on related scene information when the face image acquired in a complex environment is of poor quality, improving recognition accuracy.

Description

Face recognition method and device
Technical Field
The present application relates to the field of image recognition, and in particular, to a method and an apparatus for face recognition.
Background
Face recognition is a biometric technology for identifying the identity of a person based on facial feature information of the person, and is also called portrait recognition and facial recognition. In general, the face recognition may be performed by acquiring an image or a video stream containing a face with a camera or a video camera, detecting and tracking the face in the image, and performing face recognition on the detected face, thereby determining a corresponding identity.
With the continuous development of image processing technology, experts and scholars have proposed a variety of face recognition algorithms built on different techniques. These algorithms can be classified by dimension and by mechanism. By dimension, they fall into two-dimensional and three-dimensional face recognition algorithms. By mechanism, they include feature-based recognition algorithms (built on facial feature points), appearance-based recognition algorithms (built on whole face images), template-based recognition algorithms, recognition algorithms using neural networks, recognition algorithms using support vector machines (SVM), and the like.
Two-dimensional face recognition algorithms mainly include methods based on template matching, methods based on singular value features, subspace analysis methods, Locality Preserving Projections (LPP), principal component analysis, and other methods such as elastic matching, the eigenface method (based on the Karhunen-Loeve transform), artificial neural networks, support vector machines, methods based on integral image features (AdaBoost learning), and methods based on probability models.
Three-dimensional face recognition algorithms recognize faces in depth images, and their recognition accuracy is greatly improved compared with two-dimensional face recognition. They mainly include methods based on image features and methods based on variable model parameters.
Face recognition algorithms operate on a face region extracted from an image and place high demands on face image quality. In complex environments with poor illumination, non-frontal posture, occlusion of the face, massive volumes of face data, and similar conditions, the acquired face images are of low quality, so face recognition accuracy is low.
Disclosure of Invention
An object of the present application is to provide a method and an apparatus for face recognition, which are used to solve the problem of low accuracy of face recognition in a complex environment in the prior art.
In order to achieve the above object, the present application provides a method for face recognition, wherein the method comprises:
carrying out face image recognition on an image containing a face to obtain a face image;
comparing the face image with a preset target face image to determine face comparison similarity;
carrying out scene recognition on the image containing the face to acquire scene information corresponding to the face image;
comparing the scene information corresponding to the face image with preset target scene information to determine scene comparison similarity;
and determining the identity of the person corresponding to the face image according to the face comparison similarity and the scene comparison similarity.
Further, the scene information corresponding to the face image includes one or more of the following combinations: the human body identification information corresponding to the human face image, the human body environment information corresponding to the human face image and the acquisition equipment information corresponding to the human face image.
Further, after the face image is obtained by performing face image recognition on an image containing a face, the method further includes:
and carrying out data normalization processing on the face image.
Further, comparing the face image with a preset target face image to determine the face comparison similarity, including:
extracting the features of the face image to obtain face feature information of the face image;
and comparing the face characteristic information of the face image with face characteristic information corresponding to a preset target face image to determine face comparison similarity.
Further, performing scene recognition on the image containing the face to acquire scene information corresponding to the face image, including:
carrying out target detection on the image containing the human face, and determining a target object in the image;
and determining the corresponding information of the target object as the scene information corresponding to the face image.
Further, the target detection is performed on the image containing the human face, and the determination of the target object in the image includes:
and carrying out target detection on the image containing the human face through a target segmentation algorithm and/or an object recognition algorithm to determine a target object in the image.
Further, determining the person identity corresponding to the face image according to the face comparison similarity and the scene comparison similarity, including:
determining comparison similarity according to the face comparison similarity and the scene comparison similarity;
and determining the identity of the person corresponding to the face image according to the comparison similarity.
Further, the comparison similarity S is calculated by the following formula:
S = α·S1 + β·S2
where S1 is the face comparison similarity, S2 is the scene comparison similarity, α and β are weight coefficients, and α + β = 1.
In another aspect, the present application also provides an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, cause the apparatus to perform the aforementioned method of face recognition.
The present application also provides a computer readable medium having stored thereon computer readable instructions executable by a processor to implement the aforementioned method of face recognition.
Compared with the prior art, the scheme provided by the application compares the face image in an image containing a face with a preset target face image to determine a face comparison similarity, compares the scene information in the same image with preset target scene information to determine a scene comparison similarity, and then determines the identity of the person corresponding to the face image from the two similarities. Face recognition can therefore draw on related scene information when the face image obtained in a complex environment is of poor quality, improving recognition accuracy.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a flow chart of a method of face recognition provided in some embodiments of the present application;
FIG. 2 is a schematic diagram of an example of a facial image for data normalization provided in some preferred embodiments of the present application;
fig. 3 is an example of an image including a human face according to some preferred embodiments of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal and the network device each include one or more processors (CPUs), input/output interfaces, network interfaces, and memories.
The memory may include volatile memory in a computer readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, both permanent and non-permanent, removable and non-removable, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 illustrates a method for face recognition according to some embodiments of the present application, where the method specifically includes the following steps:
step S101, carrying out face image recognition on an image containing a face to obtain a face image;
step S102, comparing the face image with a preset target face image to determine face comparison similarity;
step S103, carrying out scene recognition on the image containing the face to acquire scene information corresponding to the face image;
step S104, comparing the scene information corresponding to the face image with preset target scene information, and determining scene comparison similarity;
and S105, determining the identity of the person corresponding to the face image according to the face comparison similarity and the scene comparison similarity.
The method is particularly suitable for scenarios in which identity is recognized from an acquired image containing a face: the recognized face image is compared with a preset target face image, the recognized scene information is compared with preset target scene information, and the identity of the person corresponding to the face image is determined from the results of the two comparisons.
In step S101, first, face image recognition is performed on an image containing a face to obtain a face image. Here, the image containing the face is acquired by an image acquisition device, which may be any of a variety of devices for acquiring images, such as a camera, a video camera, a smart phone, or a tablet computer. All images acquired by the device can be divided into two classes according to whether they contain a face; images not containing a face are useless for identity recognition and can be removed manually or by an automatic face detection method.
Preferably, after the image containing the face is obtained, the image may be normalized, i.e., cropped and scaled to a predetermined image size.
Performing face image recognition on an image containing a face means performing face detection: detecting the face in the image and determining its extent. The development of face detection algorithms can be roughly divided into three stages: an early-algorithm stage, an AdaBoost-framework stage, and a deep-learning stage. Early face detection used template matching, i.e., a face template image is matched against each position in the detected image to decide whether a face is present there; machine learning algorithms such as neural networks and support vector machines were also used, classifying whether an image region belongs to a face. The basic idea of the AdaBoost framework is to build a highly accurate strong classifier from many simple weak classifiers, which handles near-frontal faces well; many improved schemes appeared later, and AdaBoost remained the dominant framework for face detection until deep learning emerged. In the deep-learning stage, deep convolutional neural networks learn the features of a target object at every level of an image and are more robust to external interference; current face detection algorithms are essentially optimized deep convolutional neural networks.
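The early template-matching idea described above can be sketched in a few lines: slide a face template over the image and score each window by the sum of absolute differences (lower is better). This is a toy illustration with made-up pixel grids, not the patent's detection method.

```python
# Toy sliding-window template matching, the early face-detection idea
# described in the text: slide a template over the image and score each
# position by sum of absolute differences (SAD); the lowest score wins.
# All pixel values here are illustrative, not from the patent.

def match_template(image, template):
    """Return ((row, col), sad_score) of the best-matching window."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best = (None, float("inf"))
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(
                abs(image[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if sad < best[1]:
                best = ((r, c), sad)
    return best

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
loc, score = match_template(image, template)
print(loc, score)  # (1, 1) 0 -- exact match at row 1, column 1
```

In practice this brute-force search is far too slow and brittle, which is exactly why the field moved to AdaBoost cascades and then deep networks, as the paragraph above recounts.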
Because the existing face detection technology is relatively mature, a plurality of face detection algorithms can be selected for use, in some embodiments of the application, the used face detection algorithm is not specifically limited, and a user can select the existing face detection algorithm according to the needs.
In some embodiments of the present application, after the face image is obtained by performing face image recognition on an image containing a face, data normalization may be performed on the obtained face image. Face data normalization means making photos of the same person, shot under different imaging conditions such as illumination intensity, direction, distance, and posture, consistent by translating, rotating, scaling, and cropping the images; it can include geometric normalization and gray-level normalization.
Preferably, the data normalization of the face image may comprise the steps of:
1) detecting and positioning a human face;
2) determining positions of both eyes;
3) detecting and locating the mouth corners;
4) determining the accurate pupil positions from the eye regions, and performing horizontal scaling and in-plane rotation with the inter-pupil distance and its direction axis as the reference;
5) scaling the face image along the vertical axis according to the line connecting the midpoint of the inter-pupil axis and the midpoint of the mouth corners;
6) cropping the image according to the positions of the two eyes and the chin, using the three anchor points formed by the two pupils and the center of the mouth corners;
7) resizing the cropped image to a set size, completing data normalization of the face image.
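Steps 4 and 5 amount to deriving a rotation angle and a scale factor from the two pupil positions. The sketch below shows that computation; the coordinates and the target inter-pupil distance of 60 pixels are illustrative values not taken from the patent.

```python
import math

# Sketch of the rotation/scaling reference in steps 4-5: level the eyes by
# rotating by the inter-pupil angle, and scale so the pupils end up a fixed
# distance apart. Coordinates and target_dist are illustrative assumptions.

def rotation_and_scale(left_pupil, right_pupil, target_dist=60.0):
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    dist = math.hypot(dx, dy)                 # current inter-pupil distance
    angle = math.degrees(math.atan2(dy, dx))  # rotate by -angle to level the eyes
    scale = target_dist / dist                # scale pupils to target_dist apart
    return angle, scale

angle, scale = rotation_and_scale((100, 120), (160, 120))
print(angle, scale)  # 0.0 1.0 -- eyes already level and 60 px apart
```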
An example of a facial image for data normalization in some preferred embodiments of the present application is shown in fig. 2, and the data normalization of the facial image may specifically include the following steps:
1) calibrating characteristic points including two eyes and a mouth angle;
2) rotating the image according to coordinate values of the left eye and the right eye to ensure the consistency of the human face direction, wherein the distance between the two eyes is set as d, and the midpoint of the distance is set as O;
3) determining a rectangular feature region from the facial feature points and a geometric model: with O as the reference, take d to the left and to the right, and 0.5d and 1.5d in the two vertical directions, and crop this rectangle;
4) resizing the cropped region to a specified size, such as 100 × 100, completing data normalization of the face image.
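The crop geometry of step 3 can be written out directly. Reading "0.5d and 1.5d in the vertical directions" as 0.5d above and 1.5d below the eye line is our interpretation of the patent text; the numbers below are illustrative.

```python
# Sketch of step 3 above: given the eye midpoint O and inter-eye distance d,
# take d to the left/right of O, 0.5*d above and 1.5*d below (our reading of
# the patent's "0.5d and 1.5d"), yielding a 2d-by-2d crop box that step 4
# then resizes to e.g. 100x100.

def crop_box(O, d):
    ox, oy = O
    left, right = ox - d, ox + d
    top, bottom = oy - 0.5 * d, oy + 1.5 * d
    return (left, top, right, bottom)  # a box 2d wide and 2d tall

box = crop_box(O=(130, 120), d=60)
print(box)  # (70, 90.0, 190, 210.0)
```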
In step S102, the obtained face image is compared with a preset target face image, and face comparison similarity is determined. The obtained face image may be an image subjected to data normalization or an image without data normalization. The preset target face image is a collection of face images collected in advance for comparison of target persons, and at least comprises one target face image, and the target face image comprises preset corresponding person identity information, such as person identification, name, gender, age and the like. Preferably, the preset target face image can be implemented in the form of a face image information base, and the face image information base can be indexed to facilitate rapid comparison.
In some embodiments of the present application, comparing the obtained face image with a preset target face image to determine a face comparison similarity, may include the following steps: extracting the features of the face image to obtain face feature information of the face image; and comparing the face characteristic information of the face image with face characteristic information corresponding to a preset target face image to determine face comparison similarity. In order to simplify the comparison difficulty of the face images and improve the comparison efficiency, the face feature information of the face images is compared with the face feature information of a preset target face image, and the face comparison similarity is determined according to the comparison result of the face feature information. Specifically, the face features in the face image can be extracted through a face feature extraction algorithm, and the face feature extraction can be realized by adopting a deep learning algorithm, such as a deep convolutional neural network algorithm. Each target face image in the target face image set can obtain corresponding face feature information by performing face feature extraction in advance, so that the corresponding relation between the target face image and the face feature information is established.
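The patent does not fix a similarity metric for comparing face feature information. Cosine similarity is a common choice for feature vectors produced by a deep network and is used below purely as an illustrative stand-in; the vectors are made up.

```python
import math

# Illustrative face feature comparison: cosine similarity between a probe
# feature vector and a gallery feature vector. The patent only says feature
# information is compared; the metric and the vectors are our assumptions.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

probe = [0.2, 0.8, 0.1]
target = [0.2, 0.8, 0.1]  # hypothetical precomputed gallery feature
print(cosine_similarity(probe, target))  # ~1.0 for identical vectors
```

Precomputing gallery features, as the paragraph above describes, means only the probe image needs feature extraction at query time; comparison then reduces to these cheap vector operations.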
In addition, a plurality of image comparison algorithms can be adopted for comparing the face image with the preset target face image, such as an artificial neural network, a decision tree, a K-nearest neighbor, a support vector machine, a convolutional neural network and the like.
In some embodiments of the present application, after determining the face comparison similarity, the similarity corresponding to each target face image may be compared with a preset threshold (for example, 0.6); target face images scoring above the threshold are kept as candidate face images, forming the set of possible matches for the face image.
In step S103, scene recognition is performed on the image containing the face to acquire the scene information corresponding to the obtained face image. Before performing scene recognition, the face comparison similarity obtained in step S102 may be checked: if the face comparison similarity corresponding to some target face image exceeds a preset threshold (for example, 0.85), the face image matches that target face image with high confidence, scene recognition can be skipped, and the identity corresponding to the face image can be determined directly from the person identity information of that target face image. If the face comparison similarities for all target face images are below the threshold, scene recognition is performed and the scene comparison similarity is determined.
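The early-exit decision just described can be sketched as follows. The 0.85 threshold comes from the example in the text; the candidate scores and names are illustrative.

```python
# Sketch of the decision above: if the best face comparison similarity
# already exceeds the high threshold (0.85 in the text's example), accept
# that identity and skip scene recognition; otherwise fall through to the
# scene comparison of steps S103-S104. The candidates dict is illustrative.

HIGH_THRESHOLD = 0.85  # example value from the text

def decide(face_sims):
    """face_sims: {identity: face comparison similarity}."""
    best_id = max(face_sims, key=face_sims.get)
    if face_sims[best_id] >= HIGH_THRESHOLD:
        return best_id, False   # confident match, no scene step needed
    return None, True           # scene comparison required

identity, need_scene = decide({"person_a": 0.91, "person_b": 0.55})
print(identity, need_scene)  # person_a False
```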
In some embodiments of the present application, the scene information corresponding to the face image includes one or more of the following combinations: human body identification information corresponding to the human face image, human body environment information corresponding to the human face image and acquisition equipment information corresponding to the human face image. In this case, the image including the face often includes not only the face but also a human body corresponding to the face and a surrounding environment where the human body is located, and by additionally considering these pieces of information, the person identity can be better recognized. The human body identification information refers to some information that can be used to identify a human body, and may include, but is not limited to: the posture, the accessories, the clothes, the height and the like. The human environment information is some information for describing the environment where the human body is located, and may include, but is not limited to, a reference object, geographical location information, and the like. The acquisition device information is used to describe device-related information for acquiring the image including the face image, such as installation location information of the acquisition device.
Fig. 3 shows an example of an image containing a face according to some preferred embodiments of the present application. Performing scene recognition on fig. 3 yields the following scene information corresponding to the face image: indoor, at a corridor entrance, dark short hair, no hat, no glasses, no mask, white jacket, male, etc.
In some embodiments of the present application, scene recognition is performed on an image including a face to obtain scene information corresponding to the face image, and the following steps may be specifically adopted: carrying out target detection on an image containing a human face, and determining a target object in the image; and determining the corresponding information of the target object as scene information corresponding to the face image. In addition to the face, the image containing the face may include various target objects, and related information helpful for identifying the corresponding identity of the face exists in the target objects. Therefore, whether the target objects or the target object types exist in the image containing the human face can be detected by presetting some target objects related to identity recognition. The preset target object in the image is detected, and the detection can be realized through a convolutional neural network and an optimization network thereof.
Preferably, the target object in the image can be determined by performing target detection on the image containing the human face through a target segmentation algorithm and/or an object recognition algorithm. The object segmentation algorithm is used to divide the image into a plurality of specific regions with unique properties, and the obtained regions are the objects after image segmentation. The target segmentation algorithm can be roughly divided into the following types: the method includes a threshold method, a region growing method, edge detection, feature clustering, a histogram and the like, and in some embodiments of the present application, a suitable target segmentation algorithm may be selected according to user requirements, which is not specifically limited herein. The target segmentation algorithm can also comprise the steps of carrying out object identification on the segmented region object and determining the object of the segmented region; if the object recognition is not included in the target segmentation algorithm, the image region obtained by the target segmentation may be input to the object recognition algorithm to determine whether the region object is a preset target object. The object recognition is preferably to classify the obtained target object and determine the obtained target object as a preset target object.
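Of the segmentation families listed above, the threshold method is the simplest to illustrate: pixels above a threshold form foreground regions, which a subsequent object-recognition step would then classify. The grid and threshold below are made-up values, not from the patent.

```python
# Minimal threshold-based target segmentation, one of the families the text
# lists: mark pixels above a brightness threshold as foreground (1) and the
# rest as background (0). A later object-recognition step would classify the
# resulting regions. Pixel values and the threshold are illustrative.

def threshold_segment(image, threshold):
    """Return a binary mask: 1 for foreground, 0 for background."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

image = [
    [10, 12, 200, 210],
    [11, 13, 205, 220],
]
mask = threshold_segment(image, threshold=128)
print(mask)  # [[0, 0, 1, 1], [0, 0, 1, 1]]
```

Region growing, edge detection, feature clustering, and histogram methods refine the same idea; as the text notes, the choice among them is left to the user.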
In step S104, comparing the scene information corresponding to the face image with preset target scene information, and determining a scene comparison similarity. Here, the preset target scene information is a set of scene information collected in advance for performing target person comparison. Preferably, each piece of target scene information corresponds to a preset target face image, that is, each piece of target face image has corresponding preset target scene information.
In some embodiments of the application, the scene information corresponding to the face image is represented in text form, and the preset target scene information is also stored as text; the two are compared by text matching to obtain the scene comparison similarity. Each piece of target scene information may include multiple pieces of target scene sub-information. During comparison, scene sub-information in the scene information corresponding to the face image is matched against sub-information of the same type in the target scene information, yielding a comparison similarity per sub-information item. Preferably, the maximum of these sub-information comparison similarities is taken as the scene comparison similarity for that item of target scene information.
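The text-matching comparison above can be sketched as follows: score each sub-information pair of the same type, then take the maximum per the preferred "maximum value" rule. The attribute names and the exact-match scoring (1.0 on a match, else 0.0) are our illustrative assumptions; the patent does not fix a matching function.

```python
# Sketch of the scene comparison above: compare sub-information of the same
# type between probe and target scene descriptions (stored as text), score
# each pair 1.0 on an exact match else 0.0, and take the maximum as the
# scene comparison similarity. Attribute names and scoring are assumptions.

def scene_similarity(probe_info, target_info):
    scores = [
        1.0 if probe_info[key] == target_info[key] else 0.0
        for key in probe_info
        if key in target_info
    ]
    return max(scores) if scores else 0.0

probe = {"jacket": "white", "hair": "dark short", "glasses": "none"}
target = {"jacket": "white", "hair": "long", "glasses": "sunglasses"}
print(scene_similarity(probe, target))  # 1.0 -- the jacket sub-info matches
```

A fuzzier text metric (e.g. token overlap) could replace the exact match without changing the max-pooling structure.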
In step S105, the person identity corresponding to the face image is determined according to the face comparison similarity and the scene comparison similarity. In some embodiments of the present application, the method specifically includes the following steps: and determining comparison similarity according to the face comparison similarity and the scene comparison similarity, and determining the identity of the person corresponding to the face image according to the obtained comparison similarity.
In some embodiments of the present application, the comparison similarity S can be calculated by the following formula:
S = α·S1 + β·S2
where S1 is the face comparison similarity, S2 is the scene comparison similarity, α and β are weight coefficients, and α + β = 1.
Here, the scene comparison similarity S2 is the maximum of the scene sub-information comparison similarities; α and β can be adjusted according to actual working conditions.
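The weighted combination above transcribes directly into code. The weights 0.7/0.3 are illustrative; the patent only requires that they sum to 1 and says they can be tuned to working conditions.

```python
# Direct transcription of the formula above: S = alpha*S1 + beta*S2 with
# alpha + beta = 1. The weight values 0.7/0.3 are illustrative assumptions.

def combined_similarity(s1, s2, alpha=0.7, beta=0.3):
    assert abs(alpha + beta - 1.0) < 1e-9, "weights must sum to 1"
    return alpha * s1 + beta * s2

print(combined_similarity(0.6, 0.9))  # 0.7*0.6 + 0.3*0.9, approximately 0.69
```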
Here, each target face image corresponds to corresponding person identity information, and the person identity corresponding to the comparison similarity can be determined according to the target face image corresponding to the comparison similarity.
In some embodiments of the present application, when determining the person identity corresponding to the face image, the candidates may also be sorted by comparison similarity, and the several person identities with the highest comparison similarities returned as the final recognition result. Here, the final result is a list of possible identities for the user to review.
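The ranking step just described is a sort-and-truncate over scored candidates; the names, scores, and k=2 below are illustrative.

```python
# Sketch of the ranking above: sort candidate identities by combined
# comparison similarity and return the top k as the result list presented
# to the user. All names and scores are illustrative.

def top_k_identities(scored, k=3):
    """scored: {identity: comparison similarity} -> best k identities."""
    ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
    return [identity for identity, _ in ranked[:k]]

scored = {"person_a": 0.69, "person_b": 0.74, "person_c": 0.41, "person_d": 0.58}
print(top_k_identities(scored, k=2))  # ['person_b', 'person_a']
```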
In some preferred embodiments, the preset target face images may be implemented as a face comparison target library, and the preset target scene information as an auxiliary comparison target library; the auxiliary library extends the face library and can be associated with it through face image identifiers. During face comparison, the obtained face image is first compared against the face comparison target library T1; T1 records whose face comparison similarity S1 exceeds a preset threshold a form the comparison result set, denoted R1, sorted by face comparison similarity from high to low. If every face comparison similarity in R1 is below a preset threshold b, auxiliary comparison is performed: an auxiliary comparison subset Q is fetched from the auxiliary comparison target library T2 by face image identifier, the scene information corresponding to the face image is compared against Q, and Q records whose scene comparison similarity exceeds a preset threshold c form the comparison result set R2. The comparison similarity S is then calculated from the scene comparison similarities S2 of R2 and the face comparison similarities S1 of R1, and the order of the records in R1 is adjusted according to S to obtain the final result set R.
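The two-stage flow above (T1 with thresholds a and b, then T2 with threshold c and re-ranking by S) can be sketched end to end. All records, thresholds, and weights below are illustrative stand-ins, and the similarity scores are assumed to have been computed upstream by the face and scene comparison steps.

```python
# End-to-end sketch of the preferred two-stage comparison above: primary
# face comparison against library T1 (thresholds a, b), auxiliary scene
# comparison against T2 (threshold c) only when no face score reaches b,
# then re-ranking R1 by the combined score S = alpha*S1 + beta*S2.
# Thresholds, weights, and all scores are illustrative assumptions.

A, B, C = 0.6, 0.85, 0.5   # thresholds a, b, c
ALPHA, BETA = 0.7, 0.3     # weight coefficients, ALPHA + BETA = 1

def recognize(face_sims, scene_sims):
    """face_sims: {identity: S1}; scene_sims: {identity: S2}."""
    # R1: face candidates above threshold a, sorted by S1 descending
    r1 = sorted(((i, s) for i, s in face_sims.items() if s > A),
                key=lambda kv: kv[1], reverse=True)
    if not r1:
        return []
    if r1[0][1] >= B:          # confident face match: skip auxiliary step
        return [i for i, _ in r1]
    # Auxiliary step: keep S2 only above threshold c, combine, and re-rank
    combined = []
    for identity, s1 in r1:
        s2 = scene_sims.get(identity, 0.0)
        s2 = s2 if s2 > C else 0.0
        combined.append((identity, ALPHA * s1 + BETA * s2))
    combined.sort(key=lambda kv: kv[1], reverse=True)
    return [i for i, _ in combined]

face_sims = {"person_a": 0.70, "person_b": 0.72, "person_c": 0.40}
scene_sims = {"person_a": 0.90, "person_b": 0.10}
print(recognize(face_sims, scene_sims))
# scene evidence promotes person_a above the slightly better face match
```

Note how the structure realizes the claim: scene information only enters when the face evidence alone is inconclusive, which is precisely the poor-quality-image case the application targets.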
Some embodiments of the present application also provide an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, cause the apparatus to perform the aforementioned method of face recognition.
Some embodiments of the present application also provide a computer readable medium having stored thereon computer readable instructions executable by a processor to implement the aforementioned method of face recognition.
Compared with the prior art, the scheme provided by the present application compares the face image in an image containing a face with a preset target face image to determine a face comparison similarity, compares the scene information in that image with preset target scene information to determine a scene comparison similarity, and then determines the person identity corresponding to the face image from the two similarities. As a result, when a face image acquired in a complex environment is of poor quality, face recognition can still draw on the associated scene information, which improves the accuracy of face recognition.
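The weighted combination underlying this scheme (spelled out in claim 8) can be illustrated with a small sketch; the weight values and sample similarities below are illustrative only:

```python
def combined_similarity(s1: float, s2: float,
                        alpha: float = 0.7, beta: float = 0.3) -> float:
    """Fuse face similarity S1 and scene similarity S2 into S = alpha*S1 + beta*S2."""
    if abs(alpha + beta - 1.0) > 1e-9:
        raise ValueError("weight coefficients must satisfy alpha + beta = 1")
    return alpha * s1 + beta * s2

# A mediocre face match (0.6) backed by a strong scene match (0.9)
# outscores a slightly better face match (0.65) with no scene support:
# combined_similarity(0.6, 0.9)  -> ~0.69
# combined_similarity(0.65, 0.0) -> ~0.455
```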
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example using application-specific integrated circuits (ASICs), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs of the present application (including associated data structures) may be stored in a computer-readable recording medium, such as RAM, a magnetic or optical drive, a floppy disk, or the like. In addition, some steps or functions of the present application may be implemented in hardware, for example as circuitry that cooperates with the processor to perform the various steps or functions.
In addition, portions of the present application may be implemented as a computer program product, such as computer program instructions which, when executed by a computer, may invoke or provide methods and/or technical solutions according to the present application through the operation of that computer. The program instructions that invoke the methods of the present application may be stored on a fixed or removable recording medium, transmitted via a data stream in a broadcast or other signal-bearing medium, and/or stored in the working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises a device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the device to perform the methods and/or technical solutions according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A method of face recognition, wherein the method comprises:
carrying out face image recognition on an image containing a face to obtain a face image;
comparing the face image with a preset target face image to determine face comparison similarity;
carrying out scene recognition on the image containing the face to acquire scene information corresponding to the face image;
comparing the scene information corresponding to the face image with preset target scene information to determine scene comparison similarity;
and determining the identity of the person corresponding to the face image according to the face comparison similarity and the scene comparison similarity.
2. The method of claim 1, wherein the scene information corresponding to the face image comprises a combination of one or more of the following: human body identification information corresponding to the face image, human body environment information corresponding to the face image, and acquisition device information corresponding to the face image.
3. The method of claim 1, wherein the face image recognition of the image containing the face, after obtaining the face image, further comprises:
and carrying out data normalization processing on the face image.
4. The method of claim 1, wherein comparing the face image with a preset target face image to determine a face comparison similarity comprises:
extracting the features of the face image to obtain face feature information of the face image;
and comparing the face characteristic information of the face image with face characteristic information corresponding to a preset target face image to determine face comparison similarity.
5. The method of claim 1, wherein performing scene recognition on the image containing the face to obtain scene information corresponding to the face image comprises:
carrying out target detection on the image containing the human face, and determining a target object in the image;
and determining the corresponding information of the target object as the scene information corresponding to the face image.
6. The method of claim 5, wherein the performing target detection on the image containing the human face and determining the target object in the image comprises:
and carrying out target detection on the image containing the human face through a target segmentation algorithm and/or an object recognition algorithm to determine a target object in the image.
7. The method of claim 1, wherein determining the identity of the person corresponding to the face image according to the face comparison similarity and the scene comparison similarity comprises:
determining comparison similarity according to the face comparison similarity and the scene comparison similarity;
and determining the identity of the person corresponding to the face image according to the comparison similarity.
8. The method of claim 7, wherein the comparison similarity S is calculated by the formula:
S = α*S1 + β*S2
wherein S1 is the face comparison similarity, S2 is the scene comparison similarity, α and β are weight coefficients, and α + β = 1.
9. An apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, cause the apparatus to perform the method of any of claims 1 to 8.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 8.
CN202010301613.4A 2020-04-16 2020-04-16 Face recognition method and device Pending CN111488943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010301613.4A CN111488943A (en) 2020-04-16 2020-04-16 Face recognition method and device


Publications (1)

Publication Number Publication Date
CN111488943A (en) 2020-08-04

Family

ID=71798898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010301613.4A Pending CN111488943A (en) 2020-04-16 2020-04-16 Face recognition method and device

Country Status (1)

Country Link
CN (1) CN111488943A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136533A (en) * 2011-11-28 2013-06-05 汉王科技股份有限公司 Face recognition method and device based on dynamic threshold value
CN104408404A (en) * 2014-10-31 2015-03-11 小米科技有限责任公司 Face identification method and apparatus
CN107527040A (en) * 2017-09-07 2017-12-29 维沃移动通信有限公司 A kind of method and apparatus of recognition of face


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985360A (en) * 2020-08-05 2020-11-24 上海依图网络科技有限公司 Face recognition method, device, equipment and medium
CN111914769A (en) * 2020-08-06 2020-11-10 腾讯科技(深圳)有限公司 User validity judging method, device, computer readable storage medium and equipment
CN111914769B (en) * 2020-08-06 2024-01-26 腾讯科技(深圳)有限公司 User validity determination method, device, computer readable storage medium and equipment
CN112182008A (en) * 2020-10-27 2021-01-05 青岛以萨数据技术有限公司 System, method, terminal and medium for analyzing face picture data acquired by mobile terminal
CN112446362A (en) * 2020-12-16 2021-03-05 上海芯翌智能科技有限公司 Face picture file processing method and device
CN113486712A (en) * 2021-05-31 2021-10-08 内蒙古大学 Multi-face recognition method, system and medium based on deep learning
CN113486712B (en) * 2021-05-31 2022-09-06 内蒙古大学 Multi-face recognition method, system and medium based on deep learning

Similar Documents

Publication Publication Date Title
Singh et al. Face detection and recognition system using digital image processing
CN111488943A (en) Face recognition method and device
TW202006602A (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
US8064653B2 (en) Method and system of person identification by facial image
CN106778450B (en) Face recognition method and device
US20130163829A1 (en) System for recognizing disguised face using gabor feature and svm classifier and method thereof
Ali et al. A real-time deformable detector
Zhou et al. Histograms of categorized shapes for 3D ear detection
CN104143076B (en) The matching process of face shape and system
Manyam et al. Two faces are better than one: Face recognition in group photographs
US20120308141A1 (en) Information processing apparatus and method of processing information, storage medium and program
KR20170015639A (en) Personal Identification System And Method By Face Recognition In Digital Image
WO2019007253A1 (en) Image recognition method, apparatus and device, and readable medium
WO2022213396A1 (en) Cat face recognition apparatus and method, computer device, and storage medium
Kheirkhah et al. A hybrid face detection approach in color images with complex background
Wu et al. Privacy leakage of sift features via deep generative model based image reconstruction
Andiani et al. Face recognition for work attendance using multitask convolutional neural network (MTCNN) and pre-trained facenet
Mishra et al., "Face Recognition Approaches: A Survey"
JP3962517B2 (en) Face detection method and apparatus, and computer-readable medium
US20240087352A1 (en) System for identifying companion animal and method therefor
Paul et al. Automatic adaptive facial feature extraction using CDF analysis
Granda et al. Face recognition systems in math classroom through computer vision traditional techniques
KR100711223B1 (en) Face recognition method using Zernike/LDA and recording medium storing the method
Mall et al. A neural network based face detection approach
Kaur et al. An analysis on gender classification and age estimation approaches

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200804