CN113657457A - Local face matching method and system, storage medium and electronic equipment

Info

Publication number
CN113657457A
Authority
CN
China
Prior art keywords
face
features
picture
human
range
Prior art date
Legal status
Pending
Application number
CN202110848435.1A
Other languages
Chinese (zh)
Inventor
苏安炀
唐大闰
Current Assignee
Shanghai Minglue Artificial Intelligence Group Co Ltd
Original Assignee
Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority to CN202110848435.1A
Publication of CN113657457A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The application discloses a local face matching method and system, a storage medium, and an electronic device. The matching method comprises the following steps: a preprocessing step: preprocessing a pre-collected, unoccluded first face picture to obtain a face sample, extracting features from the face sample with a neural network to obtain first face features, and storing the first face features in a database; a face feature acquisition step: acquiring a second face picture in real time, preprocessing it, and then extracting second face features; a matching step: comparing the second face features with the first face features in sequence, and outputting the first face features at the closest distance as the matching result for the second face picture. The face around the occlusion is cropped, re-spliced, and input into the network structure, enabling application in different scenes.

Description

Local face matching method and system, storage medium and electronic equipment
Technical Field
The invention belongs to the field of local face matching, and particularly relates to a local face matching method and system, a storage medium, and an electronic device.
Background
Face feature extraction with deep learning typically follows this flow: face detection, face key-point detection, face correction, face feature extraction, and face feature comparison. Specifically, the face recognition process first detects the position of each face in a picture and crops it, then locates key points such as the facial features, computes the face pose from the geometric relationships among the key points, and corrects the picture accordingly. The cropped, corrected frontal face picture is input into a face feature network to obtain a face feature vector. Finally, face feature vectors are compared to decide whether two faces belong to the same person.
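For orientation, here is a minimal Python sketch of that pipeline (not from the patent). `detect_faces`, `detect_landmarks`, `align_face`, and `embed_net` are hypothetical stand-ins for a face detector, a landmark model, an alignment routine, and an embedding network; only the control flow mirrors the steps above.

```python
# Hypothetical pipeline sketch: detection -> key points -> correction ->
# feature extraction -> feature comparison. The model callables are placeholders.
import numpy as np

def recognize(image, gallery, detect_faces, detect_landmarks, align_face, embed_net):
    """gallery maps person_id -> L2-normalised feature vector."""
    results = []
    for box in detect_faces(image):               # 1. face detection
        landmarks = detect_landmarks(image, box)  # 2. key-point detection
        face = align_face(image, box, landmarks)  # 3. face correction
        feat = embed_net(face)                    # 4. feature extraction
        feat = feat / np.linalg.norm(feat)
        # 5. feature comparison: closest gallery entry by cosine similarity
        best = max(gallery, key=lambda pid: float(feat @ gallery[pid]))
        results.append((box, best))
    return results
```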
Human head detection: mainly applied in scenes such as crowd counting and human body tracking. Specifically, video captured by a surveillance camera is sliced along the time dimension into a sequence of pictures; a deep-learning object detection network is then invoked to analyse the head feature information in each picture and output the head positions.
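As a rough illustration (an assumption, not the patent's code), the time slicing can be done with OpenCV's `cv2.VideoCapture`; `head_detector` stands in for the deep-learning detection network.

```python
# Illustrative sketch: sample frames from a surveillance video and run a
# (hypothetical) head-detection network on each sampled frame.
import cv2

def detect_heads(video_path, head_detector, stride=25):
    cap = cv2.VideoCapture(video_path)
    detections, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:                        # slice along the time dimension
            detections.append(head_detector(frame))  # head positions for this frame
        idx += 1
    cap.release()
    return detections
```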
Face feature extraction: a neural-network mechanism operating on specific features of the face. Face feature extraction, also known as face characterization, is the process of building a feature model of a face. Methods for extracting face features fall into two main categories: knowledge-based characterization methods, and characterization methods based on algebraic features or statistical learning. This patent uses a characterization method based on statistical learning.
In patent CN112418190A, a preset eye key-point model is trained to perform face key-point detection on a global sample face image set, yielding a local sample face image set. That local set is used to train a server-side medical-mask face recognition model; a mobile-side model is then trained with a knowledge distillation algorithm and issued to the mobile terminal, which uses it to recognize a face to be recognized and determine its identity information. In this way the impact of masks on face recognition accuracy is reduced as far as possible, and recognition accuracy is improved without requiring the subject to remove the mask.
CN112418190A shares a design concept with the present patent, namely cropping the human eye region. The present patent, however, does not train the model with a knowledge distillation algorithm; it performs face recognition directly on the cropped eye-detail picture.
The prior art generally targets scenes such as face-gate access control and camera surveillance. The application scene of this patent is different: recognition and analysis of faces in existing photos and photo libraries, specifically face matching under occlusion.
Disclosure of Invention
The embodiments of the application provide a local face matching method and system, a storage medium, and an electronic device, aiming at least to solve the problem that existing local face matching methods are limited in application scene.
The invention provides a local face matching method, comprising the following steps:
a preprocessing step: preprocessing a pre-collected, unoccluded first face picture to obtain a face sample, extracting features from the face sample with a neural network to obtain first face features, and storing the first face features in a database;
a face feature acquisition step: acquiring a second face picture in real time, preprocessing it, and then extracting second face features;
a matching step: comparing the second face features with the first face features in sequence, and outputting the first face features at the closest distance as the matching result for the second face picture.
In the above matching method, the preprocessing step includes:
a detection step: detecting the first face picture to obtain all face ranges in the first face picture;
an acquisition step: performing key-point detection on the whole face range, and obtaining a local face range from the key-point positions;
a correction step: correcting the local face range and the whole face range;
a splicing step: splicing the local face range and the whole face range to obtain the final face sample;
a feature extraction step: extracting features from the final face sample with the neural network to obtain the first face features and storing them in the database.
In the above matching method, the face feature acquisition step includes:
a face extraction step: collecting the second face picture and extracting second face features for each face in it;
a comparison step: comparing the feature distance between the second face features and each of the first face features to obtain the first face features at the shortest distance.
In the above matching method, the second face picture is an occluded face picture.
The invention also provides a local face matching system, comprising:
a preprocessing module, which preprocesses a pre-collected, unoccluded first face picture to obtain a face sample, extracts features from the face sample with a neural network to obtain first face features, and stores the first face features in a database;
a face feature acquisition module, which acquires a second face picture in real time, preprocesses it, and then extracts second face features;
a matching module, which compares the second face features with the first face features in sequence and outputs the first face features at the closest distance as the matching result for the second face picture.
In the above matching system, the preprocessing module comprises:
a detection unit, which detects the first face picture to obtain all face ranges in the first face picture;
an acquisition unit, which performs key-point detection on the whole face range and obtains a local face range from the key-point positions;
a correction unit, which corrects the local face range and the whole face range;
a splicing unit, which splices the local face range and the whole face range to obtain the final face sample;
a feature extraction unit, which extracts features from the final face sample with the neural network to obtain the first face features and stores them in the database.
In the above matching system, the face feature acquisition module comprises:
a face extraction unit, which collects the second face picture and extracts second face features for each face in it;
a comparison unit, which compares the feature distance between the second face features and each of the first face features to obtain the first face features at the shortest distance.
In the above matching system, the second face picture is an occluded face picture.
An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements any of the matching methods described above when executing the computer program.
A storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements any of the matching methods described above.
The beneficial effects of the invention are as follows:
The invention belongs to the field of computer vision within deep learning. It provides a matching method for occluded faces: the technical route is to crop the face around the occlusion, re-splice it, and input it into the network structure; multiple faces can also be extracted from one photo for matching. The method is therefore applicable to scenes such as photo albums and face de-occlusion.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of it, illustrate embodiments of the application and, together with the description, serve to explain the application without limiting it.
In the drawings:
FIG. 1 is a flowchart of the local face matching method of the present invention;
FIG. 2 is a flowchart of the sub-steps of step S1 of the present invention;
FIG. 3 is a flowchart of the sub-steps of step S2 of the present invention;
FIG. 4 is a schematic diagram of global and local face features of the present invention;
FIG. 5 is a schematic diagram of the local face matching system of the present invention;
FIG. 6 is a frame diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a", "an", "the", and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. As used in this application, the terms "including", "comprising", "having", and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a list of steps or modules (units) is not limited to the listed steps or units but may include other steps or units not expressly listed or inherent to such process, method, product, or device. Words such as "connected" and "coupled" in this application are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following associated objects. The terms "first", "second", "third", and the like merely distinguish similar objects and do not denote a particular ordering.
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Before describing in detail the various embodiments of the present invention, the core inventive concepts of the present invention are summarized and described in detail by the following several embodiments.
The first embodiment is as follows:
Referring to fig. 1, fig. 1 is a flowchart of the local face matching method. As shown in fig. 1, the local face matching method of the present invention includes:
preprocessing step S1: preprocessing a pre-collected, unoccluded first face picture to obtain a face sample, extracting features from the face sample with a neural network to obtain first face features, and storing the first face features in a database;
face feature acquisition step S2: acquiring a second face picture in real time, preprocessing it, and then extracting second face features;
matching step S3: comparing the second face features with the first face features in sequence, and outputting the first face features at the closest distance as the matching result for the second face picture.
Referring to fig. 2, fig. 2 is a flowchart of the preprocessing step S1. As shown in fig. 2, the preprocessing step S1 includes:
a detection step S11: detecting the first face picture to obtain all face ranges in the first face picture;
an acquisition step S12: performing key-point detection on the whole face range, and obtaining a local face range from the key-point positions;
a correction step S13: correcting the local face range and the whole face range (one common alignment approach is sketched after this list);
a splicing step S14: splicing the local face range and the whole face range to obtain the final face sample;
a feature extraction step S15: extracting features from the final face sample with the neural network to obtain the first face features and storing them in the database.
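For the correction step S13, the patent does not specify an alignment method; a common minimal choice, assumed here, is to rotate the face so the line between the eyes is horizontal.

```python
# Assumed face correction: rotate around the eye midpoint so that the
# eye line becomes horizontal (a minimal two-point alignment).
import cv2
import numpy as np

def correct_face(image, left_eye, right_eye):
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # tilt of the eye line
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)  # 2x3 rotation matrix
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rot, (w, h))
```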
Referring to fig. 3, fig. 3 is a flowchart of the face feature acquisition step S2. As shown in fig. 3, the face feature acquisition step S2 includes:
a face extraction step S21: collecting the second face picture and extracting second face features for each face in it;
a comparison step S22: comparing the feature distance between the second face features and each of the first face features to obtain the first face features at the shortest distance.
The second face picture is an occluded face picture.
Specifically, the technical route of the invention is as follows:
1. As shown in fig. 4, a user inputs a face picture without a mask; this picture (the first picture) is preprocessed to obtain a first face;
2. Features of the first face are extracted with a neural network; they cover both the global face and the local face and are stored in a base library (gallery), which may hold multiple faces under multiple user IDs;
3. The user uploads a captured picture, which is preprocessed to obtain the number of faces and all face positions; several faces may be present, and each case is handled separately:
3.1 when there are multiple faces, the features of each face (second face) are extracted and their distances to the base library features are compared in sequence;
3.2 when there is a single face, the features of the user's face photo (second face) are extracted directly;
4. The distances to the base library face features are compared in sequence, and for each detected face the closest entry is returned as the matching result, as sketched below.
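A minimal sketch of steps 3 to 4, assuming the features are already extracted. Euclidean distance is used for concreteness; the patent speaks only of feature distance.

```python
# Sketch: match each query face feature to the closest base-library entry.
import numpy as np

def match(query_feats, base_library):
    """query_feats: (n, d) array; base_library: {user_id: (d,) feature vector}."""
    ids = list(base_library)
    bank = np.stack([base_library[i] for i in ids])  # (m, d) gallery matrix
    results = []
    for feat in query_feats:
        dists = np.linalg.norm(bank - feat, axis=1)  # distance to every entry
        results.append(ids[int(np.argmin(dists))])   # closest feature wins
    return results
```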
The preprocessing flow comprises the following steps:
1. Face detection: all faces in the picture are detected, each yielding a global face range;
2. Key-point detection is performed on each detected face, and a local face range is obtained from the positions of key points such as the eyes and the nose;
3. The face picture is corrected using the key-point detection results;
4. The global face picture and the local face picture are spliced to obtain the final face picture; a sketch of this crop-and-splice follows.
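The patent fixes neither the crop geometry nor the splice layout. The sketch below therefore makes assumptions: 5-point landmarks, an eye-region strip as the local face range, and the strip stacked under a resized global face to form a square 112x112 network input.

```python
# Assumed crop-and-splice: an eye-region strip (local face range) stacked
# under the resized global face. Geometry and layout are illustrative guesses.
import cv2
import numpy as np

def crop_and_splice(corrected_face, landmarks, size=112):
    (lx, ly), (rx, ry) = landmarks["left_eye"], landmarks["right_eye"]
    eye_y = min(ly, ry)
    nose_y = landmarks["nose"][1]
    top = max(0, int(eye_y - 0.5 * (nose_y - eye_y)))  # margin above the eyes
    local = corrected_face[top:int(nose_y), :]         # local face range
    global_part = cv2.resize(corrected_face, (size, size - size // 3))
    local_part = cv2.resize(local, (size, size // 3))
    return np.vstack([global_part, local_part])        # final spliced sample
```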
Training process:
As with a conventional face recognition model, the input is the spliced picture shown in the figure and the output is a face ID; the label is the real face ID number. The data set used is collected Asian face data, and the loss function is ArcFace; a sketch of an ArcFace loss head is given below.
Embodiment two:
referring to fig. 5, fig. 5 is a schematic structural diagram of a local face matching system according to the present invention. As shown in fig. 5, the present invention provides a local face matching system, which includes:
the system comprises a preprocessing module, a database and a display module, wherein the preprocessing module is used for preprocessing a first face picture which is acquired in advance and is not shielded to obtain a face sample, extracting features of the face sample by using a neural network to obtain a first face feature and storing the first face feature in the database;
the face feature acquisition module acquires a second face picture in real time, and acquires second face features after preprocessing the second face picture:
and the matching module compares the second human face features with the first human face features in sequence, and outputs the detected first human face features with the closest distance as the matching result of the second human face picture.
Wherein the preprocessing module comprises:
a detection unit, which detects the first face picture to obtain all face ranges in the first face picture;
an acquisition unit, which performs key-point detection on the whole face range and obtains a local face range from the key-point positions;
a correction unit, which corrects the local face range and the whole face range;
a splicing unit, which splices the local face range and the whole face range to obtain the final face sample;
a feature extraction unit, which extracts features from the final face sample with the neural network to obtain the first face features and stores them in the database.
Wherein the face feature acquisition module comprises:
a face extraction unit, which collects the second face picture and extracts second face features for each face in it;
a comparison unit, which compares the feature distance between the second face features and each of the first face features to obtain the first face features at the shortest distance.
The second face picture is an occluded face picture.
Embodiment three:
referring to fig. 6, this embodiment discloses a specific implementation of an electronic device. The electronic device may include a processor 81 and a memory 82 storing computer program instructions.
Specifically, the processor 81 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory 82 may include mass storage for data or instructions. By way of example and not limitation, the memory 82 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Where appropriate, the memory 82 may include removable or non-removable (or fixed) media and may be internal or external to the data processing apparatus. In a particular embodiment, the memory 82 is non-volatile memory. In particular embodiments, the memory 82 includes Read-Only Memory (ROM) and Random Access Memory (RAM). Where appropriate, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), flash memory, or a combination of two or more of these. The RAM may be Static Random-Access Memory (SRAM) or Dynamic Random-Access Memory (DRAM), where the DRAM may be Fast Page Mode DRAM (FPM DRAM), Extended Data Out DRAM (EDO DRAM), Synchronous DRAM (SDRAM), and the like.
The memory 82 may be used to store or cache data files to be processed and/or communicated, as well as computer program instructions to be executed by the processor 81.
The processor 81 reads and executes the computer program instructions stored in the memory 82 to implement any one of the above-described local face matching methods.
In some of these embodiments, the electronic device may also include a communication interface 83 and a bus 80. As shown in fig. 6, the processor 81, the memory 82, and the communication interface 83 are connected via the bus 80 to complete communication therebetween.
The communication interface 83 is used to implement communication between the modules, apparatuses, units, and/or devices in the embodiments of the present application. The communication interface 83 can also carry out data communication with external components such as external devices, image/data acquisition devices, databases, external storage, and image/data processing workstations.
The bus 80 includes hardware, software, or both, coupling the components of the electronic device to one another. The bus 80 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, and a local bus. By way of example and not limitation, the bus 80 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. The bus 80 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, the application contemplates any suitable bus or interconnect.
The electronic device may implement the local face matching methods described in connection with figs. 1 to 3.
In addition, in combination with the local face matching method in the above embodiments, an embodiment of the present application may provide a computer-readable storage medium to implement the method. The computer-readable storage medium has computer program instructions stored on it; when executed by a processor, the instructions implement any of the local face matching methods in the above embodiments.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
In summary, the beneficial effect of the invention is that this patent provides a matching method for occluded faces. The proposed technical route is to crop the face around the occlusion, re-splice it, and input it into the network structure; multiple faces can also be extracted from one photo for matching, so the method can be applied to scenes such as photo albums and face de-occlusion.
The above embodiments express only several implementations of the present application; their description is specific and detailed, but shall not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of the present invention shall be subject to the appended claims.

Claims (10)

1. A local face matching method, characterized by comprising the following steps:
a preprocessing step: preprocessing a pre-collected, unoccluded first face picture to obtain a face sample, extracting features from the face sample with a neural network to obtain first face features, and storing the first face features in a database;
a face feature acquisition step: acquiring a second face picture in real time, preprocessing it, and then extracting second face features;
a matching step: comparing the second face features with the first face features in sequence, and outputting the first face features at the closest distance as the matching result for the second face picture.
2. The matching method of claim 1, wherein the preprocessing step comprises:
a detection step: detecting the first face picture to obtain all face ranges in the first face picture;
an acquisition step: performing key-point detection on the whole face range, and obtaining a local face range from the key-point positions;
a correction step: correcting the local face range and the whole face range;
a splicing step: splicing the local face range and the whole face range to obtain the final face sample;
a feature extraction step: extracting features from the final face sample with the neural network to obtain the first face features and storing them in the database.
3. The matching method of claim 1, wherein the face feature acquisition step comprises:
a face extraction step: collecting the second face picture and extracting second face features for each face in it;
a comparison step: comparing the feature distance between the second face features and each of the first face features to obtain the first face features at the shortest distance.
4. The matching method of claim 1, wherein the second face picture is an occluded face picture.
5. A local face matching system, characterized by comprising:
a preprocessing module, which preprocesses a pre-collected, unoccluded first face picture to obtain a face sample, extracts features from the face sample with a neural network to obtain first face features, and stores the first face features in a database;
a face feature acquisition module, which acquires a second face picture in real time, preprocesses it, and then extracts second face features;
a matching module, which compares the second face features with the first face features in sequence and outputs the first face features at the closest distance as the matching result for the second face picture.
6. The matching system of claim 5, wherein the preprocessing module comprises:
a detection unit, which detects the first face picture to obtain all face ranges in the first face picture;
an acquisition unit, which performs key-point detection on the whole face range and obtains a local face range from the key-point positions;
a correction unit, which corrects the local face range and the whole face range;
a splicing unit, which splices the local face range and the whole face range to obtain the final face sample;
a feature extraction unit, which extracts features from the final face sample with the neural network to obtain the first face features and stores them in the database.
7. The matching system of claim 5, wherein the face feature acquisition module comprises:
a face extraction unit, which collects the second face picture and extracts second face features for each face in it;
a comparison unit, which compares the feature distance between the second face features and each of the first face features to obtain the first face features at the shortest distance.
8. The matching system of claim 5, wherein the second face picture is an occluded face picture.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the matching method according to any one of claims 1 to 4 when executing the computer program.
10. A storage medium on which a computer program is stored which, when being executed by a processor, carries out the matching method according to any one of claims 1 to 4.
CN202110848435.1A (filed 2021-07-27, priority 2021-07-27): Local face matching method and system, storage medium and electronic equipment. Status: Pending. Publication: CN113657457A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110848435.1A CN113657457A (en) 2021-07-27 2021-07-27 Local face matching method and system, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113657457A 2021-11-16

Family

ID=78490675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110848435.1A Pending CN113657457A (en) 2021-07-27 2021-07-27 Local face matching method and system, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113657457A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991412A (en) * 2019-12-20 2020-04-10 北京百分点信息科技有限公司 Face recognition method and device, storage medium and electronic equipment
CN111626243A (en) * 2020-05-28 2020-09-04 上海锘科智能科技有限公司 Identity recognition method and device for face covered by mask and storage medium
CN112149571A (en) * 2020-09-24 2020-12-29 深圳龙岗智能视听研究院 Face recognition method based on neural network affine transformation
CN112232117A (en) * 2020-09-08 2021-01-15 深圳微步信息股份有限公司 Face recognition method, face recognition device and storage medium
CN112257503A (en) * 2020-09-16 2021-01-22 深圳微步信息股份有限公司 Sex age identification method, device and storage medium
CN112487886A (en) * 2020-11-16 2021-03-12 北京大学 Method and device for identifying face with shielding, storage medium and terminal
CN113158883A (en) * 2021-04-19 2021-07-23 汇纳科技股份有限公司 Face recognition method, system, medium and terminal based on regional attention


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination