CN116844199A - Face recognition method, device and equipment - Google Patents

Face recognition method, device and equipment

Info

Publication number
CN116844199A
Authority
CN
China
Prior art keywords
face
face image
class center
feature
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310658487.1A
Other languages
Chinese (zh)
Inventor
赵晨旭
孔紫剑
唐大闰
姜平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Minglue Zhaohui Technology Co Ltd
Original Assignee
Beijing Minglue Zhaohui Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Minglue Zhaohui Technology Co Ltd filed Critical Beijing Minglue Zhaohui Technology Co Ltd
Priority to CN202310658487.1A priority Critical patent/CN116844199A/en
Publication of CN116844199A publication Critical patent/CN116844199A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses a face recognition method in which the face feature information of a face image to be recognized is matched against each class center feature in a preset face feature library to obtain the target class center feature corresponding to the face image to be recognized, and the face recognition result of the face image to be recognized is then determined according to the target class center feature. By directly matching the face feature information of the input face image to be recognized against the class center features in the preset face feature library, the application achieves a higher recognition rate and a faster recognition speed.

Description

Face recognition method, device and equipment
Technical Field
The application relates to the technical field of image processing, in particular to a face recognition method, a face recognition device and face recognition equipment.
Background
At present, a gate-based face access control system is affected in practical application scenarios by factors such as the performance of the face capture device, the lighting environment, biological characteristics (for example, face pose and expression), and occlusion by objects. The input face image is therefore complex and contains considerable noise, so the face recognition result is inaccurate or the face cannot be recognized at all. In addition, the face recognition process of such an access control system generally compares the captured face image with the face images stored in a face management library in order to determine the face recognition result.
However, because the face management library contains many face images that must be compared, this approach not only increases the time required for face recognition but is also likely to increase the false recognition rate. A face recognition method with a higher recognition rate and a faster recognition speed is therefore needed.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended neither to identify key or critical elements nor to delineate the scope of such embodiments; it serves as a prelude to the more detailed description that follows.
The embodiments of the disclosure provide a face recognition method, device, and equipment, so that a higher recognition rate and a faster recognition speed can be achieved by directly matching the face feature information of an input face image to be recognized against each class center feature in a preset face feature library.
In some embodiments, the face recognition method includes:
acquiring a face image to be recognized;
determining face feature information of the face image to be recognized according to the face image to be recognized;
matching the face feature information of the face image to be recognized with each class center feature in a preset face feature library to obtain a target class center feature corresponding to the face image to be recognized;
and determining a face recognition result of the face image to be recognized according to the target class center feature.
In some embodiments, determining the face feature information of the face image to be recognized according to the face image to be recognized includes:
inputting the face image to be recognized into a preset face recognition model to obtain the face feature information of the face image to be recognized.
In some embodiments, each class center feature in the preset face feature library corresponds to one user identifier, and different class center features correspond to different user identifiers.
In some embodiments, the class center features in the preset face feature library are generated by:
acquiring a face image corresponding to a user identifier;
extracting the class center feature of the face image by using a preset face recognition model;
and taking the class center feature of the face image as a class center feature in the preset face feature library.
In some embodiments, extracting the class center feature of the face image by using a preset face recognition model includes:
inputting the face image into the preset face recognition model, and taking the weight w of the last fully connected layer of the face recognition model as the class center feature of the face image.
In some embodiments, the face recognition model is an ArcFace model.
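For reference, the ArcFace additive angular margin loss that gives the last fully connected layer this class-center interpretation is commonly written as follows (s is a scale factor and m the angular margin; the concrete hyperparameter values used in this application are not specified here):

$$
\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{\,s\cos(\theta_{y_i}+m)}}{e^{\,s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{\,s\cos\theta_j}},\qquad \cos\theta_j=\frac{w_j^{\top}x_i}{\lVert w_j\rVert\,\lVert x_i\rVert}
$$

where $x_i$ is the embedding of the i-th training face, $y_i$ its identity label, and $w_j$ the weight vector of identity j in the last fully connected layer. Minimizing this loss pulls every embedding toward the weight vector of its own identity, which is why each $w_j$ can be used directly as a class center feature.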
In some embodiments, matching the face feature information of the face image to be recognized with each class center feature in a preset face feature library to obtain the target class center feature corresponding to the face image to be recognized includes:
calculating the distance between the face feature information of the face image to be recognized and each class center feature in the preset face feature library;
and determining the target class center feature corresponding to the face image to be recognized according to the class center feature whose distance satisfies a preset condition.
In some embodiments, determining the face recognition result of the face image to be recognized according to the target class center feature includes:
taking the user identifier corresponding to the target class center feature as the face recognition result of the face image to be recognized.
In some embodiments, the face recognition device comprises a processor and a memory storing program instructions, and the processor is configured to perform the face recognition method described above when executing the program instructions.
In some embodiments, the apparatus comprises the face recognition device described above.
The face recognition method, device, and equipment provided by the embodiments of the disclosure can achieve the following technical effects: the face feature information of the face image to be recognized is matched against each class center feature in the preset face feature library to obtain the target class center feature corresponding to the face image to be recognized, and the face recognition result of the face image to be recognized is then determined according to the target class center feature. By directly matching the face feature information of the input face image to be recognized against the class center features in the preset face feature library, a higher recognition rate and a faster recognition speed can be obtained.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the disclosure;
fig. 2 is a schematic flow chart of a face recognition method according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of a face recognition device according to an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and techniques of the disclosed embodiments can be understood in more detail, a more particular description of the embodiments of the disclosure, briefly summarized above, may be had by reference to the appended drawings, which are not intended to limit the embodiments of the disclosure. In the following description, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments; however, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawings.
The terms "first", "second", and the like in the description and claims of the embodiments of the disclosure and in the above figures are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances so that the embodiments of the disclosure described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion.
The term "plurality" means two or more, unless otherwise indicated.
In the embodiments of the present disclosure, the character "/" indicates that the objects before and after it are in an "or" relationship. For example, A/B represents: A or B.
The term "and/or" describes an association relationship between objects and indicates that three relationships may exist. For example, A and/or B represents: A, or B, or both A and B.
Referring to fig. 1, an embodiment of the disclosure provides a face recognition method, including:
s101: and acquiring a face image to be identified.
In this embodiment, the face image to be identified may be understood as a face image that needs to be identified by the user. In one implementation, the face image to be identified may be acquired based on an access control system, for example, the access control system includes a camera, and the face image of the user may be acquired through the camera and used as the face image to be identified.
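As a minimal illustrative sketch (not taken from the application), a single frame can be grabbed from the gate camera with OpenCV and handed to the recognition pipeline as the face image to be recognized; the device index 0 is an assumption:

```python
import cv2

# Grab one frame from the gate camera (hypothetical device index 0).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("failed to capture a frame from the camera")

# `frame` is a BGR numpy array; the following steps treat it as the face image to be recognized.
```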
S102: and determining face characteristic information of the face image to be recognized according to the face image to be recognized.
In this embodiment, after the face image to be recognized is obtained, face feature information of the face image to be recognized may be extracted first, so that face recognition may be performed by using the face feature information of the face image to be recognized later.
In one implementation, the face image to be recognized may be input into a preset face recognition model to obtain the face feature information of the face image to be recognized. The face recognition model may be an ArcFace model. As shown in fig. 2, the face image to be recognized (i.e., a test face image) may be input into an ArcFace model to obtain its face feature information.
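A minimal sketch of this feature extraction step, assuming a hypothetical `arcface_backbone` (for example, a ResNet trained with the ArcFace loss) that maps an aligned 112x112 face crop to a 512-dimensional embedding; the preprocessing constants follow common ArcFace practice and are assumptions, not values taken from the application:

```python
import numpy as np
import torch

def extract_face_feature(backbone: torch.nn.Module, face_crop: np.ndarray) -> np.ndarray:
    """Return an L2-normalized embedding for one aligned face crop (H x W x 3, uint8)."""
    x = torch.from_numpy(face_crop).float().permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW
    x = (x - 127.5) / 128.0                      # common ArcFace-style input normalization
    with torch.no_grad():
        emb = backbone(x).squeeze(0).numpy()     # e.g. shape (512,)
    return emb / (np.linalg.norm(emb) + 1e-12)   # normalize so matching can use cosine similarity
```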
S103: and matching the face feature information of the face image to be identified with each class center feature in a preset face feature library to obtain a target class center feature corresponding to the face image to be identified.
In this embodiment, the preset face feature library may store a plurality of class center features, where one class center feature in the preset face feature library corresponds to one user identifier, and the user identifiers corresponding to each class center feature are different.
Next, the way the class center features in the preset face feature library are generated is introduced. Specifically, a face image corresponding to a user identifier may be obtained first. For example, as shown in fig. 2, the face base library may be grouped by identity (i.e., by user identifier), and a face quality module with a suitable threshold may be used to screen high-quality face images from the base library, so that a high-quality face image test library can be built per user identity; the selected face images have properties such as high resolution, clear facial texture details, and a frontal pose. Building the face feature library from such high-quality face images makes information storage and feature matching more convenient than in existing methods, avoids extensive maintenance and iterative updating of the existing data in the face feature library, and gives more obvious advantages in face recognition speed and accuracy.
Then, a preset face recognition model (such as an ArcFace model) may be used to extract the class center feature of the face image; for example, the face image may be input into the preset face recognition model, and the weight w of the last fully connected layer of the face recognition model may be taken as the class center feature of the face image. The class center feature of the face image can then be used as a class center feature in the preset face feature library. It should be noted that this embodiment exploits the intra-class compactness and inter-class separability of the ArcFace model and builds a preset face feature library that uses the class center feature of each face as the entry for its identity (i.e., its user identifier). This enables fast feature matching (i.e., matching the face feature information of the face image to be recognized against each class center feature in the preset face feature library), removes the prior-art process of traversing a face library and extracting features for every stored image, and avoids dependence on the quality of registered face images, so that face recognition accuracy can be effectively improved and recognition time reduced.
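A minimal sketch of building such a library, assuming a hypothetical model whose last fully connected (classification) layer is exposed as `model.fc` with one weight row per enrolled identity; the attribute name and the ordering of `user_ids` are assumptions for illustration:

```python
import numpy as np

def build_class_center_library(model, user_ids):
    """Map each user identifier to the L2-normalized weight row of the last fully connected layer."""
    w = model.fc.weight.detach().cpu().numpy()                   # shape (num_identities, embed_dim)
    w = w / (np.linalg.norm(w, axis=1, keepdims=True) + 1e-12)   # normalize each class center
    assert len(user_ids) == w.shape[0], "one user identifier per class center"
    return {uid: w[i] for i, uid in enumerate(user_ids)}         # the preset face feature library
```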
In this embodiment, after the face feature information of the face image to be recognized is obtained, it may be matched against each class center feature in the preset face feature library, that is, compared with each class center feature in the library, to obtain the target class center feature corresponding to the face image to be recognized. Specifically, the distance between the face feature information of the face image to be recognized and each class center feature in the preset face feature library may first be calculated. It should be noted that the smaller this distance is, the more similar the face image to be recognized is to the corresponding class; conversely, the larger the distance is, the less similar they are. The target class center feature corresponding to the face image to be recognized may then be determined from the class center feature whose distance satisfies a preset condition (for example, the distance is the smallest, or the distance is smaller than a preset distance threshold); for example, the class center feature whose distance satisfies the preset condition may be taken as the target class center feature corresponding to the face image to be recognized.
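A minimal sketch of this matching step using cosine similarity (equivalent to Euclidean distance on L2-normalized vectors); the similarity threshold of 0.35 is an illustrative assumption rather than a value from the application:

```python
import numpy as np

def match_face(query_emb, library, sim_threshold=0.35):
    """Return the user identifier of the best-matching class center, or None if no center is close enough."""
    user_ids = list(library.keys())
    centers = np.stack([library[uid] for uid in user_ids])  # (num_identities, embed_dim)
    sims = centers @ query_emb                              # cosine similarity to every class center
    best = int(np.argmax(sims))
    return user_ids[best] if sims[best] >= sim_threshold else None
```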
S104: and determining the face recognition result of the face image to be recognized according to the target class center characteristics.
In this embodiment, after the target class center feature corresponding to the face image to be identified is determined, the face identification result of the face image to be identified may be determined based on the target class center feature corresponding to the face image to be identified. Specifically, the user identifier corresponding to the target class center feature may be used as a face recognition result of the face image to be recognized. In one implementation manner, the face image to be identified may be acquired based on an access control system, for example, the access control system includes a camera, the face image of the user may be acquired through the camera, and the face image is used as the face image to be identified, so that the user identifier corresponding to the target class center feature corresponding to the face image to be identified may be used as the face identification result of the face image to be identified, and thus, the access control system successfully identifies the user identifier of the face image to be identified, that is, the user identity.
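Illustrative usage that ties the sketched helpers together at the gate (the function and variable names are the hypothetical ones introduced above, not identifiers from the application):

```python
emb = extract_face_feature(arcface_backbone, aligned_face_crop)
user_id = match_face(emb, class_center_library)
if user_id is not None:
    print(f"recognized user {user_id}, opening the gate")
else:
    print("no match in the preset face feature library, access denied")
```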
The face recognition method, device, and equipment provided by the embodiments of the disclosure can achieve the following technical effects: the face feature information of the face image to be recognized is matched against each class center feature in the preset face feature library to obtain the target class center feature corresponding to the face image to be recognized, and the face recognition result of the face image to be recognized is then determined according to the target class center feature. By directly matching the face feature information of the input face image to be recognized against the class center features in the preset face feature library, a higher recognition rate and a faster recognition speed can be obtained.
As shown in fig. 3, an embodiment of the present disclosure provides a face recognition device including a processor 100 and a memory 101 storing program instructions. Optionally, the device may further include a communication interface 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with one another via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call the program instructions in the memory 101 to perform the face recognition method of the above embodiments.
Further, the program instructions in the memory 101 described above may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
The memory 101 is a computer readable storage medium that can be used to store a software program, a computer executable program, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes the functional application and the data processing by executing the program instructions/modules stored in the memory 101, i.e. implements the face recognition method in the above embodiment.
The memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created according to the use of the terminal device, etc. Further, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
By adopting the face recognition device provided by the embodiments of the disclosure, a higher recognition rate and a faster recognition speed can be obtained by directly matching the face feature information of the input face image to be recognized with each class center feature in the preset face feature library.
The embodiment of the disclosure provides equipment comprising the face recognition device.
The equipment can obtain a higher recognition rate and a faster recognition speed by directly matching the face feature information of the input face image to be recognized with each class center feature in the preset face feature library.
The disclosed embodiments provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described face recognition method.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described face recognition method.
The computer readable storage medium may be a transitory computer readable storage medium or a non-transitory computer readable storage medium.
Embodiments of the present disclosure may be embodied in a software product stored on a storage medium and including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of a method of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including any of various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, or it may be a transitory storage medium.
The above description and the drawings illustrate embodiments of the disclosure sufficiently to enable those skilled in the art to practice them. Other embodiments may involve structural, logical, electrical, process, and other changes. The embodiments represent only possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. Moreover, the terminology used in the present application is for the purpose of describing embodiments only and is not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this disclosure is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, when used in the present disclosure, the terms "comprises", "comprising", and/or variations thereof mean that the recited features, integers, steps, operations, elements, and/or components are present, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising one …" does not exclude the presence of other like elements in a process, method, or apparatus that includes such an element. Each embodiment herein may be described with emphasis on its differences from the other embodiments, and the same or similar parts of the various embodiments may be referred to one another. For the methods, products, and the like disclosed in the embodiments, if they correspond to the method sections disclosed herein, the description of the method sections may be consulted for the relevant details.
Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. The skilled artisan may use different methods for each particular application to achieve the described functionality, but such implementation should not be considered to be beyond the scope of the embodiments of the present disclosure. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the embodiments disclosed herein, the disclosed methods and articles of manufacture (including but not limited to devices and apparatuses) may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units may be merely a logical functional division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection through interfaces, devices, or units, and may be in electrical, mechanical, or other form. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to implement the embodiments. In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than that disclosed in the description, and sometimes no specific order exists between different operations or steps. For example, two consecutive operations or steps may actually be performed substantially in parallel, they may sometimes be performed in reverse order, which may be dependent on the functions involved. Each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (10)

1. A face recognition method, comprising:
acquiring a face image to be recognized;
determining face feature information of the face image to be recognized according to the face image to be recognized;
matching the face feature information of the face image to be recognized with each class center feature in a preset face feature library to obtain a target class center feature corresponding to the face image to be recognized;
and determining a face recognition result of the face image to be recognized according to the target class center feature.
2. The method according to claim 1, wherein determining the face feature information of the face image to be recognized according to the face image to be recognized comprises:
inputting the face image to be recognized into a preset face recognition model to obtain the face feature information of the face image to be recognized.
3. The method according to claim 1, wherein each class center feature in the preset face feature library corresponds to one user identifier, and different class center features correspond to different user identifiers.
4. The method according to claim 3, wherein the class center features in the preset face feature library are generated by:
acquiring a face image corresponding to a user identifier;
extracting the class center feature of the face image by using a preset face recognition model;
and taking the class center feature of the face image as a class center feature in the preset face feature library.
5. The method according to claim 4, wherein extracting the class center feature of the face image by using a preset face recognition model comprises:
inputting the face image into the preset face recognition model, and taking the weight w of the last fully connected layer of the face recognition model as the class center feature of the face image.
6. The method according to claim 2 or 4, wherein the face recognition model is an ArcFace model.
7. The method according to claim 3, wherein matching the face feature information of the face image to be recognized with each class center feature in a preset face feature library to obtain the target class center feature corresponding to the face image to be recognized comprises:
calculating the distance between the face feature information of the face image to be recognized and each class center feature in the preset face feature library;
and determining the target class center feature corresponding to the face image to be recognized according to the class center feature whose distance satisfies a preset condition.
8. The method according to claim 7, wherein determining the face recognition result of the face image to be recognized according to the target class center feature comprises:
taking the user identifier corresponding to the target class center feature as the face recognition result of the face image to be recognized.
9. A face recognition device comprising a processor and a memory storing program instructions, wherein the processor is configured, when executing the program instructions, to perform the face recognition method of any one of claims 1 to 8.
10. An apparatus comprising a face recognition device as claimed in claim 9.
CN202310658487.1A 2023-06-05 2023-06-05 Face recognition method, device and equipment Pending CN116844199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310658487.1A CN116844199A (en) 2023-06-05 2023-06-05 Face recognition method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310658487.1A CN116844199A (en) 2023-06-05 2023-06-05 Face recognition method, device and equipment

Publications (1)

Publication Number Publication Date
CN116844199A true CN116844199A (en) 2023-10-03

Family

ID=88159029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310658487.1A Pending CN116844199A (en) 2023-06-05 2023-06-05 Face recognition method, device and equipment

Country Status (1)

Country Link
CN (1) CN116844199A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination