CN113033244A - Face recognition method, device and equipment - Google Patents

Face recognition method, device and equipment

Info

Publication number
CN113033244A
CN113033244A (application CN201911251194.1A)
Authority
CN
China
Prior art keywords
segmentation
face
face image
face recognition
unoccluded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911251194.1A
Other languages
Chinese (zh)
Inventor
曾郁凯
颜国明
张有贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhangzhou Lidaxin Optoelectronic Technology Co ltd
Original Assignee
Zhangzhou Lidaxin Optoelectronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhangzhou Lidaxin Optoelectronic Technology Co ltd
Priority to CN201911251194.1A
Publication of CN113033244A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The face recognition method comprises the following steps: collecting a face image of a user; when partial occlusion of the face image is detected, segmenting the face image according to a preset face image segmentation mode to obtain a plurality of segmented regions; comparing features of the unoccluded segmented regions among the plurality of segmented regions with the corresponding segmented regions of a preset registered user to obtain the similarity of the unoccluded segmented regions; and determining a face recognition result according to the similarity of the unoccluded segmented regions. In this way, face recognition can still be performed effectively when the face is partially occluded, and comparing the features of the unoccluded segmented regions helps improve recognition accuracy under partial occlusion.

Description

Face recognition method, device and equipment
Technical Field
The present application belongs to the field of face recognition, and in particular, to a face recognition method, apparatus and device.
Background
Face recognition is an image analysis technique for performing authority judgment based on face images. A face image of a user is collected, face features are extracted from the face image, the extracted features are compared with prestored face features, and whether verification passes is determined according to the comparison result. Because face recognition is simple to operate, intuitive in its results, and unobtrusive, it is widely applied in fields such as access control, intelligent devices, and commercial intelligent applications.
When unlocking is performed by face recognition, the face may be partially occluded by an object; for example, when a user wears glasses, a hat, or a mask, a complete face image cannot be acquired, and recognition then fails or produces a misjudgment.
Disclosure of Invention
In view of this, embodiments of the present application provide a face recognition method, apparatus, and device, so as to solve the problem in the prior art that, when a face is partially occluded by an object during face recognition, a complete face image cannot be acquired and recognition often fails or produces a misjudgment.
A first aspect of an embodiment of the present application provides a face recognition method, where the face recognition method includes:
collecting a face image of a user;
when partial occlusion of the face image is detected, segmenting the face image according to a preset face image segmentation mode to obtain a plurality of segmented regions;
comparing features of the unoccluded segmented regions among the plurality of segmented regions with the corresponding segmented regions of a preset registered user to obtain the similarity of the unoccluded segmented regions;
and determining a face recognition result according to the similarity of the unoccluded segmented regions.
With reference to the first aspect, in a first possible implementation manner of the first aspect, when it is detected that the face image is partially occluded, the segmenting the face image according to a preset face image segmentation manner includes:
when the face image is detected to be partially shielded, acquiring the type of a shielding object shielding the face at present;
searching a segmentation mode corresponding to the current obstruction type according to the corresponding relation between the preset obstruction type and the segmentation mode;
and segmenting the human face according to the searched segmentation mode.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the type of the obstruction occluding the face includes one or more of a mask occlusion, a hat occlusion, a glasses occlusion, a phone occlusion, or a hair occlusion.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the determining a face recognition result according to the similarity of the non-occluded segmented regions includes:
when the unoccluded segmentation areas comprise two or more segmentation areas, detecting whether the similarity of each unoccluded segmentation area is greater than a preset value;
if the similarity of each unoccluded segmentation area is greater than a preset value, confirming that the face recognition is successful;
or if the similarity of each non-shielded segmentation area is greater than a preset value, and the orientation features of the non-shielded segmentation areas are consistent with the preset orientation features of the registered user, confirming that the face recognition is successful.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, the step of comparing features of an unoccluded segmented area of the plurality of segmented areas with segmented areas of a preset registered user to obtain a similarity of the unoccluded segmented area includes:
obtaining the type of a shelter for shielding the face currently;
obtaining an effective segmentation area corresponding to the current obstruction type according to the corresponding relation between the preset obstruction type and the effective segmentation area;
and comparing the characteristics of the effective segmentation areas with segmentation areas corresponding to preset registered users to obtain the similarity of the effective segmentation areas.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, before the step of comparing features of an unoccluded segmented area in the plurality of segmented areas with segmented areas of a preset registered user to obtain a similarity of the unoccluded segmented area, the method further includes:
acquiring a face image of a registered user, and extracting face features of the face image of the registered user;
dividing the face image into a plurality of segmentation areas according to one or more preset face image segmentation modes.
With reference to the first aspect, the first possible implementation manner of the first aspect, the second possible implementation manner of the first aspect, the third possible implementation manner of the first aspect, the fourth possible implementation manner of the first aspect, or the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, the segmentation region includes one or more of an eyebrow region, an eye region, a face contour region, a nose region, and a mouth region.
A second aspect of an embodiment of the present application provides a face recognition apparatus, including:
the face image acquisition unit is used for acquiring a face image of a user;
the segmentation unit is used for segmenting the face image according to a preset face image segmentation mode to obtain a plurality of segmentation areas when the face image is detected to be partially shielded;
the similarity calculation unit is used for comparing features of the unoccluded segmented regions among the plurality of segmented regions with the corresponding segmented regions of a preset registered user to obtain the similarity of the unoccluded segmented regions;
and the face recognition result determining unit is used for determining a face recognition result according to the similarity of the unoccluded segmentation areas.
A third aspect of embodiments of the present application provides a face recognition device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the face recognition method according to any one of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the face recognition method according to any one of the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages: when partial occlusion of the face image is detected, the collected face image is segmented according to a preset segmentation mode to obtain the unoccluded segmented regions; the unoccluded segmented regions are compared with the corresponding segmented regions of the registered user to obtain their similarity; and the face recognition result is determined according to that similarity. Face recognition can therefore still be performed effectively when the face is partially occluded, and comparing the features of the unoccluded segmented regions helps improve recognition accuracy under partial occlusion.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those of ordinary skill in the art based on these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation process of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic diagram of distribution of feature points of a face image according to an embodiment of the present application;
fig. 3 is a schematic diagram of a segmentation region obtained by segmenting a face image according to an embodiment of the present application;
FIG. 4 is a schematic view of a face image segmentation corresponding to a mask blocker according to an embodiment of the present disclosure;
fig. 5 is a schematic implementation flow chart for comparing the divided regions according to an embodiment of the present application;
fig. 6 is a schematic diagram of a face recognition apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a face recognition device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic view of an implementation flow of a face recognition method provided in an embodiment of the present application, which is detailed as follows:
in step S101, a face image of a user is acquired;
specifically, the face image can be detected through a video frame in the video, and if the face image is detected to be included in the video frame, the video frame is extracted, and the face image is further extracted from the extracted video frame.
In a preferred embodiment, when it is detected that a video frame includes a face image, the focal length of the video acquisition device may be adjusted according to the position of the face image in the video frame, so that the video acquisition device may focus on the face image area, and obtain a clearer face image after focusing. Or when the range of the image collected by the image collecting equipment is large, the direction of the light can be adjusted according to the position of the face image, so that the light at the position of the face image is more sufficient, and the definition of the collected face image is improved.
When the human face image is detected to be included in the video frame, it may be determined in advance whether the human face image is included in the video frame through human outline comparison, and if the human face image is included in the video frame, it may be further determined whether the features of the human face organ, such as whether the human face organ, such as the human eyes, the nose, the mouth, and the like, is included in the video frame. Namely, the matching comparison can be performed through the organ shapes, and when one or more face organs are included in the video frame, it is confirmed that the currently acquired video frame includes the face image.
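The patent does not prescribe a particular detector; as one possible sketch, frame-level face detection could be implemented with OpenCV's bundled Haar cascade, with the scaleFactor and minNeighbors parameters chosen illustratively:

```python
# A minimal sketch of frame-level face detection, assuming OpenCV (cv2) and its
# bundled frontal-face Haar cascade; scaleFactor/minNeighbors are illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def extract_face(frame):
    """Return the cropped face region from a BGR video frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return frame[y:y + h, x:x + w]
```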
In step S102, when it is detected that the face image is partially occluded, segmenting the face image according to a preset face image segmentation mode to obtain a plurality of segmentation areas;
after the face image is collected, whether the face image is a front face image or not can be judged according to the face features. After the face image is determined to be the front face image, whether the current face image is partially shielded or not can be determined according to the number of organs displayed in the face image.
When the face image is partially occluded, face images can be classified according to the type of the obstruction, for example into one or more of mask occlusion, hat occlusion, glasses occlusion, phone occlusion, or hair occlusion; for instance, the user may be wearing a mask or making a phone call.
In a face image occluded by a mask, depending on the size and shape of the mask, the occluded area may comprise only the mouth region, or may also comprise the nose region and part of the face contour region. For example, when the mask is a large type, the occluded area includes the mouth region, the nose region, and the face contour region.
In a face image occluded by a hat, the eyebrow region, or both the eyebrow region and the eye region, may be occluded, depending on the installation position of the camera and the size of the brim. In this case, features can be collected and compared for the nose region, the face contour region, and the mouth region of the user.
Glasses occlusion may cover the eye region, or the eye region together with the eyebrow region, depending on the size of the glasses; phone occlusion may cover the face contour region or the mouth region, depending on the shooting angle; and hair occlusion may likewise cover the face contour or the mouth of the user.
As a preferred embodiment of the present application, a segmentation mode of the face image may be determined according to the current obstruction type, and the face image may then be segmented according to the determined mode.
For example, when the obstruction type is mask occlusion, the corresponding segmentation mode may divide the image into an eye region, a nose region, a mouth region, a face contour region, and so on, and feature comparison can be performed directly on the unoccluded eye region. When the obstruction type is glasses or hat occlusion, the face image can be divided into an eyebrow region, an eye region, a nose region, a mouth region, a face contour region, and so on. When the obstruction type is phone or hair occlusion, the face image can be divided into the five-organ regions and the face contour region. Different occlusion types thus use different region divisions, which facilitates effective feature comparison on the segmented regions.
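A minimal sketch of this obstruction-type to segmentation-mode lookup follows; the region names and the table contents are illustrative assumptions rather than the patent's exact presets:

```python
# Illustrative obstruction-type -> segmentation-mode table; the patent names the
# region kinds but not an exact table, so these entries are assumptions.
SEGMENTATION_MODES = {
    "mask":    ["eyes", "nose", "mouth", "face_contour"],
    "glasses": ["eyebrows", "eyes", "nose", "mouth", "face_contour"],
    "hat":     ["eyebrows", "eyes", "nose", "mouth", "face_contour"],
    "phone":   ["eyebrows", "eyes", "nose", "mouth", "face_contour"],
    "hair":    ["eyebrows", "eyes", "nose", "mouth", "face_contour"],
}

def segmentation_mode_for(obstruction_type):
    """Look up the preset segmentation mode for the detected obstruction type."""
    return SEGMENTATION_MODES[obstruction_type]
```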
For example, fig. 2 is a schematic distribution diagram of the feature points of a registered face image; the feature points are concentrated at the eyes, eyebrows, nose, mouth, face contour, and so on. When the obstruction type is detected to be mask occlusion, the face image may be divided into an eye region (left and right eye regions), a nose region, a mouth region, and a face contour region (left and right face contour regions) according to the division shown in fig. 3.
After the face image to be recognized is obtained, its feature points can be extracted, and the segmented region to which each extracted feature point belongs is determined according to the chosen division. For example, fig. 4 is a schematic diagram of the region division of a face image occluded by a mask; according to the division shown in fig. 4, the unoccluded segmented region is the eye region, comprising a left eye region and a right eye region.
In step S103, features of the unoccluded segmented regions among the plurality of segmented regions are compared with the corresponding segmented regions of a preset registered user to obtain the similarity of the unoccluded segmented regions;
After the unoccluded segmented regions are obtained, their features are compared with those of the corresponding segmented regions of the registered face image, thereby determining the similarity between the unoccluded segmented regions and the registered regions.
For example, after the collected face image shown in fig. 4 is segmented, the features of the unoccluded segmented regions, comprising the left eye region and the right eye region, are compared with the eye regions among the prestored segmented regions, and whether the currently collected face image matches the registered user's face image is determined according to the resulting similarity; for example, the similarity of the left and right eye regions in fig. 4 might be 95%.
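The patent does not fix a particular similarity measure; the sketch below assumes cosine similarity over per-region feature embeddings, with extract_features standing in for any face-feature extractor:

```python
# Hedged sketch of per-region comparison using cosine similarity; the patent does
# not specify a metric or feature extractor, so extract_features is hypothetical.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def region_similarities(probe_regions, registered_regions, extract_features):
    """Map each unoccluded probe region to its similarity with the registered one."""
    return {
        name: cosine_similarity(extract_features(img),
                                extract_features(registered_regions[name]))
        for name, img in probe_regions.items()
        if name in registered_regions
    }
```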
As a preferred embodiment of the present application, as shown in fig. 5, the step of comparing features of the unoccluded segmented regions with the segmented regions of the preset registered user to obtain the similarity of the unoccluded segmented regions may include:
in step S501, a type of a blocking object currently blocking a face is obtained;
the type of the obstruction can be identified according to the shape and other characteristics of the obstruction. For example, the identified type of obstruction may include one or more of a mask obstruction, a hat obstruction, a glasses obstruction, a phone obstruction, or a hair obstruction.
In step S502, an effective segmentation area corresponding to the current type of the obstruction is obtained according to the correspondence between the preset type of the obstruction and the effective segmentation area;
and directly searching the effective segmentation area corresponding to the current obstruction type according to the corresponding relation between the preset obstruction type and the effective segmentation area, and comparing the characteristic points of the effective segmentation area with the corresponding segmentation area of the registered face image. For example, when the type of the blocking object is a mask blocking object, the eye region is extracted as an effective segmentation region, and feature comparison can be directly performed on the eye region.
In a preferred embodiment, the registered face image may be divided into the divided regions according to the current type of the obstruction, and the effective divided region corresponding to the type of the obstruction is determined according to the dividing manner corresponding to the type of the obstruction. The same segmentation areas obtained by the same segmentation mode are compared, so that the accuracy of feature comparison is improved.
In step S503, feature comparison is performed according to the valid segmentation region and a segmentation region corresponding to a preset registered user, so as to obtain a similarity of the valid segmentation region.
The collected face image and the pre-registered face image can both be divided into segmented regions according to the obstruction type of the currently collected image, so that matching segmented regions are obtained and feature comparison between them is more reliable.
By adopting different segmentation modes, the feature regions can be preserved more completely. For example, when the eyes are occluded, the eyebrow region can still be delimited so that feature comparison can be performed directly on it; more valid regions are thus available for comparison, which improves the accuracy of feature comparison.
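Putting steps S501 to S503 together, a sketch might look as follows, reusing the region_similarities helper from the earlier sketch; the table of valid regions per obstruction type is an illustrative assumption:

```python
# Illustrative valid-region table per obstruction type (steps S501-S502), then
# comparison of only those regions (step S503), reusing region_similarities from
# the sketch above. The table contents are assumptions, not the patent's presets.
VALID_REGIONS = {
    "mask":    ["eyes"],
    "glasses": ["nose", "mouth", "face_contour"],
    "hat":     ["nose", "mouth", "face_contour"],
    "phone":   ["eyebrows", "eyes", "nose"],
    "hair":    ["eyebrows", "eyes", "nose", "mouth"],
}

def compare_valid_regions(obstruction_type, probe_regions,
                          registered_regions, extract_features):
    valid = VALID_REGIONS[obstruction_type]                      # steps S501-S502
    probe = {n: probe_regions[n] for n in valid if n in probe_regions}
    return region_similarities(probe, registered_regions,       # step S503
                               extract_features)
```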
In step S104, a face recognition result is determined according to the similarity of the non-occluded segmented regions.
When the face recognition result is determined according to the similarity of the unoccluded segmented regions, recognition can be confirmed as successful directly when the similarity of each unoccluded segmented region is greater than a preset value. For example, when the collected face image is covered by a mask as shown in fig. 4, once the similarity of the eye region exceeds the predetermined value, the collected face image can be judged as successfully recognized, and, for example, the door access or the intelligent device can be unlocked.
Alternatively, in a further optimized implementation, recognition is confirmed as successful only if the similarity of each unoccluded segmented region is greater than the predetermined value and the orientation features of the unoccluded segmented regions match the preset orientation features of the registered user; that is, after the features of the segmented regions are compared, the relative relationship of the overall features is also compared, thereby improving comparison accuracy.
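A sketch of this decision rule follows; the threshold value and the orientation_consistent check are illustrative assumptions:

```python
# Sketch of the decision rule: every compared region must exceed a preset
# similarity value; the stricter variant also checks that the relative layout of
# regions matches the registered user. THRESHOLD and orientation_consistent are
# illustrative assumptions.
THRESHOLD = 0.9

def recognition_result(similarities, orientation_consistent=None):
    """similarities: dict mapping region name -> similarity in [0, 1]."""
    if not similarities:
        return False
    all_pass = all(s > THRESHOLD for s in similarities.values())
    if orientation_consistent is None:            # basic rule
        return all_pass
    return all_pass and orientation_consistent()  # optimized rule with layout check
```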
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application, and as shown in fig. 6, the face recognition apparatus includes:
a face image acquisition unit 601, configured to acquire a face image of a user;
a segmentation unit 602, configured to, when it is detected that the face image is partially covered, segment the face image according to a preset face image segmentation manner to obtain a plurality of segmentation areas;
a similarity calculation unit 603, configured to compare features of the unoccluded segmented regions among the plurality of segmented regions with the corresponding segmented regions of a preset registered user, so as to obtain the similarity of the unoccluded segmented regions;
a face recognition result determining unit 604, configured to determine a face recognition result according to the similarity of the non-occluded segmented regions.
The face recognition apparatus corresponds to the face recognition method described in fig. 1.
Fig. 7 is a schematic diagram of a face recognition device according to an embodiment of the present application. As shown in fig. 7, the face recognition apparatus 7 of this embodiment includes: a processor 70, a memory 71 and a computer program 72, such as a face recognition program, stored in said memory 71 and operable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the various face recognition method embodiments described above. Alternatively, the processor 70 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 72.
Illustratively, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the face recognition device 7. For example, the computer program 72 may be divided into:
the face image acquisition unit is used for acquiring a face image of a user;
the segmentation unit is used for segmenting the face image according to a preset face image segmentation mode to obtain a plurality of segmentation areas when the face image is detected to be partially shielded;
the similarity calculation unit is used for comparing features of the unoccluded segmented regions among the plurality of segmented regions with the corresponding segmented regions of a preset registered user to obtain the similarity of the unoccluded segmented regions;
and the face recognition result determining unit is used for determining a face recognition result according to the similarity of the unoccluded segmentation areas.
The face recognition device 7 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The face recognition device may include, but is not limited to, a processor 70 and a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the face recognition device 7 and does not constitute a limitation: the device may include more or fewer components than shown, combine some components, or use different components; for example, it may also include input-output devices, network access devices, a bus, and so on.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the face recognition device 7, such as a hard disk or a memory of the face recognition device 7. The memory 71 may also be an external storage device of the face recognition device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the face recognition device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the face recognition device 7. The memory 71 is used to store the computer program and other programs and data required by the face recognition device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A face recognition method is characterized by comprising the following steps:
collecting a face image of a user;
when the face image is detected to be partially shielded, segmenting the face image according to a preset face image segmentation mode to obtain a plurality of segmentation areas;
comparing features of the unoccluded segmentation areas among the plurality of segmentation areas with the corresponding segmentation areas of a preset registered user to obtain the similarity of the unoccluded segmentation areas;
and determining a face recognition result according to the similarity of the unoccluded segmentation areas.
2. The face recognition method according to claim 1, wherein the step of segmenting the face image according to a preset face image segmentation mode when the face image is detected to be partially occluded comprises:
when the face image is detected to be partially shielded, acquiring the type of a shielding object shielding the face at present;
searching a segmentation mode corresponding to the current obstruction type according to the corresponding relation between the preset obstruction type and the segmentation mode;
and segmenting the human face according to the searched segmentation mode.
3. The face recognition method according to claim 2, wherein the type of the obstruction occluding the face includes one or more of a mask occlusion, a hat occlusion, a glasses occlusion, a phone occlusion, or a hair occlusion.
4. The face recognition method according to claim 1, wherein the step of determining the face recognition result according to the similarity of the unoccluded segmented regions comprises:
when the unoccluded segmentation areas comprise two or more segmentation areas, detecting whether the similarity of each unoccluded segmentation area is greater than a preset value;
if the similarity of each unoccluded segmentation area is greater than a preset value, confirming that the face recognition is successful;
or if the similarity of each non-shielded segmentation area is greater than a preset value, and the orientation features of the non-shielded segmentation areas are consistent with the preset orientation features of the registered user, confirming that the face recognition is successful.
5. The face recognition method according to claim 1, wherein the step of comparing the feature of the unoccluded segmented area in the plurality of segmented areas with the feature of the segmented area of a preset registered user to obtain the similarity of the unoccluded segmented area comprises:
obtaining the type of a shelter for shielding the face currently;
obtaining an effective segmentation area corresponding to the current obstruction type according to the corresponding relation between the preset obstruction type and the effective segmentation area;
and comparing the characteristics of the effective segmentation areas with segmentation areas corresponding to preset registered users to obtain the similarity of the effective segmentation areas.
6. The face recognition method according to claim 1, wherein before the step of comparing the feature of the unoccluded segmented region of the plurality of segmented regions with the feature of the segmented region of the preset registered user to obtain the similarity of the unoccluded segmented region, the method further comprises:
acquiring a face image of a registered user, and extracting face features of the face image of the registered user;
dividing the face image into a plurality of segmentation areas according to one or more preset face image segmentation modes.
7. The face recognition method according to any one of claims 1 to 6, wherein the segmented regions comprise one or more of eyebrow regions, eye regions, face contour regions, nose regions, and mouth regions.
8. A face recognition apparatus, characterized in that the face recognition apparatus comprises:
the face image acquisition unit is used for acquiring a face image of a user;
the segmentation unit is used for segmenting the face image according to a preset face image segmentation mode to obtain a plurality of segmentation areas when the face image is detected to be partially shielded;
the similarity calculation unit is used for comparing features of the unoccluded segmentation areas among the plurality of segmentation areas with the corresponding segmentation areas of a preset registered user to obtain the similarity of the unoccluded segmentation areas;
and the face recognition result determining unit is used for determining a face recognition result according to the similarity of the unoccluded segmentation areas.
9. A face recognition device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the face recognition method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the face recognition method according to any one of claims 1 to 7.
CN201911251194.1A 2019-12-09 2019-12-09 Face recognition method, device and equipment Pending CN113033244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911251194.1A CN113033244A (en) 2019-12-09 2019-12-09 Face recognition method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911251194.1A CN113033244A (en) 2019-12-09 2019-12-09 Face recognition method, device and equipment

Publications (1)

Publication Number Publication Date
CN113033244A 2021-06-25

Family

ID=76452013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911251194.1A Pending CN113033244A (en) 2019-12-09 2019-12-09 Face recognition method, device and equipment

Country Status (1)

Country Link
CN (1) CN113033244A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449708A (en) * 2021-08-31 2021-09-28 深圳市爱深盈通信息技术有限公司 Face recognition method, face recognition device, equipment terminal and readable storage medium
CN113449708B (en) * 2021-08-31 2022-01-07 深圳市爱深盈通信息技术有限公司 Face recognition method, face recognition device, equipment terminal and readable storage medium
WO2023029702A1 (en) * 2021-09-06 2023-03-09 京东科技信息技术有限公司 Method and apparatus for verifying image
CN114093012A (en) * 2022-01-18 2022-02-25 荣耀终端有限公司 Face shielding detection method and detection device

Similar Documents

Publication Publication Date Title
US9239957B2 (en) Image processing method and apparatus
CN113033244A (en) Face recognition method, device and equipment
CN110852310B (en) Three-dimensional face recognition method and device, terminal equipment and computer readable medium
US10108793B2 (en) Systems and methods for secure biometric processing
CN109492642B (en) License plate recognition method, license plate recognition device, computer equipment and storage medium
CN111898413A (en) Face recognition method, face recognition device, electronic equipment and medium
CN111444555B (en) Temperature measurement information display method and device and terminal equipment
CN111814603B (en) Face recognition method, medium and electronic equipment
CN109948439B (en) Living body detection method, living body detection system and terminal equipment
CN110555926B (en) Access control method based on multi-certificate recognition and corresponding device
CN112214773B (en) Image processing method and device based on privacy protection and electronic equipment
CN111597910A (en) Face recognition method, face recognition device, terminal equipment and medium
CN112330715A (en) Tracking method, tracking device, terminal equipment and readable storage medium
Ravi et al. A novel method for touch-less finger print authentication
CN113837006B (en) Face recognition method and device, storage medium and electronic equipment
WO2024169261A1 (en) Image processing method and apparatus, and electronic device, computer-readable storage medium and computer program product
CN105631285A (en) Biological feature identity recognition method and apparatus
US20160125239A1 (en) Systems And Methods For Secure Iris Imaging
CN113033243A (en) Face recognition method, device and equipment
CN112416128B (en) Gesture recognition method and terminal equipment
CN108446653B (en) Method and apparatus for processing face image
CN112418189A (en) Face recognition method, device and equipment for wearing mask and storage medium
CN114913540A (en) Gesture recognition method and device and electronic equipment
CN113095116A (en) Identity recognition method and related product
CN117373103B (en) Image feature extraction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination