CN113850210A - Face image processing method and device and electronic equipment - Google Patents


Info

Publication number: CN113850210A (granted as CN113850210B)
Application number: CN202111150500.XA
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: face, image, anchor point frame, face image
Inventors: 郑丹丹, 王昌宝
Current and original assignee: Alipay Hangzhou Information Technology Co Ltd
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202111150500.XA
Publication of CN113850210A; application granted; publication of CN113850210B
Legal status: Granted; Active

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of this specification provide a face image processing method and apparatus, and an electronic device. The method comprises the following steps: mapping a first anchor point frame, used for capturing an incomplete face image, in a first region of a captured face image to obtain a first anchor point frame image, wherein the first region includes an edge region of the captured face image; mapping a second anchor point frame, used for capturing a complete face image, in a second region of the captured face image to obtain a second anchor point frame image, wherein the second region includes a central region of the captured face image, and the first anchor point frame and the second anchor point frame differ in size; performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image; determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate; and performing an image processing operation related to the face image based on the target anchor point frame face image.

Description

Face image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and an apparatus for processing a face image, and an electronic device.
Background
With the development of face recognition technology, face-scanning applications are becoming increasingly common on terminal devices. Currently, such applications mainly recognize complete face images. In many scenarios, if a user does not capture his or her complete face, detection may fail; and if other people's faces appear in the frame, the wrong face may be selected. Both problems cause the result to fall short of the user's expectations and ultimately degrade the user experience.
Therefore, a more intelligent face-scanning application is needed, one that can recognize incomplete face images and accurately determine the correct face image when multiple face images are captured.
Disclosure of Invention
The embodiments of this specification aim to provide a face image processing method and apparatus, and an electronic device, which can recognize incomplete face images and accurately determine the correct face image when multiple face images are captured.
In order to achieve the above object, the embodiments of the present specification are implemented as follows:
in a first aspect, a method for processing a face image is provided, including:
mapping a first anchor point frame, used for capturing an incomplete face image, in a first region of a captured face image to obtain a first anchor point frame image, wherein the first region includes an edge region of the captured face image; and
mapping a second anchor point frame, used for capturing a complete face image, in a second region of the captured face image to obtain a second anchor point frame image, wherein the second region includes a central region of the captured face image, and the first anchor point frame and the second anchor point frame differ in size;
performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image; and
performing an image processing operation related to the face image based on the target anchor point frame face image.
In a second aspect, a face image processing apparatus is provided, including:
a first anchor point frame mapping module, which maps a first anchor point frame, used for capturing an incomplete face image, in a first region of a captured face image to obtain a first anchor point frame image, wherein the first region includes an edge region of the captured face image; and
a second anchor point frame mapping module, which maps a second anchor point frame, used for capturing a complete face image, in a second region of the captured face image to obtain a second anchor point frame image, wherein the second region includes a central region of the captured face image, and the first anchor point frame and the second anchor point frame differ in size;
a face image recognition module, which performs face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
a face image selection module, which determines a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image; and
a face image processing module, which performs an image processing operation related to the face image based on the target anchor point frame face image.
In a third aspect, an electronic device is provided, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, performs the following:
mapping a first anchor point frame, used for capturing an incomplete face image, in a first region of a captured face image to obtain a first anchor point frame image, wherein the first region includes an edge region of the captured face image; and
mapping a second anchor point frame, used for capturing a complete face image, in a second region of the captured face image to obtain a second anchor point frame image, wherein the second region includes a central region of the captured face image, and the first anchor point frame and the second anchor point frame differ in size;
performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image; and
performing an image processing operation related to the face image based on the target anchor point frame face image.
In a fourth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
mapping a first anchor point frame, used for capturing an incomplete face image, in a first region of a captured face image to obtain a first anchor point frame image, wherein the first region includes an edge region of the captured face image; and
mapping a second anchor point frame, used for capturing a complete face image, in a second region of the captured face image to obtain a second anchor point frame image, wherein the second region includes a central region of the captured face image, and the first anchor point frame and the second anchor point frame differ in size;
performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image; and
performing an image processing operation related to the face image based on the target anchor point frame face image.
According to the solutions of the embodiments of this specification, during face capture, in addition to mapping anchor point frames for capturing complete face images onto the captured image, dedicated anchor point frames for capturing incomplete face images are mapped onto the edge region of the captured image, thereby introducing recognition of incomplete face images. From the candidate anchor point frame face images framed by all the anchor point frames, the most likely target anchor point frame face image is intelligently selected according to face size and/or face distance, and the face-scanning application is then executed on it. This closes the blind spot in which face-scanning applications fail on incomplete face images, improves face selection accuracy in a number of special scenarios without affecting the application's original performance, and makes the face-scanning result better match the user's expectations.
Drawings
In order to more clearly illustrate the embodiments of this specification or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in this specification; for a person of ordinary skill in the art, other drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a first flow of a face image processing method provided in an embodiment of the present specification.
Fig. 2 is a schematic flowchart of a second flow of a face image processing method provided in an embodiment of the present specification.
Fig. 3 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of this specification.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this specification without creative effort shall fall within the protection scope of this specification.
As mentioned above, current face-scanning applications mainly recognize complete face images. In many scenarios, if a user does not capture his or her complete face, detection may fail; and if other people's faces appear in the frame, the wrong face may be selected. Both problems cause the result to fall short of the user's expectations and ultimately degrade the user experience. In order to improve the face-scanning success rate, this document aims to provide a more intelligent face-scanning application that can recognize incomplete face images and accurately determine the correct face image when multiple face images are captured.
Fig. 1 is a flowchart of a face image processing method according to an embodiment of this specification. As shown in Fig. 1, the method may include the following steps:
s102, mapping a first anchor point frame used for capturing an incomplete face image in a first area in the face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image.
It will be appreciated that, under normal circumstances, an incomplete face is typically located at the edge of the captured face image, which is what prevents a full-face capture. Therefore, the embodiments of this specification may map first anchor point frames, dedicated to incomplete face images, onto the edge region of the captured face image.
Of course, if terminal energy consumption is not a concern, this step may also map the first anchor point frame to other positions of the captured face image; that is, the first region described herein at least includes the edge region of the captured face image.
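As a minimal sketch of the edge-region mapping described above, the following hypothetical helper tiles fixed-size anchor boxes over an image and keeps only those whose center falls within a margin of the border. The function name, parameters, and tiling stride are illustrative assumptions, not the patent's actual implementation.

```python
import itertools

def edge_anchor_boxes(img_w, img_h, box_w, box_h, margin, stride):
    """Tile (x, y, w, h) anchor boxes over an image, keeping only those
    whose center lies within `margin` pixels of any image border.
    Interior positions are skipped to save computation, per the text."""
    anchors = []
    for x, y in itertools.product(range(0, img_w - box_w + 1, stride),
                                  range(0, img_h - box_h + 1, stride)):
        cx, cy = x + box_w / 2, y + box_h / 2
        near_edge = (cx < margin or cx > img_w - margin or
                     cy < margin or cy > img_h - margin)
        if near_edge:
            anchors.append((x, y, box_w, box_h))
    return anchors
```

Mapping the second (full-face) anchor frames over the whole image would be the same loop without the `near_edge` filter.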
S104, mapping a second anchor point frame, used for capturing a complete face image, in a second region of the captured face image to obtain a second anchor point frame image, wherein the second region includes a central region of the captured face image, and the first anchor point frame and the second anchor point frame differ in size.
It will be appreciated that, under normal circumstances, a complete face image is usually captured with fairly strong directionality; that is, complete face images are mostly located near the center of the captured face image. Therefore, the embodiments of this specification may map second anchor point frames, set for complete face images, onto the central region of the captured face image.
Similarly, if terminal resource consumption is not a concern, this step may map the second anchor point frame to other positions of the captured face image. For example, to obtain better face recognition performance, the second anchor point frame may be mapped to all positions of the captured face image; that is, the second region described herein may also include the edge region of the captured face image.
S106, performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image.
It should be understood that after the first anchor point frame and the second anchor point frame are mapped to the face captured image, the framed images of the first anchor point frame and the second anchor point frame are not necessarily face images, and therefore face recognition screening is also required.
Optionally, this step may use artificial intelligence techniques to identify, by machine learning, the face images present in the first anchor point frame image and the second anchor point frame image, i.e., the candidate anchor point frame face images. It should be understood that a candidate anchor point frame face image screened from the first anchor point frame image is an incomplete face image, while a candidate anchor point frame face image screened from the second anchor point frame image may be regarded as a complete face image.
Specifically, for face recognition screening of the first anchor point frame image, a first face recognition model for recognizing incomplete face images may be trained in a supervised manner based on sample incomplete face images and corresponding classification labels (the classification label marks whether each sample incomplete face image is a positive or negative sample). To improve the first face recognition model's ability to recognize incomplete face images, sample incomplete face images with head-and-shoulder features may be introduced for training. That is, in the embodiments of this specification, facial features and head-and-shoulder features may serve as the bottom-layer vectors of the first face recognition model, which is provided with an encoder for extracting facial features and head-and-shoulder features from the image input to it (the extracted features are fed into the bottom-layer vectors).
Similarly, face recognition screening of the second anchor point frame image may be based on a second face recognition model for recognizing complete face images, trained in advance in a supervised manner on sample complete face images and corresponding classification labels. Correspondingly, in this step, after the second anchor point frame image is input to the trained second face recognition model for face recognition, the candidate anchor point frame face images can be obtained directly by screening.
S108, determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image.
It should be understood that, in general, the face image of the candidate anchor point frame closest to the acquisition position has a higher probability of being the correct face image. Therefore, the candidate anchor point frame face image with the face closest to the acquisition position can be used as the target anchor point frame face image in the step. It should be noted that, the manner of calculating the distance between the candidate anchor frame face image and the acquisition position is not exclusive, and the disclosure is not limited specifically. By way of example introduction, the distance of a candidate anchor box face image to an acquisition location may be quantified based on a pixel parameter (e.g., a pixel average depth value) in the candidate anchor box face image.
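The pixel-based distance quantification mentioned above can be sketched as follows. This assumes a per-pixel depth map is available alongside the captured image; the function names and the use of a plain mean are illustrative assumptions, not the patent's specified method.

```python
import numpy as np

def face_distance_score(depth_map, box):
    """Approximate a candidate face's distance to the capture position as
    the mean depth value inside its anchor box. Smaller means closer.
    `depth_map` is an HxW array; `box` is (x, y, w, h) in pixels."""
    x, y, w, h = box
    patch = depth_map[y:y + h, x:x + w]
    return float(patch.mean())

def closest_candidate(depth_map, boxes):
    """Return the candidate anchor box whose face is closest to the camera."""
    return min(boxes, key=lambda b: face_distance_score(depth_map, b))
```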
In addition, the candidate anchor point frame face image with the largest face size also has a high probability of being the correct face image. Therefore, this step may use the candidate anchor point frame face image with the largest face size as the target anchor point frame face image. Since a candidate screened from the first anchor point frame image is an incomplete face image, its complete face must be predicted before its face size can be compared with that of candidates screened from the second anchor point frame image. The completion principle is to predict the complete face based on the original face size corresponding to the candidate anchor point frame face image. As one implementation, the embodiments of this specification may directly treat the size of the anchor point frame corresponding to the candidate as the original face size, and enlarge the anchor point frame of a candidate screened from the first anchor point frame image until its aspect ratio matches that of the second anchor point frame, taken as the standard, thereby obtaining the complete face size.
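The enlargement-to-standard-aspect-ratio idea can be sketched as follows, under the assumption that only one side of the partial-face box needs to grow (a face cut off at a side edge is widened; one cut off at the top or bottom is heightened). The function name and signature are hypothetical.

```python
def predicted_full_face_size(partial_w, partial_h, std_aspect):
    """Predict the full-face size for a partial-face anchor box by enlarging
    it until its width/height ratio matches `std_aspect`, the aspect ratio
    of the second (full-face) anchor frame. The box is only ever enlarged,
    never shrunk, consistent with the completion principle in the text."""
    if partial_w / partial_h < std_aspect:
        # Too narrow: face cut off at a side edge, so widen the box.
        return partial_h * std_aspect, partial_h
    else:
        # Too short: face cut off at top/bottom, so heighten the box.
        return partial_w, partial_w / std_aspect
```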
Of course, the above two manners of determining the target anchor frame face image based on the face size and the face distance may also be used in combination, for example, by performing weighted calculation on the face size and the face distance, determining the comprehensive probability that each candidate anchor frame image is used as the target anchor frame face image, and selecting the one with the highest comprehensive probability as the target anchor frame face image. In addition, if there is only one screened candidate anchor frame face image, the step can directly determine the target anchor frame face image from the unique candidate anchor frame face image, so that the calculation process is omitted.
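One way to combine the two criteria by weighting, as the paragraph above suggests, is sketched below. The normalization scheme and the weight values are illustrative assumptions; the patent does not fix a particular formula.

```python
def select_target(candidates, w_size=0.6, w_dist=0.4):
    """Pick the target face by a weighted score of normalized face size and
    closeness. `candidates` is a list of dicts with 'size' (predicted full
    face area) and 'dist' (distance to the capture position). Weights are
    illustrative, not values from the patent."""
    max_size = max(c['size'] for c in candidates)
    max_dist = max(c['dist'] for c in candidates) or 1.0  # avoid divide-by-zero
    def score(c):
        size_term = c['size'] / max_size        # bigger face -> higher score
        dist_term = 1.0 - c['dist'] / max_dist  # closer face -> higher score
        return w_size * size_term + w_dist * dist_term
    return max(candidates, key=score)
```

If only one candidate was screened out, this computation can of course be skipped entirely, as the text notes.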
S110, performing an image processing operation related to the face image based on the target anchor point frame face image.
It should be understood that the image processing operations described herein may include at least one of:
an identity recognition operation based on the face image;
a beautification operation based on the face image;
body temperature detection based on the face image.
The image processing operation may come from a face-scanning application. Face-scanning applications may include: face identity verification applications, face beautification applications, face temperature detection applications, and the like, which are not specifically limited herein.
In addition, to avoid the selected target anchor point frame face image not being the face image the user expects, the user may be asked to confirm the target anchor point frame face image before the image processing is performed.
That is, this step initiates a user confirmation prompt for the target anchor point frame face image. If a user confirmation operation for the target anchor point frame face image is received, the image processing operation related to the face image is performed based on it; if a user denial operation is received, a new target anchor point frame face image is selected from the remaining candidate anchor point frame face images and a new confirmation prompt is initiated, until the user confirms a target anchor point frame face image or no candidate remains for the user to confirm.
In addition, to avoid adversely affecting the downstream face-scanning application when an incomplete face image is selected as the target anchor point frame face image, completion rendering of the full face may first be performed on the target anchor point frame face image, and the face-scanning application may then be executed on the rendered image; alternatively, the rendered target anchor point frame face image may be displayed in the face capture interface to remind the user whether the capture position needs to be corrected. It should be noted that the completion rendering scheme is not specifically limited here. As an exemplary introduction, the missing part of the target anchor point frame face image may be rendered by symmetric pixel completion using image techniques, based on the pixel information of its existing face region.
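A deliberately naive sketch of the symmetric pixel completion idea is shown below: a face cut off at the right edge is extended by mirroring its rightmost columns. This is purely illustrative of the principle; it assumes the missing width does not exceed the existing width, and a real system would more likely use a learned inpainting model.

```python
import numpy as np

def mirror_complete(face, full_w):
    """Extend a face image cut off at the right edge to `full_w` columns by
    reflecting its rightmost columns. `face` is an HxW (or HxWxC) array.
    Assumes 0 < full_w - W <= W; illustrative only, not the patent's method."""
    h, w = face.shape[:2]
    missing = full_w - w
    mirrored = face[:, w - missing:][:, ::-1]  # reflect the last `missing` columns
    return np.concatenate([face, mirrored], axis=1)
```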
Therefore, based on the method of the embodiments of this specification, during face capture, in addition to mapping anchor point frames for capturing complete face images onto the captured image, dedicated anchor point frames for capturing incomplete face images are mapped onto the edge region of the captured image, thereby introducing recognition of incomplete face images. From the candidate anchor point frame face images framed by all the anchor point frames, the most likely target anchor point frame face image is intelligently selected according to face size and/or face distance, and the face-scanning application is then executed on it. This closes the blind spot in which face-scanning applications fail on incomplete face images, improves face selection accuracy in a number of special scenarios without affecting the original performance, and makes the face-scanning result better match the user's expectations.
The method of the embodiment of the present description is described below with reference to an application scenario of face brushing payment. The corresponding process is as follows:
s201, after the face collection is started by the face brushing payment application, a first anchor point frame used for capturing an incomplete face image is mapped on the edge area of the face collected image, and meanwhile, a second anchor point frame used for capturing the complete face image is projected on the whole area of the face collected image.
The first anchor point frame adopts the aspect ratio which is common to the incomplete face image, and the purpose of mapping the first anchor point frame only on the edge area is to save final calculation consumption. In the step, a large number of first anchor points are mapped to try to frame out incomplete face images in the face acquisition images, and similarly, a large number of second anchor points are mapped to try to frame out complete face images in the face acquisition images.
S202, performing face recognition on the first anchor point frame images delimited by the first anchor point frames and the second anchor point frame images delimited by the second anchor point frames, and determining candidate anchor point frame face images.
In this step, after the candidate anchor point frame face images are determined, deduplication may be performed; for example, of any candidate face images whose degree of overlap reaches a preset threshold, only one is retained.
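The overlap-based deduplication just described can be sketched as greedy non-maximum suppression over the candidate boxes. The IoU threshold of 0.5 and the use of recognition scores to rank candidates are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def deduplicate(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box of any overlapping group."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]
```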
S203, correcting the anchor point frame of each candidate anchor point frame face image.
Specifically, this step may use artificial intelligence techniques to correct, by machine learning, the anchor point frame corresponding to each candidate anchor point frame face image. The first anchor point frame image is input into a preset first anchor point frame adjustment model to correct the size and/or position of the first anchor point frame corresponding to it, where the first anchor point frame adjustment model is trained on sample incomplete face images delimited by first anchor point frames and corresponding classification labels. Similarly, the second anchor point frame image is input into a preset second anchor point frame adjustment model to correct the size and/or position of the second anchor point frame corresponding to it, where the second anchor point frame adjustment model is trained on sample complete face images delimited by second anchor point frames and corresponding classification labels.
S204, judging whether any candidate anchor point frame face image has been screened from the first anchor point frame image; if yes, performing S205; otherwise, skipping S205 and performing S206.
S205, predicting the complete face size of the candidate anchor point frame face images screened from the first anchor point frame image.
Here, in order to simplify the calculation, the face size presented by the candidate anchor frame face image selected from the second anchor frame image may be regarded as the full face size, that is, the full face size prediction is not performed on the candidate anchor frame face image selected from the second anchor frame image.
S206, selecting the candidate anchor point frame face image with the largest complete face size as the target anchor point frame face image.
It should be noted that even a candidate anchor point frame face image that is an incomplete face image may, after complete face size prediction, have a higher probability of being selected as the target than a candidate that is a complete face image. In addition, in this application scenario, the candidate anchor point frame face image whose face is closest to the capture position may instead be selected as the target anchor point frame face image; the principle was introduced above and is not repeated here.
S207, performing face verification for the face-brushing payment application based on the target anchor point frame face image.
It should be understood that this application scenario is an exemplary illustration of the method of the embodiments of this specification. The method may also be applied to other scenarios such as face temperature detection and facial beautification, which are not described in detail here. Appropriate changes may be made to the steps of the method without departing from the principles described herein, and such changes shall be regarded as falling within the scope of the embodiments.
In addition, corresponding to the method shown in Fig. 1, an embodiment of this specification further provides a face image processing apparatus. Fig. 3 is a schematic structural diagram of a face image processing apparatus 300 according to an embodiment of this specification, which includes:
the first anchor frame mapping module 310 maps a first anchor frame used for capturing an incomplete face image in a first region of a face captured image to obtain a first anchor frame image, where the first region includes an edge region of the face captured image.
A second anchor frame mapping module 320, configured to map a second anchor frame used for capturing a complete face image in a second region of the face captured image to obtain a second anchor frame image, where the second region includes a central region of the face captured image, and the first anchor frame and the second anchor frame have different sizes;
the face image recognition module 330 is configured to perform face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
the face image selection module 340, which determines a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image;
and the face image processing module 350, which performs an image processing operation related to the face image based on the target anchor point frame face image.
According to the solutions of the embodiments of this specification, during face capture, in addition to mapping anchor point frames for capturing complete face images onto the captured image, dedicated anchor point frames for capturing incomplete face images are mapped onto the edge region of the captured image, thereby introducing recognition of incomplete face images. From the candidate anchor point frame face images framed by all the anchor point frames, the most likely target anchor point frame face image is intelligently selected according to face size and/or face distance, and the face-scanning application is then executed on it. This closes the blind spot in which face-scanning applications fail on incomplete face images, improves face selection accuracy in a number of special scenarios without affecting the application's original performance, and makes the face-scanning result better match the user's expectations.
Optionally, the face image recognition module 330 inputs the first anchor frame image into a first face recognition model for face recognition and determines the candidate anchor frame face images screened out by the first face recognition model, where the first face recognition model is trained on sample incomplete face images and their corresponding classification labels.
The first face recognition model is provided with base vectors of facial features and of head-shoulder features, and with an encoder for extracting the facial features and the head-shoulder features from the image input to the first face recognition model, wherein the facial features and head-shoulder features extracted by the encoder are used as input to be matched against the base vectors.
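One way to read this is that the encoder's two feature vectors are scored against the stored base vectors. The equal-weight fusion below is an assumption for illustration, not the exact scheme of this specification:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den if den else 0.0

def screen_partial_face(face_vec, shoulder_vec, base_face, base_shoulder,
                        threshold=0.5):
    """Fuse facial and head-shoulder similarity into one screening score.

    The 50/50 weighting and the threshold are illustrative assumptions;
    the patent only states that both feature types are used.
    """
    score = (0.5 * cosine(face_vec, base_face) +
             0.5 * cosine(shoulder_vec, base_shoulder))
    return score >= threshold, score
```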
Optionally, the face image recognition module 330 inputs the second anchor frame image into a second face recognition model for face recognition and determines the candidate anchor frame face images screened out by the second face recognition model, where the second face recognition model is trained on sample complete face images and their corresponding classification labels.
Optionally, before performing face recognition screening on the first anchor frame image and the second anchor frame image, the face image recognition module 330 further inputs the first anchor frame image into a first anchor frame adjustment model to correct the size and/or position of the first anchor frame corresponding to the first anchor frame image, where the first anchor frame adjustment model is trained on sample incomplete face images framed by the first anchor frame and their corresponding classification labels; and/or inputs the second anchor frame image into a second anchor frame adjustment model to correct the size and/or position of the second anchor frame corresponding to the second anchor frame image, where the second anchor frame adjustment model is trained on sample complete face images framed by the second anchor frame and their corresponding classification labels.
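The adjustment models output corrections to an anchor's size and/or position. A plausible stand-in is the standard box-regression parameterization; the (dx, dy, dw, dh) scheme is an assumption, not stated in this specification:

```python
import math

def apply_anchor_adjustment(anchor, offsets):
    """Apply predicted offsets to an anchor box.

    anchor:  (cx, cy, w, h) in pixels.
    offsets: (dx, dy, dw, dh) -- center shifts scaled by box size,
             log-scale width/height corrections (assumed convention).
    """
    cx, cy, w, h = anchor
    dx, dy, dw, dh = offsets
    return (cx + dx * w,          # shift center horizontally
            cy + dy * h,          # shift center vertically
            w * math.exp(dw),     # rescale width
            h * math.exp(dh))     # rescale height
```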
Optionally, the face image selection module 340 selects, from the candidate anchor frame face images, the one with the largest face size or the one whose face is closest to the acquisition position as the target anchor frame face image. For example, each candidate anchor frame face image may be completed into a full face on the basis of its original face size, and the one with the largest completed face size is selected as the target anchor frame face image; or the distance from the face in each candidate anchor frame face image to the acquisition position may be determined from the pixel depth information of that image, and the one whose face is closest to the acquisition position is selected as the target anchor frame face image.
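The two selection strategies can be sketched as follows. The `completion_scale` factor standing in for complete-face completion and the per-candidate `depth` field are illustrative assumptions:

```python
def select_target(candidates, by="size"):
    """Pick the target candidate by completed face size or by distance.

    candidates: list of dicts with keys
      'box'              -- (x, y, w, h) of the detected face,
      'completion_scale' -- assumed factor mapping a partial face to its
                            completed size (1.0 for already-complete faces),
      'depth'            -- mean pixel depth to the acquisition position.
    """
    if len(candidates) == 1:
        # Only one candidate: it is the target directly.
        return candidates[0]
    if by == "size":
        # Area of the completed face: scale applies to both dimensions.
        return max(candidates,
                   key=lambda c: (c["box"][2] * c["box"][3] *
                                  c.get("completion_scale", 1.0) ** 2))
    # by == "depth": closest face to the acquisition position wins.
    return min(candidates, key=lambda c: c["depth"])
```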
In addition, if there is only one candidate anchor frame face image, the face image selection module 340 directly uses the candidate anchor frame face image as the target anchor frame face image.
Optionally, the face image module 350 initiates a user confirmation prompt for the target anchor frame face image. If a user confirmation operation for the target anchor frame face image is received, the image processing operation related to the face image is performed based on the target anchor frame face image; if a user denial operation is received, another candidate anchor frame face image is selected as the new target anchor frame face image, and a user confirmation prompt is initiated for the new target anchor frame face image.
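This confirm/deny flow amounts to walking the ranked candidates until one is accepted. Here `user_confirms` is an assumed UI callback, not an interface defined in this specification:

```python
def confirm_target(candidates, ranked_idx, user_confirms):
    """Prompt the user over ranked candidates; return the accepted one.

    candidates:    candidate anchor frame face images.
    ranked_idx:    candidate indices, best-first (e.g. by face size).
    user_confirms: callback(image) -> bool, standing in for the prompt.
    Returns None if every candidate is denied.
    """
    for idx in ranked_idx:
        if user_confirms(candidates[idx]):
            return candidates[idx]
    return None
```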
The face image processing apparatus of this embodiment of the specification can serve as the execution subject of the method shown in FIG. 1, and can therefore implement the steps and functions of the method described with reference to FIG. 1 and FIG. 2. Since the principle is the same, details are omitted here.
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring to FIG. 4, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include volatile memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but that does not indicate only one bus or one type of bus.
The memory is used for storing a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include both volatile memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile storage into the memory and runs it, forming the face image processing apparatus at the logic level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
mapping a first anchor point frame used for capturing an incomplete face image in a first region of a face captured image to obtain a first anchor point frame image, where the first region includes an edge region of the face captured image; and
mapping a second anchor point frame used for capturing a complete face image in a second region of the face captured image to obtain a second anchor point frame image, where the second region includes a central region of the face captured image, and the first anchor point frame and the second anchor point frame differ in size;
performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image; and
performing an image processing operation related to the face image based on the target anchor point frame face image.
According to the electronic device of this embodiment of the specification, during face capture, in addition to the anchor frames mapped in the captured image for capturing complete face images, dedicated anchor frames for capturing incomplete face images are mapped in the edge region of the captured image, which introduces recognition of incomplete face images. The target anchor frame face image most likely intended by the user is then selected, according to face size and/or face distance, from the candidate anchor frame face images framed by all the anchor frames, and the face image processing is performed on it. This compensates for the blind spot in which the processing cannot take effect on incomplete face images, improves the accuracy of face selection in a number of special scenarios without affecting the performance of the original application, and makes the face image processing better match user expectations.
The method disclosed in the embodiment of FIG. 1 of this specification may be implemented in, or performed by, a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of this specification may thus be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the embodiments of this specification may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It should be understood that the electronic device of the embodiment of the present specification can implement the functions of the embodiments of the face image processing method shown in fig. 1 and fig. 2. Since the principle is the same, the detailed description is omitted here.
Of course, besides the software implementation, the electronic device in this specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Furthermore, the present specification embodiments also propose a computer-readable storage medium storing one or more programs, the one or more programs including instructions.
Wherein the instructions, when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiment shown in fig. 1, and are specifically configured to perform the following steps:
mapping a first anchor point frame used for capturing an incomplete face image in a first region of a face captured image to obtain a first anchor point frame image, where the first region includes an edge region of the face captured image; and
mapping a second anchor point frame used for capturing a complete face image in a second region of the face captured image to obtain a second anchor point frame image, where the second region includes a central region of the face captured image, and the first anchor point frame and the second anchor point frame differ in size;
performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image; and
performing an image processing operation related to the face image based on the target anchor point frame face image.
According to the electronic device of this embodiment of the specification, during face capture, in addition to the anchor frames mapped in the captured image for capturing complete face images, dedicated anchor frames for capturing incomplete face images are mapped in the edge region of the captured image, which introduces recognition of incomplete face images. The target anchor frame face image most likely intended by the user is then selected, according to face size and/or face distance, from the candidate anchor frame face images framed by all the anchor frames, and the face image processing is performed on it. This compensates for the blind spot in which the processing cannot take effect on incomplete face images, improves the accuracy of face selection in a number of special scenarios without affecting the performance of the original application, and makes the face image processing better match user expectations.
As will be appreciated by one of ordinary skill in the art, the embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is only an example of the present specification and is not intended to limit it. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present specification shall be included in the scope of its claims. Moreover, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of this document.

Claims (14)

1. A face image processing method comprises the following steps:
mapping a first anchor point frame used for capturing an incomplete face image in a first region of a face captured image to obtain a first anchor point frame image, wherein the first region comprises an edge region of the face captured image; and
mapping a second anchor point frame used for capturing a complete face image in a second region of the face captured image to obtain a second anchor point frame image, wherein the second region comprises a central region of the face captured image, and the first anchor point frame and the second anchor point frame differ in size;
performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on a face size and/or a face distance corresponding to each candidate anchor point frame face image; and
performing an image processing operation related to the face image based on the target anchor point frame face image.
2. The method of claim 1, wherein
carrying out face recognition screening on the first anchor frame image and the second anchor frame image, and determining at least one candidate anchor frame face image, wherein the method comprises the following steps:
inputting the first anchor point frame image into a first face recognition model for face recognition, and determining candidate anchor point frame face images obtained by screening the first face recognition model, wherein the first face recognition model is obtained by training based on a sample incomplete face image and a corresponding classification label.
3. The method of claim 2, wherein
the first face recognition model is provided with base vectors of facial features and of head-shoulder features, and with an encoder for extracting the facial features and the head-shoulder features from the image input to the first face recognition model, wherein the facial features and head-shoulder features extracted by the encoder are used as input to be matched against the base vectors.
4. The method of claim 1, wherein
carrying out face recognition screening on the first anchor frame image and the second anchor frame image, and determining at least one candidate anchor frame face image, wherein the method comprises the following steps:
inputting the second anchor point frame image into a second face recognition model for face recognition, and determining a candidate anchor point frame face image obtained by screening the second face recognition model, wherein the second face recognition model is obtained by training based on a sample complete face image and a corresponding classification label.
5. The method of claim 1, wherein
before the face recognition screening is performed on the first anchor frame image and the second anchor frame image, the method further comprises the following steps:
inputting the image of the first anchor point frame into a first anchor point frame adjustment model so as to correct the size and/or position of the first anchor point frame corresponding to the image of the first anchor point frame, wherein the first anchor point frame adjustment model is obtained by training a sample incomplete face image divided by the first anchor point frame and a corresponding classification label;
and/or,
and inputting the second anchor point frame image into a second anchor point frame adjustment model so as to correct the size and/or position of a second anchor point frame corresponding to the second anchor point frame image, wherein the second anchor point frame adjustment model is obtained by training a sample complete face image divided by the second anchor point frame and a corresponding classification label.
6. The method of claim 1, wherein
the determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size or the face distance corresponding to each candidate anchor point frame face image comprises:
and selecting one of the face images with the largest face size or the face closest to the acquisition position from the candidate anchor point frame face images as a target anchor point frame face image.
7. The method of claim 6, wherein
the selecting, from the candidate anchor point frame face images, the one with the largest face size or the one whose face position is closest to the acquisition position as the target anchor point frame face image comprises:
performing complete-face completion on each candidate anchor point frame face image on the basis of the original face size corresponding to that image to obtain the face size of the completed face, and selecting, from the candidate anchor point frame face images, the one with the largest completed face size as the target anchor point frame face image; or,
determining the distance from the face position in each candidate anchor point frame face image to the acquisition position based on the pixel depth information corresponding to that image, and selecting the one whose face position is closest to the acquisition position as the target anchor point frame face image.
8. The method according to any one of claims 1 to 7,
the determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image comprises:
and if one and only one candidate anchor point frame face image exists, directly taking the candidate anchor point frame face image as the target anchor point frame face image.
9. The method according to any one of claims 1 to 7,
wherein, if there is more than one candidate anchor point frame face image, the performing an image processing operation related to the face image based on the target anchor point frame face image further comprises:
initiating a user confirmation prompt aiming at the target anchor point frame face image;
if user confirmation operation aiming at the target anchor point frame face image is received, executing image processing operation related to the face image based on the target anchor point frame face image;
and if the user denial operation aiming at the target anchor point frame face image is received, selecting other candidate anchor point frame face images as a new target anchor point frame face image, and initiating a user confirmation prompt aiming at the new target anchor point frame face image.
10. The method according to any one of claims 1 to 7,
the second region further comprises an edge region of the face capture image.
11. The method of any of claims 1-7, wherein the facial image-dependent image processing operation comprises at least one of:
identity recognition operation based on the face image;
beautifying operation based on the face image;
and detecting the body temperature based on the human face image.
12. A face image processing apparatus comprising:
the first anchor point frame mapping module is used for mapping a first anchor point frame used for capturing an incomplete face image in a first area in the face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image;
a second anchor frame mapping module, which maps a second anchor frame used for capturing a complete face image in a second area in the face captured image to obtain a second anchor frame image, wherein the second area comprises a central area of the face captured image, and the first anchor frame and the second anchor frame have different sizes;
the face image recognition module is used for carrying out face recognition screening on the first anchor point frame image and the second anchor point frame image and determining at least one candidate anchor point frame face image;
the face image selection module is used for determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image;
and the human face image module executes image processing operation related to the human face image based on the human face image of the target anchor point frame.
13. An electronic device includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
mapping a first anchor point frame used for capturing an incomplete face image in a first region of a face captured image to obtain a first anchor point frame image, wherein the first region comprises an edge region of the face captured image; and
mapping a second anchor point frame used for capturing a complete face image in a second area in the face collected image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face collected image, and the first anchor point frame and the second anchor point frame are different in size;
carrying out face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image;
and executing image processing operation related to the face image based on the target anchor point frame face image.
14. A computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of:
mapping a first anchor point frame used for capturing an incomplete face image in a first region of a face captured image to obtain a first anchor point frame image, wherein the first region comprises an edge region of the face captured image; and
mapping a second anchor point frame used for capturing a complete face image in a second area in the face collected image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face collected image, and the first anchor point frame and the second anchor point frame are different in size;
carrying out face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image;
and executing image processing operation related to the face image based on the target anchor point frame face image.
CN202111150500.XA 2021-09-29 2021-09-29 Face image processing method and device and electronic equipment Active CN113850210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111150500.XA CN113850210B (en) 2021-09-29 2021-09-29 Face image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113850210A true CN113850210A (en) 2021-12-28
CN113850210B CN113850210B (en) 2024-05-17

Family

ID=78977121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111150500.XA Active CN113850210B (en) 2021-09-29 2021-09-29 Face image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113850210B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423707A (en) * 2017-07-25 2017-12-01 深圳帕罗人工智能科技有限公司 A kind of face Emotion identification method based under complex environment
CN107679514A (en) * 2017-10-20 2018-02-09 维沃移动通信有限公司 A kind of face identification method and electronic equipment
CN109409210A (en) * 2018-09-11 2019-03-01 北京飞搜科技有限公司 A kind of method for detecting human face and system based on SSD frame
WO2019145578A1 (en) * 2018-06-11 2019-08-01 Fotonation Limited Neural network image processing apparatus
CN111401283A (en) * 2020-03-23 2020-07-10 北京达佳互联信息技术有限公司 Face recognition method and device, electronic equipment and storage medium
CN111652051A (en) * 2020-04-21 2020-09-11 高新兴科技集团股份有限公司 Face detection model generation method, device, equipment and storage medium
CN112541483A (en) * 2020-12-25 2021-03-23 三峡大学 Dense face detection method combining YOLO and blocking-fusion strategy
US20210150171A1 (en) * 2018-07-27 2021-05-20 Vivo Mobile Communication Co., Ltd. Object recognition method and mobile terminal
US20210295088A1 (en) * 2020-12-11 2021-09-23 Beijing Baidu Netcom Science & Technology Co., Ltd Image detection method, device, storage medium and computer program product


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN Yu: "Simulation of face terminal identity recognition under mobile device network security", Computer Simulation, no. 10, 15 October 2017 (2017-10-15), pages 352-356 *

Also Published As

Publication number Publication date
CN113850210B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
CN110232369B (en) Face recognition method and electronic equipment
KR101303877B1 (en) Method and apparatus for serving prefer color conversion of skin color applying face detection and skin area detection
CN107220640B (en) Character recognition method, character recognition device, computer equipment and computer-readable storage medium
CN109492577B (en) Gesture recognition method and device and electronic equipment
CN111931594A (en) Face recognition living body detection method and device, computer equipment and storage medium
CN110503059B (en) Face recognition method and system
CN111091089B (en) Face image processing method and device, electronic equipment and storage medium
JP4496005B2 (en) Image processing method and image processing apparatus
CN112766065A (en) Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN113850210A (en) Face image processing method and device and electronic equipment
CN116052230A (en) Palm vein recognition method, device, equipment and storage medium
CN116129484A (en) Method, device, electronic equipment and storage medium for model training and living body detection
CN113014914B (en) Neural network-based single face-changing short video identification method and system
WO2017219562A1 (en) Method and apparatus for generating two-dimensional code
CN114332981A (en) Face living body detection method and device, electronic equipment and storage medium
CN114596638A (en) Face living body detection method, device and storage medium
CN111275045A (en) Method and device for identifying image subject, electronic equipment and medium
CN112116523B (en) Image processing method, device, terminal and medium for portrait hair
CN115082995B (en) Face living body detection method and device and electronic equipment
CN113516089B (en) Face image recognition method, device, equipment and readable storage medium
JP2004199200A (en) Pattern recognition device, imaging apparatus, information processing system, pattern recognition method, recording medium and program
CN115116147B (en) Image recognition, model training, living body detection method and related device
Dixit et al. SIFRS: Spoof Invariant Facial Recognition System (A Helping Hand for Visual Impaired People)
CN115082991A (en) Face living body detection method and device and electronic equipment
CN113505682A (en) Living body detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant