CN113850210B - Face image processing method and device and electronic equipment - Google Patents
Abstract
An embodiment of this specification provides a face image processing method, a face image processing apparatus, and an electronic device. The method comprises the following steps: mapping first anchor frames, used for capturing incomplete face images, in a first region of a face acquisition image to obtain first anchor frame images, where the first region comprises the edge region of the face acquisition image; mapping second anchor frames, used for capturing complete face images, in a second region of the face acquisition image to obtain second anchor frame images, where the second region comprises the central region of the face acquisition image and the first and second anchor frames differ in size; performing face recognition screening on the first anchor frame images and the second anchor frame images to determine at least one candidate anchor frame face image; determining a target anchor frame face image from the candidate anchor frame face images based on the face size and/or face distance corresponding to each candidate; and executing an image processing operation related to the face image based on the target anchor frame face image.
Description
Technical Field
This document relates to the technical field of information processing, and in particular to a face image processing method and apparatus, and an electronic device.
Background
With the development of face recognition technology, face-image applications are becoming increasingly popular on terminal devices. At present, such applications mainly recognize complete face images. In many scenarios, if the user's complete face is not captured, detection may be missed; and if other people's faces appear in the frame, the wrong face may be selected. Both problems prevent the result from meeting the user's expectations and ultimately harm the user experience.
Therefore, a more intelligent face image processing scheme is needed, one that can recognize incomplete face images and accurately determine the correct face image when multiple face images are captured.
Disclosure of Invention
An embodiment of the present disclosure aims to provide a face image processing method, a device, and an electronic apparatus, which can identify an incomplete face image, and accurately determine a correct face image when a plurality of face images are scanned.
In order to achieve the above object, the embodiments of the present specification are implemented as follows:
in a first aspect, a face image processing method is provided, including:
mapping a first anchor point frame for capturing an incomplete face image in a first area in a face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image; and
Mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different;
performing face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face sizes and/or the face distances corresponding to the candidate anchor point frame face images;
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
In a second aspect, there is provided a face image processing apparatus including:
the first anchor point frame mapping module is used for mapping a first anchor point frame for capturing an incomplete face image in a first area in the face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image; and
The second anchor frame mapping module is used for mapping a second anchor frame used for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor frame image, the second area comprises a central area of the face acquisition image, and the sizes of the first anchor frame and the second anchor frame are different;
the face image recognition module is used for carrying out face recognition screening on the first anchor point frame image and the second anchor point frame image and determining at least one candidate anchor point frame face image;
The face image selection module is used for determining a target anchor point frame face image from the candidate anchor point frame face images based on face sizes and/or face distances corresponding to the candidate anchor point frame face images;
and the face image module is used for executing image processing operation related to the face image based on the target anchor point frame face image.
In a third aspect, there is provided an electronic device comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, performing the steps of:
mapping a first anchor point frame for capturing an incomplete face image in a first area in a face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image; and
Mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different;
performing face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face sizes and/or the face distances corresponding to the candidate anchor point frame face images;
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
mapping a first anchor point frame for capturing an incomplete face image in a first area in a face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image; and
Mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different;
performing face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face sizes and/or the face distances corresponding to the candidate anchor point frame face images;
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
According to the scheme of the embodiments of this specification, during face acquisition, in addition to mapping anchor frames for capturing complete face images in the face acquisition image, dedicated anchor frames for capturing incomplete face images are mapped in the edge region of the face acquisition image, thereby introducing recognition of incomplete face images. From the candidate anchor frame face images selected by these anchor frames, the target anchor frame face image with the highest likelihood is chosen intelligently according to face size and/or face distance, and the face-based processing is performed on it. This closes the blind spot in which incomplete face images could not take effect, improves face-selection accuracy in many special scenarios without affecting original application performance, and makes the result better match user expectations.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the embodiments of the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the relevant art.
Fig. 1 is a schematic flow chart of a first face image processing method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a second flow of a face image processing method according to an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a face image processing apparatus according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which are obtained by persons of ordinary skill in the art without creative efforts, based on the embodiments in the present specification should be considered as falling within the protection scope of the present specification.
As mentioned above, current face image processing mainly recognizes complete face images. In many scenarios, if the user's complete face is not captured, detection may be missed; and if other people's faces appear in the frame, the wrong face may be selected. Both problems prevent the face image from meeting user expectations and ultimately harm the experience. To improve the face-swiping success rate, this document provides a more intelligent face image processing scheme that can recognize incomplete face images and accurately determine the correct face image when multiple face images are captured.
Fig. 1 is a flowchart of a face image processing method according to an embodiment of the present disclosure, and as shown in fig. 1, the method may include the steps of:
S102, mapping a first anchor point frame for capturing an incomplete face image in a first area in the face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image.
It will be appreciated that, under normal circumstances, an incomplete face is typically located at the edge of the face acquisition image, which is why a full face scan cannot be obtained. Therefore, the embodiments of this specification may map, in the edge region of the face acquisition image, first anchor frames set for incomplete face images.
Of course, if the terminal energy consumption is not considered, the first anchor frame may be mapped to other positions of the face acquisition image in this step, that is, the first area includes at least an edge area of the face acquisition image.
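As a concrete illustration of mapping first anchor frames only in the edge region, the sketch below tiles fixed-size boxes over a captured image and keeps those whose centers fall in an edge band. The box size, stride, and band width are illustrative assumptions; the patent does not specify numeric values.

```python
def tile_edge_anchors(img_w, img_h, box_w=80, box_h=100, stride=40, band=60):
    """Return (x, y, w, h) anchors whose centers fall inside an edge band
    of width `band` pixels around the image border (assumed values)."""
    anchors = []
    for y in range(0, img_h - box_h + 1, stride):
        for x in range(0, img_w - box_w + 1, stride):
            cx, cy = x + box_w / 2, y + box_h / 2
            in_band = (cx < band or cx > img_w - band or
                       cy < band or cy > img_h - band)
            if in_band:
                anchors.append((x, y, box_w, box_h))
    return anchors
```

Restricting the tiling to the edge band is what keeps the extra anchors from inflating the computation over the whole image.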
S104, mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different.
It should be understood that, under normal conditions, the complete face image is captured with strong scan directivity; that is, complete face images are mostly located near the center of the face acquisition image. Therefore, the embodiments of this specification may map, in the central region of the face acquisition image, second anchor frames set for complete face images.
Similarly, if the terminal resource consumption is not considered, the second anchor frame may be mapped to other positions of the face acquisition image in this step, for example, in order to obtain better face recognition performance, and the second anchor frame may be mapped to all positions of the face acquisition image, that is, the second region may include an edge region of the face acquisition image.
S106, face recognition screening is carried out on the first anchor point frame image and the second anchor point frame image, and at least one candidate anchor point frame face image is determined.
It should be understood that after mapping the first anchor frame and the second anchor frame to the face acquisition image, the first anchor frame image and the second anchor frame image obtained by frame selection are not necessarily face images, so face recognition screening is also required.
Alternatively, this step may automatically identify, based on artificial intelligence techniques, the face images present in the first anchor frame images and the second anchor frame images, i.e. the candidate anchor frame face images. It should be appreciated that a candidate anchor frame face image selected from a first anchor frame image is an incomplete face image, while a candidate anchor frame face image selected from a second anchor frame image may be considered a complete face image.
Specifically, for face recognition screening of the first anchor frame images, a first face recognition model for recognizing incomplete face images can be trained in advance based on sample incomplete face images and corresponding classification labels (each label marks a sample incomplete face image as a positive or negative sample). Here, to improve the first face recognition model's ability to recognize incomplete face images, sample incomplete face images carrying head-shoulder features may be introduced into training. That is, in the embodiments of this specification, facial features and head-shoulder features may serve as the underlying vectors of the first face recognition model, and an encoder may be provided to extract the facial features and head-shoulder features of images input to the first face recognition model (the features extracted by the encoder feed those underlying vectors).
Similarly, for face recognition screening of the second anchor frame images, a second face recognition model for recognizing complete face images can be trained in advance, under supervision, based on sample complete face images and corresponding classification labels. Correspondingly, in this step, the second anchor frame images are input to the trained second face recognition model for face recognition, and the candidate anchor frame face images are obtained directly by screening.
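The two-model screening described above can be sketched as follows. The `partial_model` and `full_model` callables are hypothetical stand-ins for the trained first and second face recognition models; the 0.5 threshold and the score-in-[0, 1] interface are assumptions for illustration.

```python
def screen_candidates(edge_crops, center_crops, partial_model, full_model,
                      thresh=0.5):
    """Run each anchor frame crop through the matching recognizer and keep
    those scored as faces; both models are assumed to return a
    face-probability in [0, 1]."""
    candidates = []
    for crop in edge_crops:              # first anchor frame images
        if partial_model(crop) >= thresh:
            candidates.append(('partial', crop))
    for crop in center_crops:            # second anchor frame images
        if full_model(crop) >= thresh:
            candidates.append(('full', crop))
    return candidates
```

Tagging each candidate as partial or full matters later, because only partial candidates need full-face size completion before comparison.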
S108, determining the target anchor frame face image from the candidate anchor frame face images based on the face size and/or the face distance corresponding to each candidate anchor frame face image.
It should be appreciated that, in general, there is a high probability that the face image of the candidate anchor frame closest to the acquisition location is the correct face image. Therefore, the candidate anchor point frame face image with the face closest to the acquisition position can be used as the target anchor point frame face image in the step. It should be noted that, the manner of calculating the distance between the candidate anchor frame face image and the acquisition position is not limited in detail herein. By way of example, the distance of the candidate anchor frame face image to the acquisition location may be quantified based on pixel parameters (e.g., pixel average depth values) in the candidate anchor frame face image.
In addition, the candidate anchor frame face image with the largest face size also has a high probability of being the correct face image. Therefore, this step may take the candidate with the largest face size as the target anchor frame face image. Because candidates screened from the first anchor frame images are incomplete face images, their full-face size must be completed before their face sizes can be compared with those of candidates screened from the second anchor frame images. The completion principle is to predict the complete face size from the original face size corresponding to the candidate anchor frame face image. As one implementation, in the embodiments of this specification the size of the anchor frame corresponding to the candidate image may be taken directly as the original face size, and the anchor frame of a candidate screened from the first anchor frame images may be enlarged using the aspect ratio of the second anchor frame as the standard (the enlarged aspect ratio matches the standard), yielding the full-face size.
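A minimal sketch of the completion principle just described: the partial-face anchor is enlarged, never shrunk, until its aspect ratio matches the second anchor frame's standard. The standard width-to-height ratio of 0.75 is an assumed placeholder, not a value from the patent.

```python
def predict_full_face_size(w, h, std_aspect=0.75):
    """Enlarge a partial-face anchor (w, h) so its aspect ratio w/h matches
    the full-face standard; only grow a dimension, never shrink one."""
    if w / h < std_aspect:      # box too narrow: widen to the standard
        w = h * std_aspect
    else:                       # box too flat: heighten to the standard
        h = w / std_aspect
    return w, h
```

Because a dimension can only grow, the predicted full-face area is always at least the original anchor area, which keeps partial faces competitive in the size comparison.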
Of course, the two ways of determining the target anchor frame face image, by face size and by face distance, may be combined: for example, weight the face size and the face distance to compute, for each candidate anchor frame face image, a composite score of being the target, and select the candidate with the highest score as the target anchor frame face image. In addition, if only one candidate anchor frame face image has been selected, this step can directly take that unique candidate as the target anchor frame face image, skipping the calculation.
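The weighted combination can be sketched as below. The weights, the normalization, and the use of mean pixel depth as the distance proxy are illustrative assumptions, not values from the specification.

```python
def pick_target(candidates, w_size=0.6, w_dist=0.4):
    """candidates: dicts with 'area' (predicted full-face size) and 'depth'
    (mean pixel depth, a proxy for distance to the acquisition position).
    Larger area and smaller depth both raise the composite score."""
    if len(candidates) == 1:        # single candidate: no scoring needed
        return candidates[0]
    max_a = max(c['area'] for c in candidates)
    max_d = max(c['depth'] for c in candidates)
    def score(c):
        return w_size * (c['area'] / max_a) + w_dist * (1 - c['depth'] / max_d)
    return max(candidates, key=score)
```

Normalizing each term by the per-batch maximum keeps size and distance on comparable scales before weighting.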
S110, based on the target anchor point frame face image, performing image processing operation related to the face image.
It should be appreciated that the image processing operations described herein may include at least one of the following:
identity recognition operation based on the face image;
Face beautifying operation based on face images;
A body temperature detection operation based on the face image.
The image processing operation may be from a face image application. The face image application may include: face identification applications, face beautification applications, face temperature detection applications, and the like, which are not particularly limited herein.
In addition, to avoid selecting a target anchor frame face image that is not the face image the user expects, the user may be asked to confirm the target anchor frame face image before the face-based processing is performed.
That is, this step initiates a user confirmation prompt for the target anchor frame face image. If a user confirmation of the target anchor frame face image is received, the image processing operation related to the face image is executed based on it. If a user denial is received, a new target anchor frame face image is selected from the remaining candidate anchor frame face images and a confirmation prompt is initiated for it, until the user confirms a target anchor frame face image or no candidate anchor frame face image remains for confirmation.
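The confirmation flow above reduces to a simple iteration over ranked candidates; `ask_user` is a hypothetical callback standing in for the confirmation prompt.

```python
def confirm_target(ranked_candidates, ask_user):
    """Present candidates in order until the user confirms one; return None
    if every candidate is denied (nothing left to confirm)."""
    for cand in ranked_candidates:
        if ask_user(cand):
            return cand
    return None
```

A `None` result corresponds to the terminal case in the text where no candidate anchor frame face image is available for confirmation.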
In addition, to prevent an incomplete face image, once selected as the target anchor frame face image, from affecting the subsequent face-based processing, this step may also perform complement rendering of the complete face on the target anchor frame face image, and then either run the face-based processing on the rendered image, or display the rendered image in the face acquisition interface to prompt the user to correct the acquisition position. The complement rendering scheme is not specifically limited here. As an exemplary introduction, the missing part of the face may be complemented pixel by pixel, by symmetry, from the pixel information of the part of the face present in the target anchor frame face image.
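A toy sketch of the symmetric pixel complement: each pixel row of the visible part is mirrored about the vertical mid-line to fill the missing side. A production implementation would blend at the seam and respect facial landmarks; this only shows the symmetry idea, and the `cut` parameter is an assumed convention.

```python
def mirror_complete(rows, cut='right'):
    """rows: list of pixel rows for the visible part of the face.
    Fill the missing side by mirroring each row about the vertical
    mid-line -- the symmetric complement described above."""
    if cut == 'right':                 # right half of the face is missing
        return [r + r[::-1] for r in rows]
    return [r[::-1] + r for r in rows]  # left half is missing
```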
Thus, according to the method of the embodiments of this specification, during face acquisition, in addition to mapping anchor frames for capturing complete face images in the face acquisition image, dedicated anchor frames for capturing incomplete face images are mapped in the edge region of the face acquisition image, thereby introducing recognition of incomplete face images. From the candidate anchor frame face images selected by these anchor frames, the target anchor frame face image with the highest likelihood is chosen intelligently according to face size and/or face distance, and the face-based processing is performed on it. This resolves the blind spot in which incomplete face images could not take effect, improves face-selection accuracy in many special scenarios without affecting original performance, and makes the result better match user expectations.
The method of the embodiment of the present description will be described below with reference to an application scenario of face payment. The corresponding flow is as follows:
S201, after the face payment application starts face collection, first anchor frames for capturing incomplete face images are mapped to the edge region of the face acquisition image, and at the same time second anchor frames for capturing complete face images are mapped to all regions of the face acquisition image.
The first anchor frames adopt the common aspect ratio of incomplete face images, and are mapped only to the edge region so as to save computation. A large number of first anchor frames attempt to frame-select incomplete face images in the face acquisition image; similarly, a large number of second anchor frames attempt to frame-select complete face images in the face acquisition image.
S202, face recognition is carried out on a first anchor point frame image defined by the first anchor point frame and a second anchor point frame image defined by the second anchor point frame, and candidate anchor point frame face images are determined.
After the candidate anchor frame face images are determined, this step may also deduplicate them: for example, among candidate anchor frame face images whose overlap (e.g. intersection-over-union) reaches a preset threshold, only one is retained.
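The deduplication step can be sketched with a greedy intersection-over-union filter; the 0.5 threshold is an assumed preset standard.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def dedup(boxes, thresh=0.5):
    """Keep one box from each group whose pairwise overlap exceeds thresh."""
    kept = []
    for b in boxes:
        if all(iou(b, k) <= thresh for k in kept):
            kept.append(b)
    return kept
```

For score-ranked candidates this is the usual non-maximum-suppression pattern: sort by score first, and the highest-scoring box in each overlapping group survives.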
S203, performing anchor frame correction on each candidate anchor frame face image.
Specifically, this step may use artificial intelligence techniques to automatically correct the anchor frame corresponding to each candidate anchor frame face image. A first anchor frame image is input into a preset first anchor frame adjustment model to correct the size and/or position of the first anchor frame corresponding to it, where the first anchor frame adjustment model is trained on sample incomplete face images delimited by first anchor frames and corresponding classification labels. Similarly, a second anchor frame image is input into a preset second anchor frame adjustment model to correct the size and/or position of the second anchor frame corresponding to it, where the second anchor frame adjustment model is trained on sample complete face images delimited by second anchor frames and corresponding classification labels.
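The patent does not give the adjustment models' output parameterisation; assuming the standard box-regression deltas, applying a model's correction to an anchor frame might look like:

```python
import math

def apply_anchor_deltas(box, deltas):
    """box: (x, y, w, h); deltas: (dx, dy, dw, dh) as predicted by an
    adjustment model. Offsets are scaled by the box size; width and
    height are scaled exponentially (standard box-regression decoding)."""
    x, y, w, h = box
    dx, dy, dw, dh = deltas
    return (x + dx * w, y + dy * h, w * math.exp(dw), h * math.exp(dh))
```

The exponential scaling keeps corrected widths and heights strictly positive regardless of the raw model output.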
S204, judging whether any candidate anchor frame face image was screened from the first anchor frame images; if yes, execute S205; otherwise, skip S205 and execute S206.
S205, predicting the whole face size of the face image of the candidate anchor block frame selected from the first anchor block frame image.
Here, in order to simplify the calculation, the face size represented by the candidate anchor frame face image screened from the second anchor frame image may be regarded as the full face size, that is, the prediction of the full face size is not performed on the candidate anchor frame face image screened from the second anchor frame image.
S206, selecting one of the candidate anchor frame face images with the largest face size of the whole face as a target anchor frame face image.
It should be noted that even a candidate anchor frame face image that is an incomplete face image may, after full-face size prediction, have a larger probability of being selected as the target than a candidate that is a complete face image. In addition, this application scenario may instead select the candidate anchor frame face image whose face is closest to the acquisition position as the target anchor frame face image; the principle was introduced above and is not repeated here.
S207, face verification for face payment is performed based on the target anchor frame face image.
It should be understood that this application scenario is an exemplary introduction to the method of the embodiments; the method is also applicable to other application scenarios such as face-based body temperature detection and beauty retouching, which are not detailed here. Appropriate changes to the steps of the method may be made without departing from the principles described herein, and such changes also fall within the protection scope of the embodiments of this specification.
In addition, corresponding to the method shown in fig. 1, an embodiment of this specification further provides a face image processing apparatus. Fig. 3 is a schematic structural diagram of a face image processing apparatus 300 according to an embodiment of this specification, the apparatus including:
The first anchor block mapping module 310 maps a first anchor block for capturing an incomplete face image in a first area in the face acquisition image to obtain a first anchor block image, wherein the first area comprises an edge area of the face acquisition image.
A second anchor frame mapping module 320, configured to map a second anchor frame for capturing a complete face image in a second area in the face acquisition image, to obtain a second anchor frame image, where the second area includes a central area of the face acquisition image, and the sizes of the first anchor frame and the second anchor frame are different;
the face image recognition module 330 performs face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
The face image selecting module 340 determines a target anchor frame face image from the candidate anchor frame face images based on the face sizes and/or the face distances corresponding to the candidate anchor frame face images;
The face image module 350 performs image processing operations related to the face image based on the target anchor point frame face image.
According to the scheme of the embodiments of this specification, during face acquisition, in addition to mapping anchor frames for capturing complete face images in the face acquisition image, dedicated anchor frames for capturing incomplete face images are mapped in the edge region of the face acquisition image, thereby introducing recognition of incomplete face images. From the candidate anchor frame face images selected by these anchor frames, the target anchor frame face image with the highest likelihood is chosen intelligently according to face size and/or face distance, and the face-based processing is performed on it. This closes the blind spot in which incomplete face images could not take effect, improves face-selection accuracy in many special scenarios without affecting original application performance, and makes the result better match user expectations.
Optionally, the face image recognition module 330 inputs the first anchor point frame image to a first face recognition model for face recognition, and determines the candidate anchor point frame face images obtained by screening of the first face recognition model, where the first face recognition model is trained based on sample incomplete face images and corresponding classification labels.
The first face recognition model is provided with a bottom-layer vector of facial features and head-shoulder features, and an encoder for extracting the facial features and head-shoulder features of an image input to the first face recognition model, where the facial features and head-shoulder features extracted by the encoder are input into the bottom-layer vector.
Optionally, the face image recognition module 330 inputs the second anchor point frame image to a second face recognition model for face recognition, and determines the candidate anchor point frame face images obtained by screening of the second face recognition model, where the second face recognition model is trained based on sample complete face images and corresponding classification labels.
Optionally, before performing face recognition screening on the first anchor frame image and the second anchor frame image, the face image recognition module 330 further inputs the first anchor frame image to a first anchor frame adjustment model to correct the size and/or position of the first anchor frame corresponding to the first anchor frame image, where the first anchor frame adjustment model is trained based on sample incomplete face images delimited by the first anchor frame and corresponding classification labels; and/or inputs the second anchor frame image to a second anchor frame adjustment model to correct the size and/or position of the second anchor frame corresponding to the second anchor frame image, where the second anchor frame adjustment model is trained based on sample complete face images delimited by the second anchor frame and corresponding classification labels.
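Size/position corrections like those output by the anchor frame adjustment models are commonly parameterized as box-regression deltas. The sketch below assumes the usual (dx, dy, dw, dh) parameterization used by detection heads; the patent does not specify the model's exact output format, so this is an assumption:

```python
import math

def adjust_anchor_frame(anchor, deltas):
    """Apply regression offsets (dx, dy, dw, dh) to an anchor (x, y, w, h).

    dx, dy shift the box center as fractions of width/height; dw, dh
    scale width/height through an exponential, the standard box-regression
    parameterization.
    """
    x, y, w, h = anchor
    dx, dy, dw, dh = deltas
    cx, cy = x + w / 2.0, y + h / 2.0          # current center
    new_cx, new_cy = cx + dx * w, cy + dy * h  # shifted center
    new_w, new_h = w * math.exp(dw), h * math.exp(dh)
    return (new_cx - new_w / 2.0, new_cy - new_h / 2.0, new_w, new_h)
```

With all-zero deltas the anchor is returned unchanged, which is the identity case a trained adjustment model converges toward for well-placed anchors.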
Optionally, the face image selection module 340 selects, from the candidate anchor frame face images, the one with the largest face size or the one whose face is closest to the acquisition position as the target anchor frame face image. For example: complete-face completion is performed on each candidate anchor frame face image based on its original face size to obtain the face size of the completed face, and the candidate with the largest completed-face size is selected as the target anchor frame face image; or, the distance from the face position in each candidate anchor frame face image to the acquisition position is determined based on the pixel depth information corresponding to that candidate anchor frame face image, and the candidate whose face position is closest to the acquisition position is selected as the target anchor frame face image.
In addition, if there is only one candidate anchor frame face image, the face image selection module 340 takes the candidate anchor frame face image directly as the target anchor frame face image.
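The selection logic of module 340, including the single-candidate shortcut, might be sketched as follows. The field names `completed_face_size` and `depth` are illustrative assumptions, not identifiers from the patent:

```python
def select_target_face(candidates):
    """Pick the target anchor frame face image from candidate records.

    Each candidate is a dict with an 'image_id' plus either a
    'completed_face_size' (face area after complete-face completion) or a
    'depth' (distance from the face to the acquisition position, e.g.
    derived from pixel depth information). A single candidate is returned
    directly; otherwise the largest completed face wins, falling back to
    the nearest face when sizes are unavailable.
    """
    if len(candidates) == 1:
        return candidates[0]
    if all('completed_face_size' in c for c in candidates):
        return max(candidates, key=lambda c: c['completed_face_size'])
    return min(candidates, key=lambda c: c['depth'])
```

A production system would combine both signals (and handle ties), but the size-or-distance choice above mirrors the "and/or" selection criterion in the text.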
Optionally, the face image module 350 specifically initiates a user confirmation prompt for the target anchor point frame face image; if a user confirmation operation for the target anchor point frame face image is received, it performs the image processing operation related to the face image based on the target anchor point frame face image; and if a user denial operation for the target anchor point frame face image is received, it selects another candidate anchor point frame face image as a new target anchor point frame face image and initiates a user confirmation prompt for the new target anchor point frame face image.
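The confirm/deny loop described for module 350 amounts to a priority walk over the candidates. In this sketch, `ask_user` is an assumed callback abstracting the user confirmation prompt; it is not an API named in the patent:

```python
def confirm_target(candidates, ask_user):
    """Walk candidates in priority order until one is confirmed.

    ask_user(candidate) should return True on a user confirmation
    operation and False on a denial. Returns the confirmed candidate,
    or None if the user denies every candidate.
    """
    for candidate in candidates:
        if ask_user(candidate):
            return candidate
    return None
```

Passing the candidates pre-sorted by the size/distance criterion means the most likely face is always offered first, and a denial simply promotes the next candidate.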
It is apparent that the face image processing apparatus according to the embodiment of the present specification can serve as the execution subject of the method shown in Fig. 1, and can therefore implement the steps and functions of the method shown in Fig. 1 and Fig. 2. Since the principle is the same, the description is not repeated here.
Fig. 4 is a schematic structural view of an electronic device according to an embodiment of the present specification. Referring to Fig. 4, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include volatile memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, network interface, and memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in Fig. 4, but this does not mean there is only one bus or one type of bus.
The memory is used for storing programs. In particular, a program may include program code, and the program code includes computer operation instructions. The memory may include volatile memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the nonvolatile memory to the memory and then runs the computer program to form the face image processing device on a logic level. The processor is used for executing the programs stored in the memory and is specifically used for executing the following operations:
Mapping a first anchor point frame for capturing an incomplete face image in a first area of the face acquisition image to obtain a first anchor point frame image, where the first area includes an edge area of the face acquisition image;
And mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different.
And carrying out face recognition screening on the first anchor point frame image and the second anchor point frame image, and determining at least one candidate anchor point frame face image.
And determining the target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image.
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
According to the above electronic device, when face acquisition is performed, in addition to the anchor frames mapped in the face acquisition image for capturing complete face images, dedicated anchor frames for capturing incomplete face images are mapped in the edge area of the face acquisition image, thereby introducing recognition of incomplete face images. A target anchor frame face image with a higher likelihood of being the intended face is then intelligently selected from the candidate anchor frame face images according to face size and/or face distance, and the face-image-related processing is performed on it. This remedies the blind spot in which face-image processing cannot take effect on incomplete face images, improves face selection accuracy in many special scenarios without affecting the original application performance, and makes the result better match user expectations.
The method disclosed in the embodiment shown in Fig. 1 of the present specification may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of this specification may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of this specification may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It should be understood that the electronic device of the embodiment of the present disclosure may implement the functions of the embodiment shown in fig. 1 and 2 of the face image processing method described above. Since the principle is the same, the description is not repeated here.
Of course, in addition to the software implementation, the electronic device in this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the foregoing processing flow is not limited to logic units, and may also be hardware or logic devices.
Furthermore, the present specification embodiment also proposes a computer-readable storage medium storing one or more programs including instructions.
Wherein the instructions, when executed by a portable electronic device comprising a plurality of applications, enable the portable electronic device to perform the method of the embodiment shown in fig. 1, and in particular to perform the steps of:
Mapping a first anchor point frame for capturing an incomplete face image in a first area of the face acquisition image to obtain a first anchor point frame image, where the first area includes an edge area of the face acquisition image;
And mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different.
And carrying out face recognition screening on the first anchor point frame image and the second anchor point frame image, and determining at least one candidate anchor point frame face image.
And determining the target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image.
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
According to the above scheme, when face acquisition is performed, in addition to the anchor frames mapped in the face acquisition image for capturing complete face images, dedicated anchor frames for capturing incomplete face images are mapped in the edge area of the face acquisition image, thereby introducing recognition of incomplete face images. A target anchor frame face image with a higher likelihood of being the intended face is then intelligently selected from the candidate anchor frame face images according to face size and/or face distance, and the face-image-related processing is performed on it. This remedies the blind spot in which face-image processing cannot take effect on incomplete face images, improves face selection accuracy in many special scenarios without affecting the original application performance, and makes the result better match user expectations.
It will be apparent to those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely an example of the present specification and is not intended to limit the present specification. Various modifications and alterations will occur to those skilled in the art to which the present disclosure pertains. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description. Moreover, all other embodiments obtained by persons of ordinary skill in the art without inventive effort are intended to be within the scope of this document.
Claims (14)
1. A face image processing method, comprising:
mapping a first anchor point frame for capturing an incomplete face image in a first area in a face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image; and
Mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different;
performing face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image;
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
2. The method according to claim 1,
wherein the performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image comprises:
and inputting the first anchor point frame image into a first face recognition model to carry out face recognition, and determining candidate anchor point frame face images obtained by screening the first face recognition model, wherein the first face recognition model is obtained by training based on sample incomplete face images and corresponding classification labels.
3. The method according to claim 2,
wherein the first face recognition model is provided with a bottom-layer vector of facial features and head-shoulder features, and an encoder for extracting the facial features and head-shoulder features of an image input to the first face recognition model, where the facial features and head-shoulder features extracted by the encoder are input into the bottom-layer vector.
4. The method according to claim 1,
wherein the performing face recognition screening on the first anchor point frame image and the second anchor point frame image to determine at least one candidate anchor point frame face image comprises:
And inputting the second anchor point frame image into a second face recognition model to carry out face recognition, and determining candidate anchor point frame face images obtained by screening the second face recognition model, wherein the second face recognition model is obtained by training based on the sample complete face images and the corresponding classification labels.
5. The method according to claim 1,
Before face recognition screening is performed on the first anchor frame image and the second anchor frame image, the method further comprises:
Inputting a first anchor point frame image into a first anchor point frame adjustment model to correct the size and/or the position of a first anchor point frame corresponding to the first anchor point frame image, wherein the first anchor point frame adjustment model is obtained based on training of a sample incomplete face image divided by the first anchor point frame and a corresponding classification label;
And/or the number of the groups of groups,
And inputting the second anchor frame image into a second anchor frame adjustment model to correct the size and/or the position of a second anchor frame corresponding to the second anchor frame image, wherein the second anchor frame adjustment model is obtained based on training of the sample whole face image divided by the second anchor frame and the corresponding classification label.
6. The method according to claim 1,
wherein the determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image comprises:
and selecting one of the face with the largest face size or the face closest to the acquisition position from the candidate anchor frame face images as a target anchor frame face image.
7. The method according to claim 6,
wherein the selecting, from the candidate anchor point frame face images, the one with the largest face size or the one whose face position is closest to the acquisition position as the target anchor point frame face image comprises:
performing complete-face completion on each candidate anchor point frame face image based on the original face size corresponding to that candidate anchor point frame face image to obtain the face size of the completed face, and selecting, from the candidate anchor point frame face images, the one with the largest completed-face size as the target anchor point frame face image; or
determining the distance from the face position in each candidate anchor point frame face image to the acquisition position based on the pixel depth information corresponding to that candidate anchor point frame face image, and selecting the one whose face position is closest to the acquisition position as the target anchor point frame face image.
8. The method according to any one of claims 1 to 7,
wherein the determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image comprises:
if there is only one candidate anchor point frame face image, directly taking the candidate anchor point frame face image as the target anchor point frame face image.
9. The method according to any one of claims 1 to 7,
wherein, if there is more than one candidate anchor point frame face image, the performing an image processing operation related to the face image based on the target anchor point frame face image further comprises:
Initiating a user confirmation prompt for the target anchor point frame face image;
If a user confirmation operation aiming at the target anchor point frame face image is received, executing image processing operation related to the face image based on the target anchor point frame face image;
And if the user denial operation aiming at the target anchor point frame face image is received, selecting other candidate anchor point frame face images as new target anchor point frame face images, and initiating a user confirmation prompt aiming at the new target anchor point frame face images.
10. The method according to any one of claims 1 to 7, wherein
The second region further comprises an edge region of the face acquisition image.
11. The method of any of claims 1-7, the face image-related image processing operations comprising at least one of:
identity recognition operation based on the face image;
Face beautifying operation based on face images;
A body temperature detection operation based on the face image.
12. A face image processing apparatus comprising:
The first anchor point frame mapping module is used for mapping a first anchor point frame for capturing an incomplete face image in a first area in the face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image;
The second anchor frame mapping module is used for mapping a second anchor frame used for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor frame image, the second area comprises a central area of the face acquisition image, and the sizes of the first anchor frame and the second anchor frame are different;
the face image recognition module is used for carrying out face recognition screening on the first anchor point frame image and the second anchor point frame image and determining at least one candidate anchor point frame face image;
The face image selection module is used for determining a target anchor point frame face image from candidate anchor point frame faces based on face sizes and/or face distances corresponding to the candidate anchor point frame face images;
and the face image module is used for executing image processing operation related to the face image based on the target anchor point frame face image.
13. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the computer program is executed by the processor, the following steps are performed:
mapping a first anchor point frame for capturing an incomplete face image in a first area in a face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image; and
Mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different;
performing face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image;
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
14. A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
mapping a first anchor point frame for capturing an incomplete face image in a first area in a face acquisition image to obtain a first anchor point frame image, wherein the first area comprises an edge area of the face acquisition image; and
Mapping a second anchor point frame for capturing the whole face image in a second area in the face acquisition image to obtain a second anchor point frame image, wherein the second area comprises a central area of the face acquisition image, and the sizes of the first anchor point frame and the second anchor point frame are different;
performing face recognition screening on the first anchor frame image and the second anchor frame image to determine at least one candidate anchor frame face image;
determining a target anchor point frame face image from the candidate anchor point frame face images based on the face size and/or face distance corresponding to each candidate anchor point frame face image;
And executing image processing operation related to the face image based on the face image of the target anchor point frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111150500.XA CN113850210B (en) | 2021-09-29 | 2021-09-29 | Face image processing method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111150500.XA CN113850210B (en) | 2021-09-29 | 2021-09-29 | Face image processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113850210A CN113850210A (en) | 2021-12-28 |
CN113850210B true CN113850210B (en) | 2024-05-17 |
Family
ID=78977121
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111150500.XA Active CN113850210B (en) | 2021-09-29 | 2021-09-29 | Face image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113850210B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107423707A (en) * | 2017-07-25 | 2017-12-01 | 深圳帕罗人工智能科技有限公司 | A kind of face Emotion identification method based under complex environment |
CN107679514A (en) * | 2017-10-20 | 2018-02-09 | 维沃移动通信有限公司 | A kind of face identification method and electronic equipment |
CN109409210A (en) * | 2018-09-11 | 2019-03-01 | 北京飞搜科技有限公司 | A kind of method for detecting human face and system based on SSD frame |
WO2019145578A1 (en) * | 2018-06-11 | 2019-08-01 | Fotonation Limited | Neural network image processing apparatus |
CN111401283A (en) * | 2020-03-23 | 2020-07-10 | 北京达佳互联信息技术有限公司 | Face recognition method and device, electronic equipment and storage medium |
CN111652051A (en) * | 2020-04-21 | 2020-09-11 | 高新兴科技集团股份有限公司 | Face detection model generation method, device, equipment and storage medium |
CN112541483A (en) * | 2020-12-25 | 2021-03-23 | 三峡大学 | Dense face detection method combining YOLO and blocking-fusion strategy |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109151180B (en) * | 2018-07-27 | 2020-09-01 | 维沃移动通信有限公司 | Object identification method and mobile terminal |
CN112597837B (en) * | 2020-12-11 | 2024-05-28 | 北京百度网讯科技有限公司 | Image detection method, apparatus, device, storage medium, and computer program product |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107423707A (en) * | 2017-07-25 | 2017-12-01 | 深圳帕罗人工智能科技有限公司 | A kind of face Emotion identification method based under complex environment |
CN107679514A (en) * | 2017-10-20 | 2018-02-09 | 维沃移动通信有限公司 | A kind of face identification method and electronic equipment |
WO2019145578A1 (en) * | 2018-06-11 | 2019-08-01 | Fotonation Limited | Neural network image processing apparatus |
CN109409210A (en) * | 2018-09-11 | 2019-03-01 | 北京飞搜科技有限公司 | A kind of method for detecting human face and system based on SSD frame |
CN111401283A (en) * | 2020-03-23 | 2020-07-10 | 北京达佳互联信息技术有限公司 | Face recognition method and device, electronic equipment and storage medium |
CN111652051A (en) * | 2020-04-21 | 2020-09-11 | 高新兴科技集团股份有限公司 | Face detection model generation method, device, equipment and storage medium |
CN112541483A (en) * | 2020-12-25 | 2021-03-23 | 三峡大学 | Dense face detection method combining YOLO and blocking-fusion strategy |
Non-Patent Citations (1)
Title |
---|
Face terminal identity recognition simulation under mobile device network security; Han Yu; Computer Simulation; 2017-10-15 (No. 10); 352-356 *
Also Published As
Publication number | Publication date |
---|---|
CN113850210A (en) | 2021-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107220640B (en) | Character recognition method, character recognition device, computer equipment and computer-readable storage medium | |
CN108885699A (en) | Character identifying method, device, storage medium and electronic equipment | |
CN109409277B (en) | Gesture recognition method and device, intelligent terminal and computer storage medium | |
CN110427852B (en) | Character recognition method and device, computer equipment and storage medium | |
CN105678242B (en) | Focusing method and device under hand-held certificate mode | |
CA2745730C (en) | Method, apparatus and computer program product for providing an orientation independent face detector | |
CN111681256A (en) | Image edge detection method and device, computer equipment and readable storage medium | |
CN111291749B (en) | Gesture recognition method and device and robot | |
CN110232381B (en) | License plate segmentation method, license plate segmentation device, computer equipment and computer readable storage medium | |
CN113221601A (en) | Character recognition method, device and computer readable storage medium | |
WO2018058573A1 (en) | Object detection method, object detection apparatus and electronic device | |
CN107886093B (en) | Character detection method, system, equipment and computer storage medium | |
CN112036342B (en) | Document snapshot method, device and computer storage medium | |
CN111104826A (en) | License plate character recognition method and device and electronic equipment | |
CN117854160A (en) | Human face living body detection method and system based on artificial multi-mode and fine-granularity patches | |
CN113850210B (en) | Face image processing method and device and electronic equipment | |
CN111091089B (en) | Face image processing method and device, electronic equipment and storage medium | |
CN111160353A (en) | License plate recognition method, device and equipment | |
CN116129484A (en) | Method, device, electronic equipment and storage medium for model training and living body detection | |
CN116052230A (en) | Palm vein recognition method, device, equipment and storage medium | |
CN113538337B (en) | Detection method, detection device and computer readable storage medium | |
CN115311630A (en) | Method and device for generating distinguishing threshold, training target recognition model and recognizing target | |
US20210019486A1 (en) | Fingerprint processing with liveness detection | |
CN114596638A (en) | Face living body detection method, device and storage medium | |
CN108694347B (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||