CN110263695B - Face position acquisition method and device, electronic equipment and storage medium - Google Patents

Face position acquisition method and device, electronic equipment and storage medium

Info

Publication number
CN110263695B (application CN201910517973.5A)
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201910517973.5A
Other languages
Chinese (zh)
Other versions
CN110263695A
Inventor
曹占魁
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910517973.5A priority Critical patent/CN110263695B/en
Publication of CN110263695A publication Critical patent/CN110263695A/en
Application granted granted Critical
Publication of CN110263695B publication Critical patent/CN110263695B/en
Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06V: Image or video recognition or understanding
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/20: Scene-specific elements in augmented reality scenes
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; face representation


Abstract

The disclosure relates to a method and a device for acquiring the position of a face part, an electronic device, and a storage medium, and belongs to the technical field of computers. The method comprises the following steps: acquiring the proportion of the face area of a face lying outside a first screen picture to the whole face; when the proportion satisfies a target condition, acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture; acquiring position change information of image feature points in the first target area and a second target area, the second target area being the area in the first screen picture that corresponds to the first target area and includes the target part; and acquiring a second position of the target part in the first screen picture based on the first position and the position change information. With the disclosed method and device, an accurate position of the face part can still be obtained when the screen picture includes only part of a face, so the result is accurate, recognition failures are avoided, and the recognition effect is good.

Description

Face position acquisition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for obtaining a position of a face, an electronic device, and a storage medium.
Background
With the development of computer technology, face recognition is increasingly widely applied; for example, it can be combined with Augmented Reality (AR) technology to recognize a face in a screen picture and attach virtual objects to a face part.
In the related art, the position of a face part is usually obtained by collecting a screen picture and performing face recognition directly on the collected screen picture.
However, sometimes only part of the face lies within the screen picture. In this case, directly performing face recognition on the screen picture may yield an inaccurate position of the face part because the parts in the picture are incomplete, and recognition may fail outright, so the recognition effect is poor.
Disclosure of Invention
The present disclosure provides a method and an apparatus for obtaining a position of a face, an electronic device, and a storage medium, so as to at least solve the problems of inaccurate recognition result, recognition failure, and poor recognition effect in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for obtaining a position of a face part is provided, including:
acquiring the proportion of the face area of a face lying outside a first screen picture to the whole face;
when the proportion meets a target condition, acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first screen picture and the second screen picture are screen pictures acquired at different moments;
acquiring position change information of image feature points in the first target area and a second target area, wherein the second target area is the area in the first screen picture that corresponds to the first target area and includes the target part;
and acquiring a second position of the target part in the first screen picture based on the first position and the position change information.
In a possible implementation manner, the obtaining a ratio of a face area of the human face located outside the first screen to the human face includes any one of:
performing face key point detection on the first screen picture to obtain the number of key points of the face detected in the first screen picture, and acquiring the ratio of the difference between a target number and the detected number to the target number, the target number being the number of key points of a complete face;
the method comprises the steps of detecting key points of a face of a first screen picture to obtain the positions of key points of the face in the first screen picture and the positions of edge key points of the face, obtaining a first area and a second area of a first area defined by the key points of the face in the first screen picture according to the positions of the key points of the face in the first screen picture and the positions of the edge key points of the face, obtaining the ratio of the difference value of the second area and the first area to the second area, and obtaining the second area as the area of the second area defined by the edge key points.
In a possible implementation manner, the obtaining of the position change information of the image feature points in the first target region and the second target region includes any one of:
respectively extracting features of the first target area and the second target area to obtain the position of a first image feature point in the first target area and the position of a second image feature point in the second target area, and acquiring the position change information based on the positions of the first image feature point and the second image feature point;
and matching the first screen picture with the second screen picture to obtain the similarity of a first target area in the first screen picture and a second target area in the second screen picture, and acquiring the position change information of the image feature points in the first target area and the second target area based on the similarity.
In a possible implementation manner, the performing feature extraction on the first target region and the second target region respectively to obtain a position of a first image feature point in the first target region and a position of a second image feature point in the second target region includes:
respectively extracting features of the first target area and the second target area to obtain the positions of a plurality of first image feature points in the first target area and the positions of a plurality of second image feature points in the second target area;
the acquiring the position change information based on the position of the first image feature point and the position of the second image feature point includes:
and acquiring average position change information between the plurality of first image feature points and the plurality of second image feature points according to the positions of the plurality of first image feature points in the first target area and the positions of the plurality of second image feature points in the second target area, and taking the average position change information as the position change information of the image feature points in the first target area and the second target area.
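A minimal sketch of this averaging step, assuming the matched feature-point pairs have already been produced by some feature extractor, which the disclosure does not fix:

```python
def average_displacement(first_points, second_points):
    """Average position change from the first image feature points (in the
    first target area, which lies in the second screen picture) to the
    matched second image feature points (in the second target area)."""
    if not first_points or len(first_points) != len(second_points):
        raise ValueError("need equally sized, non-empty matched point lists")
    n = len(first_points)
    dx = sum(s[0] - f[0] for f, s in zip(first_points, second_points)) / n
    dy = sum(s[1] - f[1] for f, s in zip(first_points, second_points)) / n
    return (dx, dy)
```

The second position of the target part then follows as the first position plus this average displacement.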
In a possible implementation manner, the obtaining a first position of a target portion in a second screen and a first target area including the target portion in the second screen when the ratio satisfies a target condition includes any one of:
when the acquisition mode is a first mode and the proportion is larger than a first proportion threshold value, executing the step of acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first mode is a mode of acquiring a second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is greater than a second ratio threshold, executing the step of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring a second position based on position change information of the image feature point.
In one possible implementation manner, when the ratio satisfies a target condition, the acquiring a first position of a target portion in a second screen and a first target area including the target portion in the second screen further includes:
when an acquisition mode is a first mode and the ratio is greater than a first ratio threshold, switching the acquisition mode from the first mode to the second mode.
In a possible implementation manner, after obtaining a ratio of a face area of the human face outside the first screen to the human face, the method further includes any one of:
when the acquisition mode is a first mode and the proportion is less than or equal to a first proportion threshold value, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture, wherein the first mode is a mode for acquiring the second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is smaller than or equal to a second ratio threshold, performing face recognition on the first screen to obtain a second position of the target part in the first screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring the second position based on position change information of the image feature point.
In one possible implementation manner, after obtaining a ratio of a face area of the human face outside the first screen to the human face, the method further includes:
when the acquisition mode is a second mode and the ratio is less than or equal to a second ratio threshold, switching the acquisition mode from the second mode to the first mode.
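Taken together, the first-mode and second-mode conditions above form a hysteresis: because the second threshold is smaller than the first, the mode does not oscillate when the proportion hovers near a single boundary. A sketch, with illustrative threshold values not taken from the disclosure:

```python
FIRST_MODE = "face_recognition"   # acquire the second position by face recognition
SECOND_MODE = "feature_tracking"  # acquire it from feature-point position changes

class ModeSwitcher:
    """Hysteresis between the two acquisition modes; thresholds are examples."""

    def __init__(self, first_threshold=0.3, second_threshold=0.1):
        assert second_threshold < first_threshold
        self.first_threshold = first_threshold
        self.second_threshold = second_threshold
        self.mode = FIRST_MODE

    def update(self, ratio):
        """Update the acquisition mode given the current frame's ratio."""
        if self.mode == FIRST_MODE and ratio > self.first_threshold:
            self.mode = SECOND_MODE
        elif self.mode == SECOND_MODE and ratio <= self.second_threshold:
            self.mode = FIRST_MODE
        return self.mode
```

With these example thresholds, a ratio sequence 0.0, 0.35, 0.2, 0.05 switches to tracking at 0.35, stays in tracking at 0.2, and only returns to recognition at 0.05.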
In one possible implementation, the method further includes:
when the acquisition mode is a first mode and the proportion is greater than a first proportion threshold value, or when the acquisition mode is a second mode and the proportion is less than or equal to a second proportion threshold value, carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation manner, the acquiring, when the ratio satisfies a target condition, a first position of a target portion in a second screen and a first target area including the target portion in the second screen includes:
when the ratio is larger than a first ratio threshold, executing the step of acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture;
after the obtaining of the proportion of the face area of the face outside the first screen picture to the face, the method further includes:
and when the proportion is smaller than or equal to a second proportion threshold value, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
In one possible implementation, the first proportional threshold is the same as the second proportional threshold;
when the ratio is smaller than or equal to a second ratio threshold, performing face recognition on the first screen to obtain a second position of the target part in the first screen, including:
and when the ratio is smaller than or equal to the first ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
In one possible implementation, the first scaling threshold is greater than the second scaling threshold;
the method further comprises the following steps:
when the ratio is smaller than the first ratio threshold and larger than the second ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation manner, the acquiring the second position of the target portion in the first screen based on the third position and the fourth position includes:
and carrying out weighted summation on the third position and the fourth position to obtain a second position of the target part in the first screen picture.
In one possible implementation, the weights of the third position and the fourth position are determined based on the relationship between the ratio and a first ratio threshold; or the weights of the third position and the fourth position are determined based on the relationship between the ratio and a second ratio threshold; or the weights of the third position and the fourth position are determined based on the relationship between the current system time and the time of the mode switch.
In one possible implementation manner, the obtaining the second position of the target portion in the first screen based on the first position and the position change information includes:
and acquiring the sum of the first position and the position change information, and taking the sum as the second position of the target part in the first screen picture.
In one possible implementation, before the acquiring the first position of the target portion in the second screen and the first target area including the target portion in the second screen, the method further includes any one of:
acquiring a previous screen picture of the first screen picture as the second screen picture;
acquiring a first frame of screen picture in the collected multi-frame screen pictures as the second screen picture;
and acquiring the last frame of screen picture of which the proportion is smaller than a first proportion threshold value in the collected multi-frame screen pictures as the second screen picture.
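The three candidate ways of choosing the second screen picture (the reference frame) can be sketched as follows; the frame records, their "ratio" field, and the threshold value are assumptions for illustration only:

```python
def pick_reference_frame(frames, strategy, first_threshold=0.3):
    """Pick the second screen picture from the collected frames.
    `frames` is ordered oldest-first, each a dict with a "ratio" field;
    the current (first) screen picture is frames[-1]."""
    if strategy == "previous":
        return frames[-2]                     # frame just before the current one
    if strategy == "first":
        return frames[0]                      # first collected frame
    if strategy == "last_below_threshold":
        # last earlier frame whose out-of-screen ratio was below the threshold
        for frame in reversed(frames[:-1]):
            if frame["ratio"] < first_threshold:
                return frame
        raise LookupError("no frame below the threshold")
    raise ValueError(f"unknown strategy: {strategy}")
```

The third strategy is attractive because it anchors tracking to the most recent frame in which the face was still largely visible and face recognition was presumably reliable.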
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for obtaining a position of a human face, including:
a scale acquisition unit configured to perform acquisition of a scale of a face area of a face located outside a first screen to the face;
the position acquisition unit is configured to acquire a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture when the proportion meets a target condition, wherein the first screen picture and the second screen picture are screen pictures acquired at different moments;
an information acquisition unit configured to perform acquisition of position change information of image feature points in the first target area and a second target area, the second target area being the area in the first screen picture that corresponds to the first target area and includes the target part;
the position acquiring unit is further configured to perform acquiring a second position of the target portion in the first screen based on the first position and the position change information.
In one possible implementation, the proportion obtaining unit is configured to perform any one of:
performing face key point detection on the first screen picture to obtain the number of key points of the face detected in the first screen picture, and acquiring the ratio of the difference between a target number and the detected number to the target number, the target number being the number of key points of a complete face;
the method comprises the steps of detecting key points of a face of a first screen picture to obtain the positions of key points of the face in the first screen picture and the positions of edge key points of the face, obtaining a first area and a second area of a first area defined by the key points of the face in the first screen picture according to the positions of the key points of the face in the first screen picture and the positions of the edge key points of the face, obtaining the ratio of the difference value of the second area and the first area to the second area, and obtaining the second area as the area of the second area defined by the edge key points.
In one possible implementation, the information obtaining unit is configured to perform any one of:
respectively extracting features of the first target area and the second target area to obtain the position of a first image feature point in the first target area and the position of a second image feature point in the second target area, and acquiring the position change information based on the positions of the first image feature point and the second image feature point;
and matching the first screen picture with the second screen picture to obtain the similarity of a first target area in the first screen picture and a second target area in the second screen picture, and acquiring the position change information of the image feature points in the first target area and the second target area based on the similarity.
In one possible implementation manner, the information obtaining unit is configured to perform feature extraction on the first target area and the second target area respectively to obtain positions of a plurality of first image feature points in the first target area and positions of a plurality of second image feature points in the second target area;
the information acquisition unit is configured to perform acquiring average position change information between a plurality of first image feature points and a plurality of second image feature points in the first target region as position change information of the image feature points in the first target region and the second target region, according to positions of the plurality of first image feature points in the first target region and positions of the plurality of second image feature points in the second target region.
In one possible implementation, the location acquisition unit is configured to perform any one of:
when the acquisition mode is a first mode and the proportion is larger than a first proportion threshold value, executing the step of acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first mode is a mode of acquiring a second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is greater than a second ratio threshold, executing the step of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring a second position based on position change information of the image feature point.
In one possible implementation, the position obtaining unit is further configured to perform switching the obtaining mode from the first mode to the second mode when the obtaining mode is the first mode and the ratio is greater than a first ratio threshold.
In one possible implementation, the location acquisition unit is further configured to perform any one of:
when the acquisition mode is a first mode and the proportion is less than or equal to a first proportion threshold value, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture, wherein the first mode is a mode for acquiring the second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is smaller than or equal to a second ratio threshold, performing face recognition on the first screen to obtain a second position of the target part in the first screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring the second position based on position change information of the image feature point.
In one possible implementation, the apparatus further includes:
a first mode switching unit configured to perform switching of an acquisition mode from a second mode to the first mode when the acquisition mode is the second mode and the ratio is less than or equal to a second ratio threshold.
In one possible implementation, the position obtaining unit is further configured to perform:
when the acquisition mode is a first mode and the proportion is greater than a first proportion threshold value, or when the acquisition mode is a second mode and the proportion is less than or equal to a second proportion threshold value, carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation, the position acquiring unit is configured to perform the step of acquiring a first position of a target portion in a second screen and a first target area including the target portion in the second screen when the ratio is greater than a first ratio threshold;
the position acquisition unit is further configured to perform face recognition on the first screen image when the ratio is smaller than or equal to a second ratio threshold value, so as to obtain a second position of the target part in the first screen image.
In one possible implementation, the first proportional threshold is the same as the second proportional threshold;
the position acquisition unit is configured to execute the step of performing face recognition on the first screen picture to obtain a second position of the target part in the first screen picture when the ratio is smaller than or equal to the first ratio threshold.
In one possible implementation, the first scaling threshold is greater than the second scaling threshold;
the position acquisition unit is configured to perform:
when the ratio is smaller than the first ratio threshold and larger than the second ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation manner, the position obtaining unit is configured to perform weighted summation on the third position and the fourth position to obtain the second position of the target portion in the first screen.
In one possible implementation, the weights of the third position and the fourth position are determined based on the relationship between the ratio and a first ratio threshold; or the weights of the third position and the fourth position are determined based on the relationship between the ratio and a second ratio threshold; or the weights of the third position and the fourth position are determined based on the relationship between the current system time and the time of the mode switch.
In one possible implementation, the position acquiring unit is configured to perform acquiring the sum of the first position and the position change information, and taking the sum as the second position of the target portion in the first screen picture.
In one possible implementation, the apparatus further includes a picture acquisition unit configured to perform any one of:
acquiring a previous screen picture of the first screen picture as the second screen picture;
acquiring a first frame of screen picture in the collected multi-frame screen pictures as the second screen picture;
and acquiring the last frame of screen picture of which the proportion is smaller than a first proportion threshold value in the collected multi-frame screen pictures as the second screen picture.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
one or more processors;
one or more memories for storing the one or more processor-executable instructions;
wherein the one or more processors are configured to perform the operations performed by the method for acquiring the position of the face part according to any one of the first aspect and possible implementations of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a storage medium is provided, where instructions, when executed by a processor of an electronic device, enable the electronic device to perform operations performed by the method for acquiring a position of a face part according to any one of the first aspect and possible implementations of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product including one or more instructions, which when executed by a processor of a control apparatus, enable the control apparatus to perform operations performed by the method for acquiring a position of a face part according to any one of the first aspect and possible implementations of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the embodiment of the invention, when the proportion of the face area of the face outside the screen picture to the whole face meets a certain condition, the position of the target part in the current screen picture is obtained through the position of the target part in the other screen picture and the change condition of the face in the two screen pictures, instead of directly realizing the position in a face recognition mode, the accurate position of the face part can still be obtained when the screen picture only comprises part of the face, the accuracy is good, the recognition condition does not exist, and the recognition effect is good.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating a method for acquiring a position of a face part according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a method for acquiring a position of a face part according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating an apparatus for acquiring a position of a face region according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating a configuration of a server according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a method for acquiring a position of a face part according to an exemplary embodiment, where as shown in fig. 1, the method may be applied to an electronic device, and includes the following steps:
In step S11, a ratio of a face region of the human face located outside a first screen image to the entire face is acquired.
In step S12, when the ratio satisfies a target condition, a first position of a target portion in a second screen image and a first target region including the target portion in the second screen image are acquired, where the first screen image and the second screen image are screen images acquired at different times.
In step S13, position change information of image feature points in the first target region and a second target region is acquired, where the second target region is the region in the first screen image that corresponds to the first target region and includes the target portion.
In step S14, a second position of the target portion in the first screen is acquired based on the first position and the position change information.
According to the embodiments of the present disclosure, when the proportion of the face region located outside the screen image relative to the entire face satisfies a certain condition, the position of the target portion in the current screen image is obtained from its position in another screen image and the change of the face between the two screen images, rather than directly through face recognition. An accurate position of the face part can therefore still be obtained when the screen image includes only part of the face; the accuracy is good, recognition failure is avoided, and the recognition effect is good.
In a possible implementation manner, the acquiring of the ratio of the face region located outside the first screen image to the entire face includes any one of the following:
performing face keypoint detection on the first screen image to obtain the number of face keypoints located in the first screen image, and acquiring a ratio of a difference between a target number and that number to the target number, the target number being the total number of face keypoints;
performing face keypoint detection on the first screen image to obtain positions of face keypoints located in the first screen image and positions of face edge keypoints; acquiring, according to these positions, a first area of a first region enclosed by the face keypoints located in the first screen image and a second area of a second region enclosed by the edge keypoints; and acquiring a ratio of a difference between the second area and the first area to the second area.
In a possible implementation manner, the obtaining of the position change information of the image feature points in the first target region and the second target region includes any one of:
performing feature extraction on the first target region and the second target region respectively to obtain a position of a first image feature point in the first target region and a position of a second image feature point in the second target region, and acquiring the position change information based on the position of the first image feature point and the position of the second image feature point;
and matching the first screen image with the second screen image to obtain a similarity between the first target region in the second screen image and the second target region in the first screen image, and acquiring the position change information of the image feature points in the first target region and the second target region based on the similarity.
In a possible implementation manner, the performing feature extraction on the first target region and the second target region respectively to obtain a position of a first image feature point in the first target region and a position of a second image feature point in the second target region includes:
respectively extracting the features of the first target area and the second target area to obtain the positions of a plurality of first image feature points in the first target area and the positions of a plurality of second image feature points in the second target area;
the obtaining the position change information based on the position of the first image feature point and the position of the second image feature point includes:
and acquiring average position change information between the plurality of first image feature points and the plurality of second image feature points according to the positions of the plurality of first image feature points in the first target area and the positions of the plurality of second image feature points in the second target area, and taking the average position change information as the position change information of the image feature points in the first target area and the second target area.
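The averaging described above can be sketched as follows. This is an illustrative sketch rather than the disclosure's implementation; the helper names (`average_position_change`, `second_position`) and the use of NumPy are assumptions, and the feature points are assumed to already be matched one-to-one between the two target regions.

```python
import numpy as np

def average_position_change(ref_points, cur_points):
    """Mean displacement between matched feature points.

    ref_points: (N, 2) positions of the first image feature points in the
                first target region (taken from the second screen image).
    cur_points: (N, 2) positions of the matched second image feature points
                in the second target region (taken from the first screen image).
    Returns the average (dx, dy), used as the position change information.
    """
    ref = np.asarray(ref_points, dtype=float)
    cur = np.asarray(cur_points, dtype=float)
    return (cur - ref).mean(axis=0)

def second_position(first_position, change):
    """Second position of the target portion: the first position plus the change."""
    return np.asarray(first_position, dtype=float) + change
```

Averaging over many feature points smooths out individual matching errors, which is why the mean displacement, rather than a single point pair, is taken as the position change information.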
In a possible implementation manner, when the ratio satisfies a target condition, acquiring a first position of a target portion in a second screen and a first target area including the target portion in the second screen, including any one of the following:
when the acquisition mode is a first mode and the proportion is larger than a first proportion threshold value, executing the step of acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first mode is a mode of acquiring a second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is greater than a second ratio threshold, executing the step of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring a second position based on position change information of the image feature point.
In one possible implementation manner, when the ratio satisfies the target condition, acquiring a first position of a target portion in a second screen and a first target area including the target portion in the second screen, further includes:
when the acquisition mode is the first mode and the ratio is greater than a first ratio threshold, switching the acquisition mode from the first mode to the second mode.
In one possible implementation, after acquiring a ratio of a face region of the human face located outside the first screen to the human face, the method further includes any one of:
when the acquisition mode is a first mode and the proportion is less than or equal to a first proportion threshold value, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture, wherein the first mode is a mode for acquiring the second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is smaller than or equal to a second ratio threshold, performing face recognition on the first screen to obtain a second position of the target part in the first screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring the second position based on the position change information of the image feature point.
In one possible implementation manner, after obtaining a ratio of a face area of the human face located outside the first screen to the human face, the method further includes:
and when the acquisition mode is a second mode and the ratio is less than or equal to a second ratio threshold, switching the acquisition mode from the second mode to the first mode.
In one possible implementation, the method further comprises:
when the acquisition mode is a first mode and the proportion is greater than a first proportion threshold value, or when the acquisition mode is a second mode and the proportion is less than or equal to a second proportion threshold value, carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation manner, when the ratio satisfies the target condition, acquiring a first position of a target portion in a second screen and a first target area including the target portion in the second screen includes:
when the ratio is larger than a first ratio threshold, executing the step of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen;
after obtaining the proportion of the face area of the face outside the first screen picture to the face, the method further comprises:
and when the proportion is smaller than or equal to a second proportion threshold value, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
In one possible implementation, the first ratio threshold is the same as the second ratio threshold;
when the ratio is smaller than or equal to a second ratio threshold, performing face recognition on the first screen to obtain a second position of the target portion in the first screen, including:
and when the ratio is smaller than or equal to the first ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
In one possible implementation, the first ratio threshold is greater than the second ratio threshold;
the method further comprises the following steps:
when the ratio is smaller than the first ratio threshold and larger than the second ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation manner, the acquiring the second position of the target portion in the first screen based on the third position and the fourth position includes:
and carrying out weighted summation on the third position and the fourth position to obtain a second position of the target part in the first screen picture.
In one possible implementation, the weights of the third position and the fourth position are determined based on a relationship of the ratio to the first ratio threshold; or, the weights of the third position and the fourth position are determined based on a relationship of the ratio to the second ratio threshold; or, the weights of the third position and the fourth position are determined based on a relationship between the current system time and the mode switching duration.
In one possible implementation manner, the acquiring the second position of the target portion in the first screen based on the first position and the position change information includes:
and acquiring a sum of the first position and the position change information, and taking the sum as the second position of the target portion in the first screen image.
In one possible implementation, before the acquiring the first position of the target portion in the second screen and the first target area including the target portion in the second screen, the method further includes any one of:
acquiring a previous screen picture of the first screen picture as the second screen picture;
acquiring a first frame of screen picture in the collected multi-frame screen pictures as a second screen picture;
and acquiring the last frame of screen picture of which the proportion is smaller than a first proportion threshold value in the collected multi-frame screen pictures as the second screen picture.
Fig. 2 is a flowchart illustrating a method for acquiring a position of a face part according to an exemplary embodiment. As shown in fig. 2, the method is applied to an electronic device and includes the following steps.
In step S21, the electronic device captures a first screen.
In the embodiment of the present disclosure, the electronic device may have an image capturing function, and capture an image by the image capturing function, and display the captured image on the screen, where the image displayed on the screen is a screen image. The electronic equipment can collect the screen picture and process the screen picture so as to determine the position of the human face part in the screen picture. Specifically, the electronic device acquires a first screen at a certain time, and further analyzes the first screen through the following steps to determine which way to identify the first screen, so as to determine the position of the face part.
In a possible implementation manner, the electronic device may acquire a screen in real time, and analyze the screen in real time to obtain a position of a face part that is desired to be determined. The first screen image may include the whole face or a part of the face, that is, a partial region of the face. The electronic device may adopt different processing steps according to different situations, and specifically refer to the following steps S22 to S26, which are not described herein again.
In step S22, the electronic device acquires a ratio of a face region of the human face located outside the first screen image to the entire face.
After acquiring the first screen image, the electronic device can analyze whether the face included in it is a whole face or only part of a face and, if only part, what proportion of the whole face lies outside the first screen image, so that according to the analysis result a suitable processing manner can be adopted to obtain an accurate position of the target portion.
The process of obtaining the ratio can be realized in different manners. Two possible implementations are provided below, and the electronic device may obtain the ratio in either manner.
In the first manner, the electronic device performs face keypoint detection on the first screen image to obtain the number of face keypoints located in the first screen image, and acquires a ratio of a difference between a target number and that number to the target number.
In the first manner, the target number is the total number of face keypoints. The difference between the target number and the number of face keypoints located in the first screen image is the number of face keypoints located outside the first screen image, and the ratio of that number to the total number of face keypoints can represent the proportion of the face region located outside the first screen image to the whole face.
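As a minimal sketch of the first manner (the function name and the 106-point keypoint model mentioned below are illustrative assumptions, not part of the disclosure):

```python
def off_screen_ratio_by_count(target_number, keypoints_in_frame):
    """Ratio of the face outside the screen image, from keypoint counts.

    target_number:      total number of face keypoints the detector outputs
                        (e.g. 68 or 106, depending on the keypoint model).
    keypoints_in_frame: number of face keypoints detected inside the frame.
    """
    # Keypoints not found in the frame are assumed to lie outside it.
    outside = target_number - keypoints_in_frame
    return outside / target_number
```

For example, with a 106-point model and 53 points visible in the frame, the ratio is 0.5.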
In the second manner, the electronic device performs face keypoint detection on the first screen image to obtain positions of face keypoints located in the first screen image and positions of face edge keypoints; acquires, according to these positions, a first area of a first region enclosed by the face keypoints located in the first screen image and a second area of a second region enclosed by the edge keypoints; and acquires a ratio of a difference between the second area and the first area to the second area.
In the second manner, the electronic device in effect directly calculates the area of the face region located outside the first screen image and represents the ratio as that area divided by the total area of the face: the first area is the area of the face located within the first screen image, the second area is the total area of the face, and the difference between the second area and the first area is the area of the face located outside the first screen image.
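The area-based computation can be sketched with the shoelace formula for polygon areas; this is an illustrative sketch under the assumption that the two regions are available as simple polygons, and the function names are hypothetical.

```python
def polygon_area(points):
    """Area of a simple polygon given as (x, y) vertex pairs (shoelace formula)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def off_screen_ratio_by_area(region_in_frame, edge_region):
    """(second area - first area) / second area, as in the second manner.

    region_in_frame: polygon enclosed by the face keypoints located in the
                     first screen image (its area is the first area).
    edge_region:     polygon enclosed by the face edge keypoints, i.e. the
                     whole face (its area is the second area).
    """
    first_area = polygon_area(region_in_frame)
    second_area = polygon_area(edge_region)
    return (second_area - first_area) / second_area
```

For a whole-face polygon of area 4 with only a sub-region of area 1 inside the frame, the ratio is (4 − 1) / 4 = 0.75.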
The above provides only two possible implementation manners for obtaining the ratio, and the electronic device may also calculate the ratio by other manners, which is not limited in the embodiment of the disclosure.
In step S23, when the ratio satisfies the target condition, the electronic device acquires a first position of the target portion in the second screen and a first target area including the target portion in the second screen.
After the electronic device acquires the proportion of the face region located outside the first screen image relative to the whole face, it can judge whether the proportion satisfies the target condition; when it does, the position of the target portion can be acquired in a manner different from directly using a face recognition algorithm.
In this way, the electronic device may first obtain the second screen and the first position of the target portion in the second screen, use the first position as a basis for obtaining the second position of the target portion in the first screen, and then obtain the accurate second position of the target portion in the first screen based on the change condition of the human face in the first screen and the second screen.
The first screen picture and the second screen picture are screen pictures acquired at different moments. Before the step S23, the electronic device may acquire a second screen, where the acquisition time of the second screen is different from the acquisition time of the first screen, and specifically, the acquisition process of the second screen may be implemented by any one of the following manners:
in the first mode, the electronic device acquires a previous screen of the first screen as the second screen.
And in the second mode, the electronic equipment acquires a first frame screen picture in the collected multi-frame screen pictures as the second screen picture.
And thirdly, the electronic equipment acquires the last frame of screen picture of which the proportion is smaller than the first proportion threshold value from the collected multi-frame screen pictures as the second screen picture.
It should be noted that which screen image is used as the second screen image may be set by a person skilled in the relevant art according to requirements, which is not limited in the embodiments of the present disclosure.
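The three selection manners above can be sketched as follows; the function, the strategy names, and the default threshold value are illustrative assumptions for this sketch, not terms from the disclosure.

```python
def pick_second_screen(frames, ratios, strategy, first_ratio_threshold=0.7):
    """Select the second (reference) screen image for the current frame.

    frames:   previously captured screen images, oldest first
              (the current first screen image is not included).
    ratios:   off-screen ratio computed for each frame in `frames`.
    strategy: "previous", "first", or "last_below_threshold",
              matching the three manners described above.
    """
    if strategy == "previous":
        return frames[-1]       # frame just before the current one
    if strategy == "first":
        return frames[0]        # first frame among the captured frames
    if strategy == "last_below_threshold":
        # most recent frame whose ratio is still below the first ratio threshold
        for frame, ratio in zip(reversed(frames), reversed(ratios)):
            if ratio < first_ratio_threshold:
                return frame
    return None
```

The third strategy is attractive in practice because the reference frame is guaranteed to contain most of the face, so its first position (obtained by face recognition) is reliable.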
In a possible implementation manner, step S23 may differ when the target condition differs; the target condition may be set by a person skilled in the relevant art according to requirements, which is not limited by the embodiments of the present disclosure. Two possible target conditions are provided below; under the different target conditions, step S23 may include the following two possible implementations:
in the first mode, the target condition may include an acquisition mode, a first ratio threshold, and a second ratio threshold, and when the acquisition mode satisfies a certain condition and the ratio satisfies a certain condition, it may be determined that the ratio satisfies the target condition.
Specifically, the electronic device may acquire the second position of the target portion in the first screen in two acquisition modes, that is, a first acquisition mode and a second acquisition mode. Depending on the acquisition mode, the situation that the ratio satisfies the target condition may include the following two cases:
the first condition is as follows: and when the acquisition mode is a first mode and the proportion is larger than a first proportion threshold value, executing the step of acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first mode is a mode of acquiring a second position based on a face recognition mode.
In case one, the current acquisition mode is the mode of directly obtaining the second position through face recognition. A ratio greater than the first ratio threshold indicates that a large proportion of the face lies outside the first screen image, i.e., only a small part of the face is in the first screen image, so directly using face recognition may fail or be inaccurate. Another acquisition mode may therefore be selected, i.e., the second position is obtained in the second mode. In the second mode, the electronic device first acquires the first position of the target portion and the first target region in the second screen image, and then determines the position change information of the image feature points based on them to acquire the second position; see steps S24 to S26 for details.
In case one, the electronic device may further switch the acquisition mode. Specifically, when the acquisition mode is the first mode and the ratio is greater than the first ratio threshold, the electronic device may switch the acquisition mode from the first mode to the second mode. Screen images acquired subsequently can then be processed in the second mode without switching the acquisition mode again.
Case two: and when the acquisition mode is a second mode and the ratio is greater than a second ratio threshold, executing the step of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring a second position based on position change information of the image feature point.
In case two, the second ratio threshold is the ratio threshold for switching the acquisition mode from the second mode to the first mode. It can be understood that if the ratio is less than or equal to the second ratio threshold, the position of the target portion can be accurately acquired through face recognition; the second mode is then unnecessary and the acquisition mode may be switched to the first mode. Thus, in case two, where the acquisition mode is the second mode and the ratio is greater than the second ratio threshold, the electronic device may acquire the second position directly in the second mode without switching the acquisition mode, and may perform the step of acquiring the first position and the first target region in step S23.
Step S23 above covers only the case where the ratio of the face region located outside the first screen image to the face satisfies the target condition. The ratio may also fail to satisfy the target condition; in that case the electronic device does not need to execute step S23 and the following steps S24 to S26, and may instead perform face recognition on the first screen image to obtain the second position of the target portion in the first screen image.
Specifically, corresponding to the two cases in which the ratio satisfies the target condition, in the first manner the case in which the ratio does not satisfy the target condition may include the following two cases:
the first condition is as follows: when the acquisition mode is a first mode and the proportion is smaller than or equal to a first proportion threshold, the electronic equipment performs face recognition on the first screen to obtain a second position of the target part in the first screen, and the first mode is a mode for acquiring the second position based on a face recognition mode.
In case one, the current acquisition mode is the first mode. A ratio less than or equal to the first ratio threshold indicates that the face occupies a large proportion of the first screen image, and the second position of the target portion can be accurately acquired directly through face recognition, so the electronic device can directly perform face recognition on the first screen image.
Case two: and when the acquisition mode is a second mode and the proportion is smaller than or equal to a second proportion threshold value, the electronic equipment performs face recognition on the first screen picture to obtain a second position of the target part in the first screen picture, wherein the second proportion threshold value is smaller than the first proportion threshold value.
In case two, the current acquisition mode is the second mode and the ratio is less than or equal to the second ratio threshold, so the second position of the target portion can be accurately acquired directly through face recognition, and the electronic device may acquire the second position in this way.
In a possible implementation manner, in case two, the current acquisition mode and the ratio satisfy the condition of switching the acquisition mode, and the electronic device may further perform switching of the acquisition mode. Specifically, when the acquisition mode is the second mode and the ratio is less than or equal to a second ratio threshold, the electronic device may switch the acquisition mode from the second mode to the first mode.
In the first manner, the first ratio threshold and the second ratio threshold are the ratio thresholds for switching the acquisition mode: when the acquisition mode is the first mode, it may be switched to the second mode if the ratio is greater than the first ratio threshold; when the acquisition mode is the second mode, it may be switched to the first mode if the ratio is less than or equal to the second ratio threshold. It should be noted that the first ratio threshold and the second ratio threshold may be set by a person skilled in the relevant art as required, for example, the first ratio threshold is 0.7 and the second ratio threshold is 0.5, which is not limited in this disclosure.
For example, in a specific example, taking the first ratio threshold as 0.7 and the second ratio threshold as 0.5 as an example, the electronic device acquires a plurality of screen images, and the variation trend of the ratio of the plurality of screen images is as follows: from 0.4 to 0.8, from 0.8 to 0.4. Through the setting of the target conditions, in the process that the proportion is changed from 0.4 to 0.7, the electronic equipment can adopt the first mode to directly perform face recognition to obtain the second position. In the process of changing the ratio from 0.7 to 0.8, the electronic device may adopt the second mode, and need to obtain the second position with the assistance of the data of the second screen and the position change information of the image feature point. The electronic device may adopt the second mode during the change of the ratio from 0.8 to 0.5, and the electronic device may adopt the first mode during the change of the ratio from 0.5 to 0.4.
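The hysteresis behavior in this example can be sketched as a small state machine; the function and constant names are illustrative assumptions, and the thresholds 0.7 and 0.5 are taken from the example above.

```python
FIRST_RATIO_THRESHOLD = 0.7   # first mode -> second mode when ratio exceeds this
SECOND_RATIO_THRESHOLD = 0.5  # second mode -> first mode when ratio is <= this

def next_mode(current_mode, ratio):
    """Hysteresis switching between the two acquisition modes.

    current_mode: "first" (face recognition) or "second" (feature tracking).
    Returns the mode to use for the current frame.
    """
    if current_mode == "first" and ratio > FIRST_RATIO_THRESHOLD:
        return "second"
    if current_mode == "second" and ratio <= SECOND_RATIO_THRESHOLD:
        return "first"
    return current_mode

# Replaying the 0.4 -> 0.8 -> 0.4 trend from the example above:
mode, history = "first", []
for ratio in (0.4, 0.6, 0.7, 0.8, 0.6, 0.5, 0.4):
    mode = next_mode(mode, ratio)
    history.append(mode)
# history == ["first", "first", "first", "second", "second", "first", "first"]
```

The gap between the two thresholds prevents rapid flip-flopping when the ratio hovers near a single cutoff: at 0.6 the device stays in whichever mode it is already in.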
In a possible implementation of the first manner, a smooth transition may further be provided: during switching between the two acquisition modes, one acquisition mode transitions smoothly into the other, which improves the smoothness of the acquired second position, avoids jumps, and yields a better recognition effect. Specifically, the smoothing process may include the following three steps:
step one, when the acquisition mode is a first mode and the proportion is larger than a first proportion threshold value, or when the acquisition mode is a second mode and the proportion is smaller than or equal to a second proportion threshold value, the electronic equipment carries out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture.
And step two, the electronic equipment executes the step of acquiring the first position, the first target area and the position change information of the target part in the second screen picture, and acquires the fourth position of the target part in the first screen picture based on the first position and the position change information.
And thirdly, the electronic equipment acquires a second position of the target part in the first screen picture based on the third position and the fourth position.
In the third step, the electronic device may combine results obtained by the two processing manners in a weighted summation manner, and specifically, the electronic device may perform weighted summation on the third position and the fourth position to obtain the second position of the target portion in the first screen.
The weights of the third position and the fourth position may be set by a related technician as required, or the determination manner of the weights may be set by the related technician.
In a possible implementation manner, the weights of the third position and the fourth position may be determined based on a relationship between the ratio and the first ratio threshold; for example, a first difference between the first ratio threshold and the ratio is used as the weight of the third position, and a second difference between 1 and the first difference is used as the weight of the fourth position.
In another possible implementation, the weights of the third position and the fourth position may also be determined based on the relationship of the ratio to a second ratio threshold. The setting method is the same as the previous implementation method, and will not be described in detail herein.
In yet another possible implementation, the weights of the third position and the fourth position are determined based on a relationship between the current system time and the mode switching duration. For example, the switching duration may be set to T; when the time elapsed since the switch began is t, T − t may be used as the weight of the third position and t as the weight of the fourth position, so that the result transitions gradually from the third position to the fourth position.
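One way to realize the time-based weighted summation is sketched below. The normalization by T (so the weights sum to 1) and the clamping of the elapsed time are assumptions added for this sketch; the disclosure itself only describes weights derived from the elapsed time and the switching duration.

```python
def blend_positions(third_position, fourth_position, elapsed, switch_duration):
    """Weighted sum of the two candidate positions during a mode switch.

    third_position:  (x, y) obtained by face recognition.
    fourth_position: (x, y) obtained from feature-point tracking.
    elapsed:         time t elapsed since the switch began.
    switch_duration: total transition duration T.
    The weight of the fourth position grows from 0 to 1 over the transition,
    so the result moves smoothly from the third position to the fourth.
    """
    t = min(max(elapsed, 0.0), switch_duration)  # clamp t to [0, T]
    w_fourth = t / switch_duration
    w_third = 1.0 - w_fourth
    return (w_third * third_position[0] + w_fourth * fourth_position[0],
            w_third * third_position[1] + w_fourth * fourth_position[1])
```

Halfway through the transition the result is the midpoint of the two candidates, which is exactly the "no jump" behavior the smoothing is meant to achieve.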
The first manner of setting the target condition has been described above; a second manner is described below. In the second manner, the first ratio threshold and the second ratio threshold are set directly for the ratio, and the relationship between the ratio and the two thresholds determines which processing manner is adopted; the two processing manners are the same as the two acquisition modes above.
In the second manner, step S23 may be: when the ratio is greater than the first ratio threshold, performing the step of acquiring a first position of the target portion in a second screen image and a first target region including the target portion in the second screen image. A ratio greater than the first ratio threshold indicates that a large proportion of the face lies outside the screen image, so the data of the second screen image and the position change information of the image feature points are needed for assistance. In the second manner, there is also a case in which the ratio does not satisfy the target condition, which may be: when the ratio is less than or equal to the second ratio threshold, the electronic device performs face recognition on the first screen image to obtain the second position of the target portion in the first screen image.
It should be noted that the first ratio threshold and the second ratio threshold may be set by a person skilled in the art as required, and the first ratio threshold may be greater than or equal to the second ratio threshold. The processing procedure differs depending on the magnitude relationship between the two thresholds.
In one possible implementation, the first ratio threshold may be the same as the second ratio threshold. In this case, the case where the ratio does not satisfy the target condition may be: when the ratio is less than or equal to the first ratio threshold, the electronic device performs face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
In another possible implementation, the first ratio threshold may be greater than the second ratio threshold. In this implementation, the ratio may be smaller than the first ratio threshold and larger than the second ratio threshold; in that case, both processing modes are adopted at the same time and their results are then combined to obtain the second position. Specifically, this case may include the following three steps:
Step one: when the ratio is smaller than the first ratio threshold and larger than the second ratio threshold, the electronic device performs face recognition on the first screen picture to obtain a third position of the target part in the first screen picture.
Step two: the electronic device executes the step of acquiring the first position, the first target area and the position change information of the target part in the second screen picture, and acquires a fourth position of the target part in the first screen picture based on the first position and the position change information.
Step three: the electronic device acquires the second position of the target part in the first screen picture based on the third position and the fourth position.
Steps one to three are similar to the smooth transition method in the first way, except that the ratio needs to satisfy different conditions. It should be noted that, by combining the two processing modes, an intermediate transition stage can be inserted into the switch between them, so that the two modes transition smoothly; the obtained second position is smoother, no jumping occurs, and the recognition effect is better.
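The threshold logic of the second way, including the intermediate transition band, can be sketched as below. This is a sketch under stated assumptions: the patent does not fix the fusion weights in this band, so the linear weighting by the ratio's position between the two thresholds, and the names `recognize` and `track` for the two processing modes, are assumptions made here.

```python
def get_second_position(ratio, th_high, th_low, recognize, track):
    """Choose the processing mode from the out-of-picture ratio.

    th_high: first ratio threshold, th_low: second ratio threshold
    (th_low < th_high). recognize() returns the face-recognition result;
    track() returns the feature-point-tracking result."""
    if ratio > th_high:
        return track()          # large out-of-picture ratio: tracking only
    if ratio <= th_low:
        return recognize()      # face mostly inside the picture: recognition only
    # Intermediate band: fuse both results, weighting tracking more
    # heavily as the ratio approaches th_high (assumed linear weights).
    w = (ratio - th_low) / (th_high - th_low)
    (x3, y3), (x4, y4) = recognize(), track()
    return ((1 - w) * x3 + w * x4, (1 - w) * y3 + w * y4)
```

The intermediate band is what prevents the second position from jumping when the ratio drifts across a single hard threshold.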
In step S24, the electronic device performs feature extraction on the first target region and the second target region respectively to obtain a position of a first image feature point in the first target region and a position of a second image feature point in the second target region.
The second target area is an area in the first screen picture that corresponds to the first target area and includes the target part.
After acquiring the first target area, the electronic device may track it to obtain the second target area, determine from the change of the image feature points between the first target area and the second target area how the face changes from the second screen picture to the first screen picture, and then calculate the second position of the target part in the first screen picture from the first position of the target part in the second screen picture.
In one possible implementation, there may be a plurality of first image feature points and a plurality of second image feature points, in which case step S24 may be: the electronic device performs feature extraction on the first target area and the second target area respectively to obtain the positions of the plurality of first image feature points in the first target area and the positions of the plurality of second image feature points in the second target area.
In step S25, the electronic device acquires the position change information based on the position of the first image feature point and the position of the second image feature point.
After obtaining the position of the first image feature point and the position of the second image feature point, the electronic device may combine the two to determine how the face changes from the second screen picture to the first screen picture, that is, the position change information.
In one possible implementation, a difference between the position of the second image feature point and the position of the first image feature point may be used as the position change information.
In one possible implementation, there may be a plurality of first image feature points and a plurality of second image feature points, in which case step S25 may be: the electronic device obtains average position change information between the plurality of first image feature points and the plurality of second image feature points according to their positions in the first target area and the second target area respectively, and uses the average as the position change information of the image feature points in the first target area and the second target area.
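The averaging step can be sketched as follows, assuming 2-D point tuples and matched point order; the function name `average_position_change` is an assumption made here for illustration.

```python
def average_position_change(first_points, second_points):
    """Mean displacement from the first image feature points (in the
    second screen picture) to the matched second image feature points
    (in the first screen picture): (1/m) * sum(P'_i - P_i)."""
    m = len(first_points)
    dx = sum(p2[0] - p1[0] for p1, p2 in zip(first_points, second_points)) / m
    dy = sum(p2[1] - p1[1] for p1, p2 in zip(first_points, second_points)) / m
    return (dx, dy)
```

Averaging over many feature points damps the noise of any single tracked point, which is why the mean displacement is used rather than a single difference.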
It should be noted that steps S24 and S25 above are one process for acquiring the position change information of the image feature points in the first target area and the second target area, and show only one implementation. The process may also be implemented in other ways; for example, the electronic device may match the first screen picture with the second screen picture to obtain the similarity between the first target area in the first screen picture and the second target area in the second screen picture, and acquire the position change information based on the similarity.
In step S26, the electronic device obtains a second position of the target portion in the first screen based on the first position and the position change information.
Specifically, the electronic device may obtain the sum of the first position and the position change information, and use the sum as the second position of the target part in the first screen picture. The electronic device uses the position of the target part in the second screen picture as the original position (the first position); after analyzing how the face changes between the two screen pictures, the second position can be obtained from the original position.
For example, the second position obtaining process may be implemented by the following formula:
P_new = P_origin + (1/m) * ((P1' + P2' + … + Pm') - (P1 + P2 + … + Pm))
where P_new is the second position, P_origin is the first position, P1, P2, …, Pm are the positions of the m first image feature points, and P1', P2', …, Pm' are the positions of the m second image feature points.
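The formula above can be implemented directly; the sketch below assumes 2-D tuples and matched point order, and the name `second_position` is introduced here for illustration only.

```python
def second_position(p_origin, first_points, second_points):
    """P_new = P_origin + (1/m) * ((P1' + ... + Pm') - (P1 + ... + Pm)):
    the first position shifted by the mean feature-point displacement."""
    m = len(first_points)
    dx = sum(p2[0] - p1[0] for p1, p2 in zip(first_points, second_points)) / m
    dy = sum(p2[1] - p1[1] for p1, p2 in zip(first_points, second_points)) / m
    return (p_origin[0] + dx, p_origin[1] + dy)
```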
In a possible implementation, after step S26, the electronic device may further obtain a target position of a virtual item according to the second position of the target part in the first screen picture. For example, the virtual item may be a virtual decoration; after the position of the face part is determined, it may be converted into the position of the virtual decoration, and an image of the virtual decoration may be added there.
In the related art, the ratio is not considered, and face recognition is performed directly on the screen picture by a face recognition algorithm in all cases; when the screen picture includes only part of a face, recognition may fail or be inaccurate. The present method provides different processing modes: when the screen picture includes only part of a face, the second position is obtained accurately by analyzing how the face changes and applying the position change to an accurate original position.
According to the embodiment of the present invention, when the proportion of the face area outside the screen picture to the whole face satisfies a certain condition, the position of the target part in the current screen picture is obtained from the position of the target part in another screen picture and from how the face changes between the two screen pictures, instead of directly through face recognition. An accurate position of the face part can thus still be obtained when the screen picture includes only part of the face; the accuracy is good, recognition failure does not occur, and the recognition effect is good.
Fig. 3 is a block diagram illustrating an apparatus for acquiring a position of a face region according to an exemplary embodiment. Referring to fig. 3, the apparatus includes:
a proportion acquisition unit 301 configured to acquire the proportion of the face area located outside the first screen picture to the whole face;
a position obtaining unit 302, configured to obtain a first position of a target portion in a second screen and a first target area including the target portion in the second screen when the ratio satisfies a target condition, where the first screen and the second screen are screens acquired at different times;
an information obtaining unit 303 configured to acquire position change information of image feature points in the first target area and a second target area, the second target area being an area in the first screen picture that corresponds to the first target area and includes the target part;
the position obtaining unit 302 is further configured to perform obtaining a second position of the target portion in the first screen based on the first position and the position change information.
In one possible implementation, the proportion obtaining unit 301 is configured to perform any one of the following:
performing face key point detection on the first screen picture to obtain the number of face key points in the first screen picture, and acquiring the ratio of the difference between a target number and that number to the target number, the target number being the number of key points of the whole face;
the method comprises the steps of detecting key points of a face of a first screen picture to obtain the positions of key points of the face in the first screen picture and the positions of edge key points of the face, obtaining a first area and a second area of a first area surrounded by the key points of the face in the first screen picture according to the positions of the key points of the face in the first screen picture and the positions of the edge key points of the face, obtaining the ratio of the difference value of the second area and the first area to the second area, and obtaining the second area as the area of the second area surrounded by the edge key points.
In one possible implementation, the information obtaining unit 303 is configured to perform any one of:
respectively extracting features of the first target area and the second target area to obtain the position of a first image feature point in the first target area and the position of a second image feature point in the second target area, and acquiring the position change information based on the positions of the first image feature point and the second image feature point;
and matching the first screen picture with the second screen picture to obtain the similarity between a first target area in the first screen picture and a second target area in the second screen picture, and acquiring the position change information of the image feature points in the first target area and the second target area based on the similarity.
In one possible implementation manner, the information obtaining unit 303 is configured to perform feature extraction on the first target area and the second target area respectively, so as to obtain positions of a plurality of first image feature points in the first target area and positions of a plurality of second image feature points in the second target area;
the information acquiring unit 303 is configured to perform acquiring average position change information between a plurality of first image feature points and a plurality of second image feature points in the first target region as position change information of the image feature points in the first target region and the second target region, based on positions of the plurality of first image feature points in the first target region and positions of the plurality of second image feature points in the second target region.
In one possible implementation, the location acquisition unit 302 is configured to perform any of the following:
when the acquisition mode is a first mode and the proportion is larger than a first proportion threshold value, executing the step of acquiring a first position of a target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first mode is a mode of acquiring a second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is greater than a second ratio threshold, executing the step of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring a second position based on position change information of the image feature point.
In one possible implementation, the position obtaining unit 302 is further configured to perform switching the obtaining mode from the first mode to the second mode when the obtaining mode is the first mode and the ratio is greater than a first ratio threshold.
In one possible implementation, the location obtaining unit 302 is further configured to perform any one of:
when the acquisition mode is a first mode and the proportion is less than or equal to a first proportion threshold value, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture, wherein the first mode is a mode for acquiring the second position based on a face recognition mode;
and when the acquisition mode is a second mode and the ratio is smaller than or equal to a second ratio threshold, performing face recognition on the first screen to obtain a second position of the target part in the first screen, wherein the second ratio threshold is smaller than the first ratio threshold, and the second mode is a mode of acquiring the second position based on the position change information of the image feature point.
In one possible implementation, the apparatus further includes:
a first mode switching unit configured to perform switching of the acquisition mode from the second mode to the first mode when the acquisition mode is the second mode and the ratio is less than or equal to a second ratio threshold.
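The switching rules of units 302 and the mode switching units together form a small state machine with hysteresis, sketched below. The class name `AcquisitionModeSwitcher` and the string mode labels are assumptions made here; the threshold ordering (second threshold smaller than first) follows the text.

```python
class AcquisitionModeSwitcher:
    """Hysteresis between the two acquisition modes."""
    FIRST = "first"    # obtain the second position by face recognition
    SECOND = "second"  # obtain it from feature-point position changes

    def __init__(self, first_threshold, second_threshold):
        # second_threshold < first_threshold gives the hysteresis band.
        self.mode = self.FIRST
        self.first_threshold = first_threshold
        self.second_threshold = second_threshold

    def update(self, ratio):
        """Return the mode for the current frame: switch to tracking when
        too much of the face leaves the picture, and back to recognition
        only once most of it has returned."""
        if self.mode == self.FIRST and ratio > self.first_threshold:
            self.mode = self.SECOND
        elif self.mode == self.SECOND and ratio <= self.second_threshold:
            self.mode = self.FIRST
        return self.mode
```

Because the switch-back threshold is lower than the switch-away threshold, a ratio hovering near either threshold does not make the mode oscillate frame to frame.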
In one possible implementation, the position obtaining unit 302 is further configured to perform:
when the acquisition mode is a first mode and the proportion is greater than a first proportion threshold value, or when the acquisition mode is a second mode and the proportion is less than or equal to a second proportion threshold value, carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation, the position obtaining unit 302 is configured to perform the steps of obtaining a first position of a target portion in a second screen and a first target area including the target portion in the second screen when the ratio is greater than a first ratio threshold;
the position obtaining unit 302 is further configured to perform face recognition on the first screen when the ratio is smaller than or equal to a second ratio threshold, so as to obtain a second position of the target portion in the first screen.
In one possible implementation, the first proportional threshold is the same as the second proportional threshold;
the position obtaining unit 302 is configured to perform the step of performing face recognition on the first screen to obtain a second position of the target portion in the first screen when the ratio is smaller than or equal to the first ratio threshold.
In one possible implementation, the first scaling threshold is greater than the second scaling threshold;
the position acquisition unit 302 is configured to perform:
when the ratio is smaller than the first ratio threshold and larger than the second ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture;
executing the step of acquiring a first position, a first target area and position change information of a target part in a second screen, and acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
In one possible implementation manner, the position obtaining unit 302 is configured to perform weighted summation on the third position and the fourth position to obtain the second position of the target portion in the first screen.
In one possible implementation, the weights of the third and fourth locations are determined based on a relationship of the ratio to a first ratio threshold; or, the weights of the third and fourth locations are determined based on the ratio in relation to a second ratio threshold; or the weights of the third position and the fourth position are determined based on the relation between the current system time and the mode switching duration.
In one possible implementation, the position obtaining unit 302 is configured to obtain the sum of the first position and the position change information, and use the sum as the second position of the target part in the first screen picture.
In one possible implementation, the apparatus further includes a picture acquisition unit configured to perform any one of:
acquiring a previous screen picture of the first screen picture as the second screen picture;
acquiring a first frame of screen picture in the collected multi-frame screen pictures as a second screen picture;
and acquiring the last frame of screen picture of which the proportion is smaller than a first proportion threshold value in the collected multi-frame screen pictures as the second screen picture.
According to the embodiment of the present invention, when the proportion of the face area outside the screen picture to the whole face satisfies a certain condition, the position of the target part in the current screen picture is obtained from the position of the target part in another screen picture and from how the face changes between the two screen pictures, instead of directly through face recognition. An accurate position of the face part can thus still be obtained when the screen picture includes only part of the face; the accuracy is good, recognition failure does not occur, and the recognition effect is good.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The electronic device may be a terminal shown in fig. 4 described below, or may be a server shown in fig. 5 described below, which is not limited in this disclosure.
Fig. 4 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment. The terminal 400 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 402 is used to store at least one instruction for execution by the processor 401 to implement the method for location acquisition of a face region provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 404 may also include NFC (Near Field Communication) related circuits, which are not limited by this disclosure.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the terminal 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power source 409 may be alternating current, direct current, disposable or rechargeable. When power source 409 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the terminal 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the terminal 400 by the user. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or a lower layer of the touch display screen 405. When the pressure sensor 413 is disposed on the side frame of the terminal 400, a user's holding signal to the terminal 400 can be detected, and the processor 401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 414 is used for collecting a fingerprint of the user, and the processor 401 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical key or vendor Logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor Logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415: when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
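The ambient-light-driven brightness control described above can be sketched as a monotonic mapping from measured light intensity to display brightness. The lux bounds, the brightness floor, and the function name below are illustrative assumptions, not values from the embodiment:

```python
def ambient_to_brightness(lux, min_lux=10.0, max_lux=1000.0):
    """Map an ambient light reading (lux) to a display brightness in [0, 1].

    The lux range and the 0.1 brightness floor are illustrative
    assumptions; real terminals tune these thresholds per panel.
    """
    if lux <= min_lux:
        return 0.1  # dim floor for dark environments
    if lux >= max_lux:
        return 1.0  # full brightness in bright environments
    # Linear interpolation between the floor and full brightness
    t = (lux - min_lux) / (max_lux - min_lux)
    return 0.1 + 0.9 * t
```

In practice a controller would also smooth the sensor readings over time so the brightness does not visibly jump with each sample.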
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400 and is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that this distance gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the off-screen state; when the proximity sensor 416 detects that the distance gradually increases, the processor 401 controls the touch display screen 405 to switch from the off-screen state back to the bright-screen state.
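The bright-screen/off-screen switching can be sketched as a tiny state machine. The hysteresis thresholds (`NEAR_CM`, `FAR_CM`) and the class name are illustrative assumptions rather than values from the embodiment:

```python
class ScreenStateController:
    """Toggle between bright-screen and off-screen states based on the
    user-to-panel distance, as the proximity-sensor embodiment describes.

    Two thresholds add hysteresis so the screen does not flicker when
    the distance hovers near a single boundary; the centimetre values
    are illustrative.
    """

    NEAR_CM = 3.0  # closer than this: turn the screen off
    FAR_CM = 5.0   # farther than this: turn the screen back on

    def __init__(self):
        self.screen_on = True

    def update(self, distance_cm):
        if self.screen_on and distance_cm < self.NEAR_CM:
            self.screen_on = False  # e.g. phone raised to the ear
        elif not self.screen_on and distance_cm > self.FAR_CM:
            self.screen_on = True
        return self.screen_on
```

A distance of 4 cm between the two thresholds leaves the current state unchanged, which is exactly the flicker-avoiding behavior hysteresis provides.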
Those skilled in the art will appreciate that the configuration shown in fig. 4 does not limit the terminal 400, which may include more or fewer components than those shown, combine certain components, or use a different arrangement of components.
Fig. 5 is a schematic structural diagram of a server according to an exemplary embodiment. The server 500 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 501 and one or more memories 502, where the memory 502 stores at least one instruction that is loaded and executed by the processor 501 to implement the face position acquisition method provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described here again.
In an exemplary embodiment, a storage medium including instructions is also provided, for example a memory including instructions executable by a processor of an apparatus to perform the above face position acquisition method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium such as a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, or an optical data storage device.
In an exemplary embodiment, a computer program product is also provided, which includes one or more instructions executable by a processor of an electronic device to perform the method steps of the method for acquiring the position of a human face part provided in the above embodiments.
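As a concrete illustration of the method described above, the following sketch combines the two position estimates: a fourth position obtained by adding the feature-point position change to the first position, and a recognition-based third position, fused by weighted summation. All threshold and weight values are illustrative assumptions, and the face-recognition and feature-tracking steps are represented by plain inputs rather than real detectors:

```python
def fuse_positions(third_pos, fourth_pos, weight_recognition):
    """Weighted summation of the recognition-based third position and the
    tracking-based fourth position; the weight value is an assumption."""
    w = weight_recognition
    x = w * third_pos[0] + (1 - w) * fourth_pos[0]
    y = w * third_pos[1] + (1 - w) * fourth_pos[1]
    return (x, y)


def acquire_second_position(proportion, first_pos, delta, recognized_pos,
                            first_threshold=0.4, second_threshold=0.2):
    """Sketch of the overall flow: weight the two estimates according to
    the proportion of the face lying outside the frame.

    proportion:     fraction of the face area outside the screen picture
    first_pos:      target-part position in the earlier screen picture
    delta:          position change of image feature points between frames
    recognized_pos: position found by face recognition in the current picture

    The thresholds and weights are hypothetical, chosen only to show the
    mode-dependent weighting idea.
    """
    # Fourth position: propagate the old position by the feature-point motion
    fourth_pos = (first_pos[0] + delta[0], first_pos[1] + delta[1])
    if proportion <= second_threshold:
        # Face mostly inside the frame: trust recognition more
        return fuse_positions(recognized_pos, fourth_pos, weight_recognition=0.8)
    if proportion > first_threshold:
        # Face largely outside the frame: trust feature tracking more
        return fuse_positions(recognized_pos, fourth_pos, weight_recognition=0.2)
    # Transition band: balance the two estimates
    return fuse_positions(recognized_pos, fourth_pos, weight_recognition=0.5)
```

The weighted sum is what keeps the returned position stable when the face slides partially out of frame: as occlusion grows, the estimate degrades gracefully toward pure tracking instead of jumping when recognition fails outright.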
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (32)

1. A method for acquiring the position of a human face part is characterized by comprising the following steps:
acquiring the proportion of the area of a human face located outside a first screen picture to the area of the face;
when an acquisition mode is a first mode and the proportion is greater than a first proportion threshold, or when the acquisition mode is a second mode and the proportion is less than or equal to a second proportion threshold, performing face recognition on the first screen picture to obtain a third position of a target part in the first screen picture, wherein the first mode is a mode of acquiring a second position based on face recognition, and the second mode is a mode of acquiring the second position based on position change information of image feature points;
acquiring a first position of the target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first screen picture and the second screen picture are screen pictures collected at different moments;
acquiring position change information of image feature points in the first target area and a second target area, wherein the second target area is an area in the first screen picture that corresponds to the first target area and includes the target part;
acquiring a fourth position of the target part in the first screen picture based on the first position and the position change information;
and acquiring a second position of the target part in the first screen picture based on the third position and the fourth position.
2. The method for acquiring the position of the human face part according to claim 1, wherein the acquiring the proportion of the area of the human face outside the first screen picture to the face comprises any one of:
performing face key point detection on the first screen picture to obtain the number of key points of the face located in the first screen picture, and acquiring the ratio of the difference between a target number and the number to the target number;
the method comprises the steps of detecting key points of a face of a first screen picture to obtain the positions of key points of the face in the first screen picture and the positions of edge key points of the face, obtaining a first area and a second area of a first area defined by the key points of the face in the first screen picture according to the positions of the key points of the face in the first screen picture and the positions of the edge key points of the face, obtaining the ratio of the difference value of the second area and the first area to the second area, and obtaining the second area as the area of the second area defined by the edge key points.
3. The method for acquiring the position of the human face part according to claim 1, wherein the acquiring of the position change information of the image feature points in the first target region and the second target region includes any one of:
respectively extracting features of the first target area and the second target area to obtain the position of a first image feature point in the first target area and the position of a second image feature point in the second target area, and acquiring the position change information based on the positions of the first image feature point and the second image feature point;
and matching the first screen picture with the second screen picture to obtain the similarity of a first target area in the first screen picture and a second target area in the second screen picture, and acquiring the position change information of the image feature points in the first target area and the second target area based on the similarity.
4. The method for acquiring the position of the human face part according to claim 3, wherein the performing feature extraction on the first target area and the second target area respectively to obtain the position of the first image feature point in the first target area and the position of the second image feature point in the second target area comprises:
respectively extracting features of the first target area and the second target area to obtain the positions of a plurality of first image feature points in the first target area and the positions of a plurality of second image feature points in the second target area;
the acquiring the position change information based on the position of the first image feature point and the position of the second image feature point includes:
and acquiring average position change information between the plurality of first image feature points and the plurality of second image feature points according to the positions of the plurality of first image feature points in the first target area and the positions of the plurality of second image feature points in the second target area, and taking the average position change information as the position change information of the image feature points in the first target area and the second target area.
5. The method for acquiring the position of the human face part according to claim 1, wherein after the acquiring the proportion of the area of the human face outside the first screen picture to the face, the method further comprises:
when the acquisition mode is a second mode and the ratio is larger than a second ratio threshold, executing the steps of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, and acquiring position change information of image feature points in the first target area and the second target area, and acquiring a second position of the target part in the first screen based on the first position and the position change information.
6. The method for acquiring the position of the human face part according to claim 1, wherein after the acquiring the proportion of the area of the human face outside the first screen picture to the face, the method further comprises:
when an acquisition mode is a first mode and the ratio is greater than a first ratio threshold, switching the acquisition mode from the first mode to the second mode.
7. The method for acquiring the position of the human face part according to claim 1, wherein after the acquiring the proportion of the area of the human face outside the first screen picture to the face, the method further comprises:
and when the acquisition mode is a first mode and the ratio is less than or equal to a first ratio threshold, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
8. The method for acquiring the position of the human face part according to claim 1, wherein after the acquiring the proportion of the area of the human face outside the first screen picture to the face, the method further comprises:
when the acquisition mode is a second mode and the ratio is less than or equal to a second ratio threshold, switching the acquisition mode from the second mode to the first mode.
9. The method for acquiring the position of the human face part according to claim 1, wherein after the acquiring the proportion of the area of the human face outside the first screen picture to the face, the method further comprises any one of the following steps:
when the ratio is larger than a first ratio threshold, executing the steps of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, and acquiring position change information of image feature points in the first target area and the second target area, and acquiring a second position of the target part in the first screen based on the first position and the position change information;
and when the proportion is smaller than or equal to a second proportion threshold value, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
10. The method for acquiring the position of the human face according to claim 9, wherein the first proportional threshold is the same as the second proportional threshold;
when the ratio is smaller than or equal to a second ratio threshold, performing face recognition on the first screen to obtain a second position of the target part in the first screen, including:
and when the ratio is smaller than or equal to the first ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
11. The method for acquiring the position of the human face according to claim 1, wherein the first proportional threshold is greater than the second proportional threshold;
before the face recognition is performed on the first screen image to obtain the third position of the target portion in the first screen image, the method further includes:
and when the ratio is smaller than the first ratio threshold and larger than the second ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture.
12. The method for obtaining the position of the face part according to claim 1, wherein the obtaining the second position of the target part in the first screen based on the third position and the fourth position comprises:
and carrying out weighted summation on the third position and the fourth position to obtain a second position of the target part in the first screen picture.
13. The method according to claim 12, wherein the weights of the third position and the fourth position are determined based on a relationship between the proportion and a first proportion threshold; or the weights of the third position and the fourth position are determined based on a relationship between the proportion and a second proportion threshold; or the weights of the third position and the fourth position are determined based on a relationship between the current system time and the mode switching duration.
14. The method for obtaining the position of the face part according to claim 1, wherein the obtaining the fourth position of the target part in the first screen based on the first position and the position change information comprises:
and acquiring a sum of the first position and the position change information, and taking the sum as the fourth position of the target part in the first screen picture.
15. The method for acquiring the position of the face part according to claim 1, wherein before the acquiring the first position of the target part in the second screen and the first target area including the target part in the second screen, the method further comprises any one of the following steps:
acquiring a previous screen picture of the first screen picture as the second screen picture;
acquiring a first frame of screen picture in the collected multi-frame screen pictures as the second screen picture;
and acquiring the last frame of screen picture of which the proportion is smaller than a first proportion threshold value in the collected multi-frame screen pictures as the second screen picture.
16. A position acquisition device for a human face part is characterized by comprising:
a scale acquisition unit configured to acquire the proportion of the area of a human face located outside a first screen picture to the area of the face;
a position acquisition unit configured to perform face recognition on the first screen picture to obtain a third position of a target part in the first screen picture when an acquisition mode is a first mode and the proportion is greater than a first proportion threshold, or when the acquisition mode is a second mode and the proportion is less than or equal to a second proportion threshold, wherein the first mode is a mode of acquiring a second position based on face recognition, and the second mode is a mode of acquiring the second position based on position change information of image feature points;
the position acquisition unit is further configured to acquire a first position of the target part in a second screen picture and a first target area including the target part in the second screen picture, wherein the first screen picture and the second screen picture are screen pictures collected at different moments;
an information acquisition unit configured to acquire position change information of image feature points in the first target area and a second target area, the second target area being an area in the first screen picture that corresponds to the first target area and includes the target part;
the position acquiring unit is further configured to execute acquiring a fourth position of the target part in the first screen based on the first position and the position change information;
the position acquisition unit is further configured to perform acquisition of a second position of the target portion in the first screen based on the third position and the fourth position.
17. The apparatus according to claim 16, wherein the scale acquiring unit is configured to perform any one of:
performing face key point detection on the first screen picture to obtain the number of key points of the face located in the first screen picture, and acquiring the ratio of the difference between a target number and the number to the target number;
the method comprises the steps of detecting key points of a face of a first screen picture to obtain the positions of key points of the face in the first screen picture and the positions of edge key points of the face, obtaining a first area and a second area of a first area defined by the key points of the face in the first screen picture according to the positions of the key points of the face in the first screen picture and the positions of the edge key points of the face, obtaining the ratio of the difference value of the second area and the first area to the second area, and obtaining the second area as the area of the second area defined by the edge key points.
18. The apparatus according to claim 16, wherein the information acquisition unit is configured to perform any one of:
respectively extracting features of the first target area and the second target area to obtain the position of a first image feature point in the first target area and the position of a second image feature point in the second target area, and acquiring the position change information based on the positions of the first image feature point and the second image feature point;
and matching the first screen picture with the second screen picture to obtain the similarity of a first target area in the first screen picture and a second target area in the second screen picture, and acquiring the position change information of the image feature points in the first target area and the second target area based on the similarity.
19. The apparatus according to claim 18, wherein the information acquiring unit is configured to perform feature extraction on the first target region and the second target region, respectively, to obtain positions of a plurality of first image feature points in the first target region and positions of a plurality of second image feature points in the second target region;
the information acquisition unit is configured to acquire, according to the positions of the plurality of first image feature points in the first target area and the positions of the plurality of second image feature points in the second target area, average position change information between the plurality of first image feature points and the plurality of second image feature points, and take the average position change information as the position change information of the image feature points in the first target area and the second target area.
20. The device for acquiring the position of the human face part according to claim 16, wherein the position acquiring unit is configured to perform:
when the acquisition mode is a second mode and the ratio is larger than a second ratio threshold, executing the steps of acquiring a first position of a target part in a second screen and a first target area including the target part in the second screen, and acquiring position change information of image feature points in the first target area and the second target area, and acquiring a second position of the target part in the first screen based on the first position and the position change information.
21. The apparatus according to claim 16, wherein the position acquiring unit is further configured to switch the acquisition mode from the first mode to the second mode when the acquisition mode is the first mode and the ratio is greater than a first ratio threshold.
22. The device for acquiring the position of the human face part according to claim 16, wherein the position acquiring unit is further configured to perform:
and when the acquisition mode is a first mode and the ratio is less than or equal to a first ratio threshold, carrying out face recognition on the first screen picture to obtain a second position of the target part in the first screen picture.
23. The device for acquiring the position of the human face part according to claim 16, characterized in that the device further comprises:
a first mode switching unit configured to perform switching of an acquisition mode from a second mode to the first mode when the acquisition mode is the second mode and the ratio is less than or equal to a second ratio threshold.
24. The apparatus according to claim 16, wherein the position acquiring unit is configured to perform the steps of acquiring a first position of a target portion in the second screen and a first target area including the target portion in the second screen, acquiring position change information of image feature points in the first target area and the second target area, and acquiring a second position of the target portion in the first screen based on the first position and the position change information, when the ratio is larger than a first ratio threshold;
the position acquisition unit is further configured to perform face recognition on the first screen image when the ratio is smaller than or equal to a second ratio threshold value, so as to obtain a second position of the target part in the first screen image.
25. The device for acquiring the position of the human face according to claim 24, wherein the first proportional threshold is the same as the second proportional threshold;
the position acquisition unit is configured to execute the step of performing face recognition on the first screen picture to obtain a second position of the target part in the first screen picture when the ratio is smaller than or equal to the first ratio threshold.
26. The device for acquiring the position of the human face according to claim 16, wherein the first proportional threshold is greater than the second proportional threshold;
the position acquisition unit is configured to perform:
and when the ratio is smaller than the first ratio threshold and larger than the second ratio threshold, executing the step of carrying out face recognition on the first screen picture to obtain a third position of the target part in the first screen picture.
27. The apparatus according to claim 16, wherein the position acquiring unit is configured to perform weighted summation of the third position and the fourth position to obtain the second position of the target portion in the first screen.
28. The apparatus for acquiring the position of the human face part according to claim 27, wherein the weights of the third position and the fourth position are determined based on a relationship between the proportion and a first proportion threshold; or the weights of the third position and the fourth position are determined based on a relationship between the proportion and a second proportion threshold; or the weights of the third position and the fourth position are determined based on a relationship between the current system time and the mode switching duration.
29. The apparatus according to claim 16, wherein the position acquisition unit is configured to acquire a sum of the first position and the position change information as the fourth position of the target part in the first screen picture.
30. The apparatus according to claim 16, further comprising a picture acquiring unit configured to perform any one of:
acquiring a previous screen picture of the first screen picture as the second screen picture;
acquiring a first frame of screen picture in the collected multi-frame screen pictures as the second screen picture;
and acquiring the last frame of screen picture of which the proportion is smaller than a first proportion threshold value in the collected multi-frame screen pictures as the second screen picture.
31. An electronic device, comprising:
one or more processors;
one or more memories for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to execute the instructions to implement the method for acquiring the position of the human face part according to any one of claims 1 to 15.
32. A storage medium characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to execute the method for acquiring a position of a human face part according to any one of claims 1 to 15.
CN201910517973.5A 2019-06-14 2019-06-14 Face position acquisition method and device, electronic equipment and storage medium Active CN110263695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910517973.5A CN110263695B (en) 2019-06-14 2019-06-14 Face position acquisition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110263695A CN110263695A (en) 2019-09-20
CN110263695B true CN110263695B (en) 2021-07-16

Family

ID=67918457

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139919A (en) * 2021-05-08 2021-07-20 广州繁星互娱信息科技有限公司 Special effect display method and device, computer equipment and storage medium
CN117953567A (en) * 2024-01-26 2024-04-30 广州宏途数字科技有限公司 Multi-mode face recognition method and system for school entrance guard attendance

Citations (15)

Publication number Priority date Publication date Assignee Title
CN101440676A (en) * 2008-12-22 2009-05-27 北京中星微电子有限公司 Intelligent anti-theft door lock based on cam and warning processing method thereof
CN102129695A (en) * 2010-01-19 2011-07-20 中国科学院自动化研究所 Target tracking method based on modeling of occluder under condition of having occlusion
CN104318211A (en) * 2014-10-17 2015-01-28 中国传媒大学 Anti-shielding face tracking method
CN105678288A (en) * 2016-03-04 2016-06-15 北京邮电大学 Target tracking method and device
CN105913028A (en) * 2016-04-13 2016-08-31 华南师范大学 Face tracking method and face tracking device based on face++ platform
CN106485215A (en) * 2016-09-29 2017-03-08 西交利物浦大学 Face occlusion detection method based on depth convolutional neural networks
CN107451453A (en) * 2017-07-28 2017-12-08 广东欧珀移动通信有限公司 Solve lock control method and Related product
CN108197604A (en) * 2018-01-31 2018-06-22 上海敏识网络科技有限公司 Fast face positioning and tracing method based on embedded device
CN108319953A (en) * 2017-07-27 2018-07-24 腾讯科技(深圳)有限公司 Occlusion detection method and device, electronic equipment and the storage medium of target object
CN108492315A (en) * 2018-02-09 2018-09-04 湖南华诺星空电子技术有限公司 A kind of dynamic human face tracking
CN108875534A (en) * 2018-02-05 2018-11-23 北京旷视科技有限公司 Method, apparatus, system and the computer storage medium of recognition of face
CN108898087A (en) * 2018-06-22 2018-11-27 腾讯科技(深圳)有限公司 Training method, device, equipment and the storage medium of face key point location model
CN108932456A (en) * 2017-05-23 2018-12-04 北京旷视科技有限公司 Face identification method, device and system and storage medium
CN109299658A (en) * 2018-08-21 2019-02-01 腾讯科技(深圳)有限公司 Face area detecting method, face image rendering method, device and storage medium
CN109635752A (en) * 2018-12-12 2019-04-16 腾讯科技(深圳)有限公司 Localization method, face image processing process and the relevant apparatus of face key point

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9600711B2 (en) * 2012-08-29 2017-03-21 Conduent Business Services, Llc Method and system for automatically recognizing facial expressions via algorithmic periocular localization
CN105678213B (en) * 2015-12-20 2021-08-10 华南理工大学 Dual-mode mask person event automatic detection method based on video feature statistics
CN109558837B (en) * 2018-11-28 2024-03-22 北京达佳互联信息技术有限公司 Face key point detection method, device and storage medium
CN109784256A (en) * 2019-01-07 2019-05-21 腾讯科技(深圳)有限公司 Face identification method and device, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN110798790B (en) Microphone abnormality detection method, device and storage medium
CN109815150B (en) Application testing method and device, electronic equipment and storage medium
CN110992493A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108965922B (en) Video cover generation method and device and storage medium
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN110839128B (en) Photographing behavior detection method and device and storage medium
CN110769313B (en) Video processing method and device and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN111880888B (en) Preview cover generation method and device, electronic equipment and storage medium
CN112907725A (en) Image generation method, image processing model training method, image processing device, and storage medium
CN112084811A (en) Identity information determining method and device and storage medium
CN113763228A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN113627413A (en) Data labeling method, image comparison method and device
CN110263695B (en) Face position acquisition method and device, electronic equipment and storage medium
CN111753606A (en) Intelligent model upgrading method and device
CN111931712A (en) Face recognition method and device, snapshot machine and system
CN111354378A (en) Voice endpoint detection method, device, equipment and computer storage medium
CN110992954A (en) Voice recognition method, device, equipment, and storage medium
CN111860064A (en) Target detection method, device and equipment based on video and storage medium
CN113592874B (en) Image display method, device and computer equipment
CN111063372B (en) Method, device and equipment for determining pitch characteristics and storage medium
CN112015612B (en) Method and device for acquiring stutter information
CN111723615B (en) Method and device for judging matching of detected objects in detected object image
CN111757146B (en) Method, system and storage medium for video splicing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant