WO2018103416A1 - Detection method and device for face images - Google Patents

Detection method and device for face images Download PDF

Info

Publication number
WO2018103416A1
WO2018103416A1 · PCT/CN2017/103289 · CN2017103289W
Authority
WO
WIPO (PCT)
Prior art keywords
face image
key point
current
face
image
Prior art date
Application number
PCT/CN2017/103289
Other languages
English (en)
French (fr)
Inventor
杨铭
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司 filed Critical 广州视源电子科技股份有限公司
Publication of WO2018103416A1 publication Critical patent/WO2018103416A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole

Definitions

  • Embodiments of the present invention relate to the field of image processing technologies, and in particular, to a method and apparatus for detecting a face image.
  • With the development of computer technology, face verification technology has matured and its commercial applications have become widespread.
  • However, face information can easily be copied in the form of photos, videos, and the like, so impersonation of a legitimate user's face is a serious threat to the security of a face verification system.
  • In recent years, live face detection technology has made some progress.
  • In existing face image detection methods, features of the face itself are typically matched against pre-stored features and the detection result follows from the match; such methods offer poor interactivity, and a photo may also pass the detection.
  • Existing methods fall roughly into three categories.
  • The first is based on texture information.
  • Its advantage is that it is easy to implement and requires no user cooperation.
  • Its disadvantages are that it cannot handle images with little texture and that it places high demands on data diversity.
  • The second is based on motion information.
  • Its advantages are that it does not depend on image texture, is difficult to attack with two-dimensional images, and requires no user cooperation.
  • Its disadvantages are that it requires video input, that detection is unreliable when the motion in the video is subtle, and that it may still be attacked with a three-dimensional mold.
  • The third is based on liveness features; its advantage is that it can resist attacks by both 2D images and 3D molds and is unaffected by image texture.
  • Its disadvantage is that it requires user cooperation, and users interpret action instructions differently and react differently (for example, when the system prompts the user to open the mouth, the degree of opening varies from person to person), so it is difficult for the algorithm to guarantee both an ideal false acceptance rate and an ideal false rejection rate.
  • In view of this, the present invention provides a method and apparatus for detecting a face image that uses a visual feedback mechanism to overcome the ambiguity of action instructions, so that the face recognition algorithm can simultaneously achieve a low false acceptance rate and a low false rejection rate.
  • In a first aspect, an embodiment of the present invention provides a method for detecting a face image, the method comprising: acquiring a first face image at the current moment; acquiring a first current position of a key point in the first face image; determining a target position of the key point according to the first current position of the key point and a preset standard face target image; and displaying the motion of the key point from the first current position to the corresponding target position.
  • Further, the method includes: acquiring a second face image at the next moment; acquiring a second current position of the key point in the second face image; determining the distance between the second current position of the key point and the target position; and, if the distance between the second current position and the target position is less than or equal to a distance threshold, determining that this detection of the face image is legitimate; otherwise, continuing to acquire face images at subsequent moments until a detection of the face image is legitimate.
  • Further, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.
  • Further, the method includes: if the face image is detected as legitimate a preset number of consecutive times, determining that the current face image detection has ended.
  • Optionally, the key points include key points of the facial-feature contours in the face image and/or key points of the face contour.
  • In a second aspect, an embodiment of the present invention provides a device for detecting a face image, the device comprising: a first image acquiring unit configured to acquire a first face image at the current moment; a first position acquiring unit, connected to the first image acquiring unit and configured to acquire a first current position of a key point in the first face image; a target position determining unit, connected to the first position acquiring unit and configured to determine a target position of the key point according to the first current position of the key point and a preset standard face target image; and an operation unit, connected to the target position determining unit and configured to display the motion of the key point from the first current position to the corresponding target position.
  • Further, the device includes: a second image acquiring unit, connected to the first image acquiring unit and configured to acquire a second face image at the next moment; a second position acquiring unit, connected to the second image acquiring unit and configured to acquire a second current position of the key point in the second face image; a distance determining unit, connected to the second position acquiring unit and configured to determine the distance between the second current position of the key point and the target position; and a detecting unit, connected to the distance determining unit and configured to determine that this detection of the face image is legitimate if the distance between the second current position and the target position is less than or equal to a distance threshold, and otherwise to continue acquiring face images at subsequent moments until a detection of the face image is legitimate.
  • Further, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.
  • Further, the device includes: a determining unit, connected to the detecting unit and configured to determine that the current face image detection has ended if the face image is detected as legitimate a preset number of consecutive times.
  • Optionally, the key points include key points of the facial-feature contours in the face image and/or key points of the face contour.
  • In the embodiments of the present invention, the first current position of a key point in the first face image is acquired, the target position of the key point is determined according to the first current position and a preset standard face target image, and the motion of the key point from the first current position to the corresponding target position is displayed.
  • The visual feedback mechanism overcomes the ambiguity of action instructions, reduces the probability that the face is attacked with photos and videos during face detection, and achieves a low false acceptance rate and a low false rejection rate.
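The summarized feedback flow can be sketched as a polling loop. This is an illustrative sketch only, not the patent's implementation: `capture_frame`, `locate_keypoints`, and `make_target` are hypothetical placeholders for a camera API, a landmark detector, and the target-generation step, and the threshold and pass-count values are assumptions.

```python
import numpy as np

DIST_THRESHOLD = 15.0   # assumed distance threshold L
REQUIRED_PASSES = 3     # assumed consecutive pass count C

def detect_face_liveness(capture_frame, locate_keypoints, make_target,
                         max_frames=1000):
    """Visual-feedback liveness check: propose target key-point positions,
    then poll frames until the detected key points are close enough."""
    passes = 0
    frames_used = 0
    while passes < REQUIRED_PASSES:
        current = locate_keypoints(capture_frame())   # first current positions
        target = make_target(current)                 # targets from current pose
        while True:                                   # poll subsequent frames
            if frames_used >= max_frames:
                return False                          # give-up guard (an addition,
                                                      # not part of the patent)
            second = locate_keypoints(capture_frame())
            frames_used += 1
            dist = float(np.linalg.norm(second - target, axis=1).sum())
            if dist <= DIST_THRESHOLD:                # this round is legitimate
                passes += 1
                break
    return True
```

A real system would plug in a camera capture function and a landmark detector; here any stubs with the right shapes will exercise the loop.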
  • FIG. 1 is a flowchart of a method for detecting a face image according to Embodiment 1 of the present invention
  • FIG. 2 is a flowchart of a method for detecting a face image according to Embodiment 2 of the present invention
  • FIG. 3 is a schematic diagram of generating a target position of each key point in a face image according to Embodiment 2 of the present invention.
  • FIG. 4 is a structural diagram of a device for detecting a face image according to a third embodiment of the present invention.
  • The method and device for detecting a face image can run on a terminal installed with an operating system such as Windows (an operating system platform developed by Microsoft Corporation), Android (an operating system platform developed by Google Inc. for portable mobile smart devices), iOS (an operating system platform developed by Apple Inc. for portable mobile smart devices), or Windows Phone (an operating system platform developed by Microsoft Corporation for portable mobile smart devices); the terminal may be any of a desktop computer, a laptop, a mobile phone, a palmtop computer, a tablet, a digital camera, a digital video camera, and the like.
  • FIG. 1 is a flowchart of a method for detecting a face image according to a first embodiment of the present invention. The method uses a visual feedback mechanism to overcome the ambiguity of action instructions, reduces the probability that the face is attacked with photos and videos during face detection, and achieves a low false acceptance rate and a low false rejection rate.
  • The method may be performed by a device having a face image detection function, which may be implemented in software and/or hardware, typically on a user terminal device such as a mobile phone or a computer.
  • a method for detecting a face image in this embodiment includes: step S110, step S120, step S130, and step S140.
  • Step S110 Acquire a first face image of the current time.
  • Specifically, the first face image at the current moment is captured by a camera, and the algorithm for recognizing the face image can be customized as needed; optionally, when capturing the first face image, the front camera can be turned on automatically and the captured face image displayed.
  • Step S120 Acquire a first current position of a key point in the first face image.
  • Specifically, the key points in the first face image are obtained from the first face image; a key-point localization algorithm can precisely locate the key points and supports a degree of occlusion and multi-angle positioning.
  • Each part of the first face image corresponds to one type of key point.
  • Recognition algorithms based on key points in a face image may include, but are not limited to, sample-based face shape learning, a local-texture-constrained active appearance model, and feature localization based on the AdaBoost learning strategy.
  • the key points include key points of the facial features in the face image and/or key points of the face contour.
  • the key points of the face image include key points of the eyes, ears, nose, mouth and eyebrow contours and/or key points of the face contour.
  • Step S130 determining a target position of the key point according to the first current position of the key point and a preset standard face target image.
  • Specifically, the standard face target image may be preset by the user as needed.
  • Optionally, there is at least one preset standard face target image; a preset standard face target image may be, for example, the mouth opened 45 degrees or the eyes opened until the maximum distance between the upper and lower eyelids is 1 cm. The target position of a key point is determined from its first current position and the preset standard face target image; optionally, from the first current position of the mouth key points (the mouth opened 30 degrees), the target positions of the mouth key points are determined in combination with the standard face target image (the mouth opened 45 degrees).
  • Step S140 showing a motion process of the key point from the first current position to the corresponding target position.
  • Specifically, the motion of the key point from the first current position to the corresponding target position is displayed.
  • Optionally, when the first current positions of the mouth key points are their positions with the mouth opened 30 degrees and their target positions are their positions with the mouth opened 45 degrees, the motion from the first current positions to the mouth target positions is displayed.
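One way to render this prompted motion on screen is to interpolate the key points from the current positions toward the targets and draw each intermediate contour. Linear interpolation here is an illustrative choice on my part; the patent does not specify how the motion is animated.

```python
import numpy as np

def interpolate_keypoints(current, target, steps=10):
    """Linearly interpolate each key point from its current position to its
    target position, yielding one intermediate key-point set per display frame."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * current + t * target

# e.g. animate one mouth key point moving 10 px to the right
frames = list(interpolate_keypoints(np.array([[0.0, 0.0]]),
                                    np.array([[10.0, 0.0]]), steps=5))
```

Each yielded array can be drawn as a contour overlay, so the user sees their own key points glide toward the requested pose.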
  • In this embodiment, the first current position of a key point in the first face image is acquired, the target position of the key point is determined according to the first current position and a preset standard face target image, and the motion of the key point from the first current position to the corresponding target position is displayed.
  • a method for detecting a face image includes: step S210, step S220, step S230, step S240, step S250, step S260, step S270, and step S280.
  • Step S210 Acquire a first face image of the current time.
  • Step S220 Acquire a first current position of a key point in the first face image.
  • Step S230 determining a target position of the key point according to the first current position of the key point and a preset standard face target image.
  • Step S240 showing a motion process of the key point from the first current position to the corresponding target position.
  • Step S250 acquiring a second face image of the next moment.
  • Specifically, during the motion of the key point from the first current position to the corresponding target position, the second face image at the next moment is acquired.
  • The second face image is captured by the camera, and the algorithm for recognizing the face image can be customized as needed; optionally, the front camera can be turned on automatically and the captured face image displayed.
  • Step S260 acquiring a second current position of a key point in the second face image.
  • the second current position of the key point in the second face image is acquired.
  • the key points in the second face image include key points of the facial features in the face image and/or key points of the face contour.
  • the key points of the face image include key points of the eyes, ears, nose, mouth and eyebrow contours and/or key points of the face contour.
  • Step S270 determining a distance between the second current position of the key point and the target position.
  • Specifically, the distance between the second current position of the key point and the target position is determined. Optionally, when the second current positions of the mouth key points are their positions with the mouth opened 30 degrees and the target positions are their positions with the mouth opened 45 degrees, the distance of each mouth key point from its 30-degree position to its 45-degree position is computed.
  • FIG. 3 is a schematic diagram of target position generation of each key point in the face image.
  • The key-point positions represented by contour 310 and contour 320 are the target key-point positions, and the key-point positions represented by contour 330 and contour 340 are the second current positions of the key points.
  • To make clear what action the user should perform next, the system generates suitable key-point target positions and, according to these target positions, visually draws the target face contour on the screen, intuitively prompting the user to perform the next facial action.
  • To generate suitable target positions, the system refers to the key-point detection result of the previous step, because the positions of these key points reflect the user's facial features and current facial state, from which it can be determined which key-point positions are better suited as the next action target.
  • Optionally, when the user's mouth is currently closed (the positions formed by the key points of contour 330 and contour 340), the next step may ask the user to open the mouth; the extent and shape of the opening can be determined from the width of the user's mouth and other features, and the target shape is drawn (the positions formed by the key points of contour 310 and contour 320) to tell the user intuitively and clearly what to do next.
  • Preferably, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.
  • Optionally, when there are 50 key points, the distance is the sum of the Euclidean distances between the position of each of the 50 key points and its corresponding target position, i.e. ∑_{i=1}^{50} ||x_i - y_i||.
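The distance criterion, a sum of per-key-point Euclidean distances, can be sketched with NumPy. The array shapes and the example coordinates are illustrative assumptions, not values from the patent.

```python
import numpy as np

def keypoint_distance(current, target):
    """Sum over all key points of the Euclidean distance between the
    second current position x_i and the target position y_i (arrays of
    shape (N, 2))."""
    current = np.asarray(current, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.linalg.norm(current - target, axis=1).sum())

# two key points: one a 3-4-5 triangle away from its target, one already there
d = keypoint_distance([[10.0, 12.0], [20.0, 22.0]],
                      [[13.0, 16.0], [20.0, 22.0]])  # d == 5.0
```

The result would then be compared against the threshold L to decide whether the current detection round is legitimate.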
  • Step S280 if the distance between the second current position and the target position is less than or equal to the distance threshold, it is determined that the detection of the face image is legal; otherwise, the face image of the subsequent time is continued to be obtained until the face is The detection of the image is legal.
  • Specifically, a distance threshold L is set. When the computed distance between the second current positions of the key points and the target positions is less than or equal to the distance threshold L, this detection of the face image is determined to be legitimate; when the computed distance is greater than the distance threshold L, the detection is not legitimate, and face images at subsequent moments continue to be acquired until a detection of the face image is legitimate.
  • Preferably, the method for detecting a face image further includes: if the face image is detected as legitimate a preset number of consecutive times, determining that the current face image detection has ended.
  • Specifically, the number of times the face image detection is legitimate is recorded, and a consecutive preset count C is set; if the number of consecutive legitimate detections reaches C, the current face image detection is determined to have ended.
  • In this embodiment, by acquiring the second positions of the key points in the second face image, the distance between the second current positions and the target positions is determined and compared with the distance threshold to judge whether this detection of the face image is legitimate; if not, face images at subsequent moments continue to be acquired until a detection is legitimate. This realizes the legitimacy judgment in face image detection.
  • The device is suited to performing the detection methods provided in the first and second embodiments, and specifically includes: a first image acquiring unit 410, a first position acquiring unit 420, a target position determining unit 430, and an operation unit 440.
  • the first image obtaining unit 410 is configured to acquire a first face image of the current time.
  • the first location acquiring unit 420 is connected to the first image acquiring unit 410 and configured to acquire a first current location of a key point in the first facial image.
  • the target position determining unit 430 is connected to the first position acquiring unit 420, and is configured to determine a target position of the key point according to the first current position of the key point and a preset standard face target image.
  • the operation unit 440 is connected to the target position determining unit 430 for displaying a motion process of the key point from the first current position to the corresponding target position.
  • the device further includes: a second image acquisition unit, a second location acquisition unit, a distance determination unit, and a detection unit.
  • The second image acquiring unit is connected to the first image acquiring unit 410 and configured to acquire the second face image at the next moment.
  • The second position acquiring unit is connected to the second image acquiring unit and configured to acquire the second current position of the key point in the second face image.
  • The distance determining unit is connected to the second position acquiring unit and configured to determine the distance between the second current position of the key point and the target position.
  • The detecting unit is connected to the distance determining unit and configured to determine that this detection of the face image is legitimate if the distance between the second current position and the target position is less than or equal to the distance threshold, and otherwise to continue acquiring face images at subsequent moments until a detection of the face image is legitimate.
  • Further, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.
  • the device further includes a determining unit.
  • The determining unit is connected to the detecting unit and configured to determine that the current face image detection has ended if the face image is detected as legitimate a preset number of consecutive times.
  • key points include key points of the facial features in the face image and/or key points of the face contour.
  • In the embodiments of the present invention, the first current position of a key point in the first face image is acquired, the target position of the key point is determined according to the first current position and a preset standard face target image, and the motion of the key point from the first current position to the corresponding target position is displayed.
  • The visual feedback mechanism overcomes the ambiguity of action instructions, reduces the probability that the face is attacked with photos and videos during face detection, and achieves a low false acceptance rate and a low false rejection rate.

Abstract

Embodiments of the present invention disclose a detection method and device for face images. The method includes: acquiring a first face image at the current moment; acquiring a first current position of a key point in the first face image; determining a target position of the key point according to the first current position of the key point and a preset standard face target image; and displaying the motion of the key point from the first current position to the corresponding target position. The embodiments of the present invention use a visual feedback mechanism to overcome the ambiguity of action instructions, reduce the probability that the face is attacked with photos and videos during face detection, and achieve a low false acceptance rate and a low false rejection rate.

Description

Detection method and device for face images

TECHNICAL FIELD

Embodiments of the present invention relate to the field of image processing technologies, and in particular to a detection method and device for face images.

BACKGROUND

With the development of computer technology, face verification technology has matured and its commercial applications have become widespread. However, face information can easily be copied in the form of photos, videos, and the like, so impersonation of a legitimate user's face is a serious threat to the security of a face verification system. In recent years, live face detection technology has made some progress. In existing face image detection methods, features of the face itself are typically matched against pre-stored features and the detection result follows from the match; such methods offer poor interactivity, and a photo may also pass the detection.

Existing methods fall roughly into three categories. The first is based on texture information; it is easy to implement and requires no user cooperation, but it cannot handle images with little texture and places high demands on data diversity. The second is based on motion information; it does not depend on image texture, is difficult to attack with two-dimensional images, and requires no user cooperation, but it requires video input, detection is unreliable when the motion in the video is subtle, and it may be attacked with a three-dimensional mold. The third is based on liveness features; it can resist attacks by both two-dimensional images and three-dimensional molds and is unaffected by image texture, but it requires user cooperation, and users interpret action instructions differently and react differently (for example, when prompted by the system to open the mouth, each person opens it to a different degree), so it is difficult for the algorithm to guarantee both an ideal false acceptance rate and an ideal false rejection rate.
SUMMARY

In view of this, the present invention provides a detection method and device for face images that uses a visual feedback mechanism to overcome the ambiguity of action instructions, so that the face recognition algorithm can simultaneously achieve a low false acceptance rate and a low false rejection rate.

In a first aspect, an embodiment of the present invention provides a detection method for face images, the method including: acquiring a first face image at the current moment; acquiring a first current position of a key point in the first face image; determining a target position of the key point according to the first current position of the key point and a preset standard face target image; and displaying the motion of the key point from the first current position to the corresponding target position.

Further, the method includes: acquiring a second face image at the next moment; acquiring a second current position of the key point in the second face image; determining the distance between the second current position of the key point and the target position; if the distance between the second current position and the target position is less than or equal to a distance threshold, determining that this detection of the face image is legitimate; otherwise, continuing to acquire face images at subsequent moments until a detection of the face image is legitimate.

Further, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.

Further, the method includes: if the face image is detected as legitimate a preset number of consecutive times, determining that the current face image detection has ended.

Further, the key points include key points of the facial-feature contours in the face image and/or key points of the face contour.
In a second aspect, an embodiment of the present invention provides a detection device for face images, the device including: a first image acquiring unit configured to acquire a first face image at the current moment; a first position acquiring unit, connected to the first image acquiring unit and configured to acquire a first current position of a key point in the first face image; a target position determining unit, connected to the first position acquiring unit and configured to determine a target position of the key point according to the first current position of the key point and a preset standard face target image; and an operation unit, connected to the target position determining unit and configured to display the motion of the key point from the first current position to the corresponding target position.

Further, the device includes: a second image acquiring unit, connected to the first image acquiring unit and configured to acquire a second face image at the next moment; a second position acquiring unit, connected to the second image acquiring unit and configured to acquire a second current position of the key point in the second face image; a distance determining unit, connected to the second position acquiring unit and configured to determine the distance between the second current position of the key point and the target position; and a detecting unit, connected to the distance determining unit and configured to determine that this detection of the face image is legitimate if the distance between the second current position and the target position is less than or equal to a distance threshold, and otherwise to continue acquiring face images at subsequent moments until a detection of the face image is legitimate.

Further, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.

Further, the device includes: a determining unit, connected to the detecting unit and configured to determine that the current face image detection has ended if the face image is detected as legitimate a preset number of consecutive times.

Further, the key points include key points of the facial-feature contours in the face image and/or key points of the face contour.
In the embodiments of the present invention, the first current position of a key point in the first face image is acquired, the target position of the key point is determined according to the first current position and a preset standard face target image, and the motion of the key point from the first current position to the corresponding target position is displayed. The visual feedback mechanism overcomes the ambiguity of action instructions, reduces the probability that the face is attacked with photos and videos during face detection, and achieves a low false acceptance rate and a low false rejection rate.

BRIEF DESCRIPTION OF THE DRAWINGS

Other features, objects, and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:

FIG. 1 is a flowchart of a detection method for face images according to a first embodiment of the present invention;

FIG. 2 is a flowchart of a detection method for face images according to a second embodiment of the present invention;

FIG. 3 is a schematic diagram of the generation of the target positions of the key points in a face image according to the second embodiment of the present invention;

FIG. 4 is a structural diagram of a detection device for face images according to a third embodiment of the present invention.

DETAILED DESCRIPTION

The present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire content. It should further be noted that the following embodiments show examples related to the present invention; these examples serve only to illustrate the principles of the embodiments and do not limit them, and their specific numerical values will vary with the application environment and with the parameters of the devices or components used.

The detection method and device for face images of the embodiments of the present invention can run on a terminal installed with an operating system such as Windows (an operating system platform developed by Microsoft Corporation), Android (an operating system platform developed by Google Inc. for portable mobile smart devices), iOS (an operating system platform developed by Apple Inc. for portable mobile smart devices), or Windows Phone (an operating system platform developed by Microsoft Corporation for portable mobile smart devices); the terminal may be any of a desktop computer, a laptop, a mobile phone, a palmtop computer, a tablet, a digital camera, a digital video camera, and the like.
Embodiment 1

FIG. 1 is a flowchart of a detection method for face images according to a first embodiment of the present invention. The method uses a visual feedback mechanism to overcome the ambiguity of action instructions, reduces the probability that the face is attacked with photos and videos during face detection, and achieves a low false acceptance rate and a low false rejection rate. The method may be performed by a device having a face image detection function, which may be implemented in software and/or hardware, typically a user terminal device such as a mobile phone or a computer. The detection method for face images in this embodiment includes: step S110, step S120, step S130, and step S140.

Step S110: acquire a first face image at the current moment.

Specifically, the first face image at the current moment is captured by a camera, and the algorithm for recognizing the face image can be customized as needed; optionally, when capturing the first face image, the front camera can be turned on automatically and the captured face image displayed.

Step S120: acquire a first current position of a key point in the first face image.

Specifically, the key points in the first face image are obtained from the first face image; a key-point localization algorithm can precisely locate the key points and supports a degree of occlusion and multi-angle positioning. Each part of the first face image corresponds to one type of key point. Recognition algorithms based on key points in a face image may include, but are not limited to, sample-based face shape learning, a local-texture-constrained active appearance model, and feature localization based on the AdaBoost learning strategy.

Optionally, the key points include key points of the facial-feature contours in the face image and/or key points of the face contour; the key points of a face image include key points of the contours of the eyes, ears, nose, mouth, and eyebrows and/or key points of the face contour.

Step S130: determine a target position of the key point according to the first current position of the key point and a preset standard face target image.

Specifically, the standard face target image may be preset by the user as needed. Optionally, there is at least one preset standard face target image; a preset standard face target image may be, for example, the mouth opened 45 degrees or the eyes opened until the maximum distance between the upper and lower eyelids is 1 cm. The target position of a key point is determined from its first current position and the preset standard face target image; optionally, from the first current position of the mouth key points (the mouth opened 30 degrees), the target positions of the mouth key points are determined in combination with the standard face target image (the mouth opened 45 degrees).

Step S140: display the motion of the key point from the first current position to the corresponding target position.

Specifically, the motion of the key point from the first current position to the corresponding target position is displayed. Optionally, when the first current positions of the mouth key points are their positions with the mouth opened 30 degrees and their target positions are their positions with the mouth opened 45 degrees, the motion from the first current positions to the mouth target positions is displayed.

In this embodiment, the first current position of a key point in the first face image is acquired, the target position of the key point is determined according to the first current position and a preset standard face target image, and the motion of the key point from the first current position to the corresponding target position is displayed. The visual feedback mechanism overcomes the ambiguity of action instructions, reduces the probability that the face is attacked with photos and videos during face detection, and achieves a low false acceptance rate and a low false rejection rate.
Embodiment 2

FIG. 2 is a flowchart of a detection method for face images according to a second embodiment of the present invention. On the basis of the first embodiment, the method further includes: acquiring a second face image at the next moment; acquiring a second current position of the key point in the second face image; determining the distance between the second current position of the key point and the target position; if the distance between the second current position and the target position is less than or equal to a distance threshold, determining that this detection of the face image is legitimate; otherwise, continuing to acquire face images at subsequent moments until a detection of the face image is legitimate. Specifically, the detection method for face images in this embodiment includes: step S210, step S220, step S230, step S240, step S250, step S260, step S270, and step S280.

Step S210: acquire a first face image at the current moment.

Step S220: acquire a first current position of a key point in the first face image.

Step S230: determine a target position of the key point according to the first current position of the key point and a preset standard face target image.

Step S240: display the motion of the key point from the first current position to the corresponding target position.

Step S250: acquire a second face image at the next moment.

Specifically, during the motion of the key point from the first current position to the corresponding target position, the second face image at the next moment is acquired. The second face image is captured by the camera, and the algorithm for recognizing the face image can be customized as needed; optionally, when capturing the second face image, the front camera can be turned on automatically and the captured face image displayed.

Step S260: acquire a second current position of the key point in the second face image.

Specifically, during the motion of the key point from the first current position to the corresponding target position, the second current position of the key point in the second face image is acquired. Optionally, the key points in the second face image include key points of the facial-feature contours in the face image and/or key points of the face contour; the key points of a face image include key points of the contours of the eyes, ears, nose, mouth, and eyebrows and/or key points of the face contour.

Step S270: determine the distance between the second current position of the key point and the target position.

Specifically, the distance between the second current position of the key point and the target position is determined. Optionally, when the second current positions of the mouth key points are their positions with the mouth opened 30 degrees and the target positions are their positions with the mouth opened 45 degrees, the distance of each mouth key point from its 30-degree position to its 45-degree position is computed.

For example, FIG. 3 is a schematic diagram of the generation of the target positions of the key points in a face image. The key-point positions represented by contour 310 and contour 320 are the target key-point positions, and the key-point positions represented by contour 330 and contour 340 are the second current positions of the key points. To make clear what action the user should perform next, the system generates suitable key-point target positions and, according to these target positions, visually draws the target face contour on the screen, intuitively prompting the user to perform the next facial action. To generate suitable target positions, the system refers to the key-point detection result of the previous step, because the positions of these key points reflect the user's facial features and current facial state, from which it can be determined which key-point positions are better suited as the next action target. Optionally, as shown in FIG. 3, when the user's mouth is currently closed (the positions formed by the key points of contour 330 and contour 340), the next step may ask the user to open the mouth; the extent and shape of the opening can be determined from the width of the user's mouth and other features, and the target shape is drawn (the positions formed by the key points of contour 310 and contour 320) to tell the user intuitively and clearly what to do next.
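The target-generation step in this embodiment can be sketched as scaling the detected mouth opening about its midpoint, so the prompted pose adapts to the user's own geometry. The two-point lip model, the key-point indices, and the 1.5x opening factor are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def make_mouth_target(keypoints, upper_lip=0, lower_lip=1, factor=1.5):
    """Build target key points that ask the user to open the mouth wider:
    the two lip points are moved apart about their midpoint by `factor`,
    so the prompted opening scales with the user's current mouth state."""
    target = np.array(keypoints, dtype=float)
    mid = (target[upper_lip] + target[lower_lip]) / 2.0
    target[upper_lip] = mid + (target[upper_lip] - mid) * factor
    target[lower_lip] = mid + (target[lower_lip] - mid) * factor
    return target

pts = np.array([[50.0, 40.0], [50.0, 60.0]])  # upper lip, lower lip (pixels)
tgt = make_mouth_target(pts)                  # gap widens from 20 px to 30 px
```

The returned target positions would then be drawn as the prompt contour (like contours 310 and 320 in FIG. 3) for the user to imitate.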
Preferably, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.

Specifically, the number of key points is N, the current position of the i-th key point is x_i, and its target position is y_i; the distance between the second current positions of the key points and the target positions is ∑_{i=1}^{N} ||x_i - y_i||. Optionally, when there are 50 key points, the distance is the sum of the Euclidean distances between the position of each of the 50 key points and its corresponding target position, i.e. ∑_{i=1}^{50} ||x_i - y_i||.

Step S280: if the distance between the second current position and the target position is less than or equal to the distance threshold, determine that this detection of the face image is legitimate; otherwise, continue acquiring face images at subsequent moments until a detection of the face image is legitimate.

Specifically, a distance threshold L is set. When the computed distance ∑_{i=1}^{N} ||x_i - y_i|| between the second current positions of the key points and the target positions is less than or equal to the distance threshold L, this detection of the face image is determined to be legitimate; when the computed distance is greater than the distance threshold L, the detection is not legitimate, and face images at subsequent moments continue to be acquired until a detection of the face image is legitimate.

Preferably, the detection method for face images further includes: if the face image is detected as legitimate a preset number of consecutive times, determining that the current face image detection has ended.

Specifically, the number of times the face image detection is legitimate is recorded, and a consecutive preset count C is set; if the number of consecutive legitimate detections reaches C, the current face image detection is determined to have ended.

In this embodiment, by acquiring the second positions of the key points in the second face image, the distance between the second current positions and the target positions is determined and compared with the distance threshold to judge whether this detection of the face image is legitimate; if not, face images at subsequent moments continue to be acquired until a detection is legitimate. This realizes the legitimacy judgment in face image detection.
Embodiment 3

FIG. 4 is a structural diagram of a detection device for face images according to a third embodiment of the present invention. The device is suited to performing the detection methods for face images provided in the first and second embodiments of the present invention, and specifically includes: a first image acquiring unit 410, a first position acquiring unit 420, a target position determining unit 430, and an operation unit 440.

The first image acquiring unit 410 is configured to acquire a first face image at the current moment.

The first position acquiring unit 420 is connected to the first image acquiring unit 410 and configured to acquire a first current position of a key point in the first face image.

The target position determining unit 430 is connected to the first position acquiring unit 420 and configured to determine a target position of the key point according to the first current position of the key point and a preset standard face target image.

The operation unit 440 is connected to the target position determining unit 430 and configured to display the motion of the key point from the first current position to the corresponding target position.

Further, the device includes: a second image acquiring unit, a second position acquiring unit, a distance determining unit, and a detecting unit.

The second image acquiring unit is connected to the first image acquiring unit 410 and configured to acquire a second face image at the next moment.

The second position acquiring unit is connected to the second image acquiring unit and configured to acquire a second current position of the key point in the second face image.

The distance determining unit is connected to the second position acquiring unit and configured to determine the distance between the second current position of the key point and the target position.

The detecting unit is connected to the distance determining unit and configured to determine that this detection of the face image is legitimate if the distance between the second current position and the target position is less than or equal to the distance threshold, and otherwise to continue acquiring face images at subsequent moments until a detection of the face image is legitimate.

Further, the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.

Further, the device includes a determining unit. The determining unit is connected to the detecting unit and configured to determine that the current face image detection has ended if the face image is detected as legitimate a preset number of consecutive times.

Further, the key points include key points of the facial-feature contours in the face image and/or key points of the face contour.

In this embodiment, the first current position of a key point in the first face image is acquired, the target position of the key point is determined according to the first current position and a preset standard face target image, and the motion of the key point from the first current position to the corresponding target position is displayed. The visual feedback mechanism overcomes the ambiguity of action instructions, reduces the probability that the face is attacked with photos and videos during face detection, and achieves a low false acceptance rate and a low false rejection rate.

It will be apparent to those skilled in the art that the above products can perform the method provided by any embodiment of the present invention and have the corresponding functional modules and beneficial effects.

Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

  1. A detection method for face images, comprising:
    acquiring a first face image at the current moment;
    acquiring a first current position of a key point in the first face image;
    determining a target position of the key point according to the first current position of the key point and a preset standard face target image; and
    displaying the motion of the key point from the first current position to the corresponding target position.
  2. The detection method for face images according to claim 1, further comprising:
    acquiring a second face image at the next moment;
    acquiring a second current position of the key point in the second face image;
    determining the distance between the second current position of the key point and the target position; and
    if the distance between the second current position and the target position is less than or equal to a distance threshold, determining that this detection of the face image is legitimate; otherwise, continuing to acquire face images at subsequent moments until a detection of the face image is legitimate.
  3. The detection method for face images according to claim 2, wherein the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.
  4. The detection method for face images according to claim 2, further comprising:
    if the face image is detected as legitimate a preset number of consecutive times, determining that the current face image detection has ended.
  5. The detection method for face images according to any one of claims 1 to 4, wherein the key points comprise key points of the facial-feature contours in the face image and/or key points of the face contour.
  6. A detection device for face images, comprising:
    a first image acquiring unit configured to acquire a first face image at the current moment;
    a first position acquiring unit, connected to the first image acquiring unit and configured to acquire a first current position of a key point in the first face image;
    a target position determining unit, connected to the first position acquiring unit and configured to determine a target position of the key point according to the first current position of the key point and a preset standard face target image; and
    an operation unit, connected to the target position determining unit and configured to display the motion of the key point from the first current position to the corresponding target position.
  7. The detection device for face images according to claim 6, further comprising:
    a second image acquiring unit, connected to the first image acquiring unit and configured to acquire a second face image at the next moment;
    a second position acquiring unit, connected to the second image acquiring unit and configured to acquire a second current position of the key point in the second face image;
    a distance determining unit, connected to the second position acquiring unit and configured to determine the distance between the second current position of the key point and the target position; and
    a detecting unit, connected to the distance determining unit and configured to determine that this detection of the face image is legitimate if the distance between the second current position and the target position is less than or equal to a distance threshold, and otherwise to continue acquiring face images at subsequent moments until a detection of the face image is legitimate.
  8. The detection device for face images according to claim 7, wherein the distance between the second current position of the key point and the target position is determined by ∑_{i=1}^{N} ||x_i - y_i||, where N is the number of key points, x_i is the current position of the i-th key point, and y_i is its target position.
  9. The detection device for face images according to claim 7, further comprising:
    a determining unit, connected to the detecting unit and configured to determine that the current face image detection has ended if the face image is detected as legitimate a preset number of consecutive times.
  10. The detection device for face images according to any one of claims 6 to 9, wherein the key points comprise key points of the facial-feature contours in the face image and/or key points of the face contour.
PCT/CN2017/103289 2016-12-06 2017-09-25 Detection method and device for face images WO2018103416A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611109652.4 2016-12-06
CN201611109652.4A CN106778574A (zh) 2016-12-06 2016-12-06 Detection method and device for face images

Publications (1)

Publication Number Publication Date
WO2018103416A1 true WO2018103416A1 (zh) 2018-06-14

Family

ID=58878253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103289 WO2018103416A1 (zh) 2016-12-06 2017-09-25 Detection method and device for face images

Country Status (2)

Country Link
CN (1) CN106778574A (zh)
WO (1) WO2018103416A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778574A (zh) * 2016-12-06 2017-05-31 广州视源电子科技股份有限公司 Detection method and apparatus for a face image
CN108537278B (zh) * 2018-04-10 2019-07-16 中国人民解放军火箭军工程大学 Multi-source information fusion method and system for determining the position of a single target
CN112287909B (zh) * 2020-12-24 2021-09-07 四川新网银行股份有限公司 Double-random liveness detection method with randomly generated detection points and interactive elements

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030198368A1 (en) * 2002-04-23 2003-10-23 Samsung Electronics Co., Ltd. Method for verifying users and updating database, and face verification system using the same
CN105260726A (zh) * 2015-11-11 2016-01-20 杭州海量信息技术有限公司 Interactive video liveness detection method based on face pose control, and system thereof
CN106778574A (zh) * 2016-12-06 2017-05-31 广州视源电子科技股份有限公司 Detection method and apparatus for a face image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4309926B2 (ja) * 2007-03-13 2009-08-05 アイシン精機株式会社 Facial feature point detection apparatus, facial feature point detection method, and program
CN103631370B (zh) * 2012-08-28 2019-01-25 腾讯科技(深圳)有限公司 Method and apparatus for controlling a virtual avatar
CN103679159B (zh) * 2013-12-31 2017-10-17 海信集团有限公司 Face recognition method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610127A (zh) * 2019-08-01 2019-12-24 平安科技(深圳)有限公司 Face recognition method and apparatus, storage medium and electronic device
CN110610127B (zh) * 2019-08-01 2023-10-27 平安科技(深圳)有限公司 Face recognition method and apparatus, storage medium and electronic device
CN111063011A (zh) * 2019-12-16 2020-04-24 北京蜜莱坞网络科技有限公司 Face image processing method, apparatus, device and medium
CN111063011B (zh) * 2019-12-16 2023-06-23 北京蜜莱坞网络科技有限公司 Face image processing method, apparatus, device and medium
CN111709288A (zh) * 2020-05-15 2020-09-25 北京百度网讯科技有限公司 Face key point detection method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN106778574A (zh) 2017-05-31

Similar Documents

Publication Publication Date Title
WO2018103416A1 (zh) Detection method and apparatus for a face image
US10339402B2 (en) Method and apparatus for liveness detection
US11341769B2 (en) Face pose analysis method, electronic device, and storage medium
US10990803B2 (en) Key point positioning method, terminal, and computer storage medium
CN107066983B (zh) Identity verification method and apparatus
TWI751161B (zh) Terminal device, smartphone, and face-recognition-based authentication method and system
TWI714225B (zh) Gaze point determination method and apparatus, electronic device, and computer storage medium
Hassner et al. Effective face frontalization in unconstrained images
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
Rudovic et al. Coupled Gaussian processes for pose-invariant facial expression recognition
JP2018160237A (ja) Face authentication method and apparatus
JP2018165980A (ja) Face authentication method and apparatus
US11790494B2 (en) Facial verification method and apparatus based on three-dimensional (3D) image
WO2020140723A1 (zh) Method, apparatus, device and storage medium for detecting dynamic facial expressions
WO2017000218A1 (zh) Liveness detection method and device, and computer program product
WO2020124993A1 (zh) Liveness detection method and apparatus, electronic device, and storage medium
WO2017092573A1 (zh) Eye-tracking-based liveness detection method, apparatus and system
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
WO2020124994A1 (zh) Liveness detection method and apparatus, electronic device, and storage medium
JP2011186576A (ja) Motion recognition device
WO2019000817A1 (zh) Gesture recognition control method and electronic device
WO2023168957A1 (zh) Pose determination method and apparatus, electronic device, storage medium and program
WO2020164284A1 (zh) Plane-detection-based liveness recognition method, apparatus, terminal and storage medium
WO2017000217A1 (zh) Liveness detection method and device, and computer program product
Krisandria et al. Hog-based hand gesture recognition using Kinect

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17878120

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 11.10.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17878120

Country of ref document: EP

Kind code of ref document: A1