WO2017000213A1 - Living body detection method and device, and computer program product - Google Patents

Living body detection method and device, and computer program product

Info

Publication number
WO2017000213A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
objects
virtual object
living body
display
Application number
PCT/CN2015/082815
Other languages
English (en)
French (fr)
Inventor
曹志敏
陈可卿
贾开
Original Assignee
北京旷视科技有限公司
北京小孔科技有限公司
Application filed by 北京旷视科技有限公司 and 北京小孔科技有限公司
Priority to US15/738,500 (published as US20180211096A1)
Priority to PCT/CN2015/082815 (published as WO2017000213A1)
Priority to CN201580000356.8A (published as CN105518582B)
Publication of WO2017000213A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/174 Facial expression recognition
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the present disclosure relates to the field of face recognition technology, and more particularly to a living body detection method and apparatus, and a computer program product.
  • Face recognition systems are increasingly used in scenarios requiring online authentication in the security, finance, and social security fields, such as online bank account opening, online transaction verification, unattended access control systems, online social security, and online medical insurance.
  • In a face recognition system, in addition to verifying that the verifier's face matches a face stored in the database, it is first necessary to verify that the verifier is a legitimate living organism. That is to say, the face recognition system needs to be able to prevent an attacker from using a photo, a video, a 3D face model, or a mask to attack.
  • Embodiments of the present disclosure provide a living body detecting method and apparatus, and a computer program product capable of controlling a virtual object display based on a face motion, and determining that the living body detection is successful if the virtual object display satisfies a predetermined condition.
  • According to an aspect of the present disclosure, a living body detecting method includes: detecting a face motion from a captured image; controlling display of a virtual object on a display screen according to the detected face motion; and determining that the face in the captured image is a living human face when the virtual object satisfies a predetermined condition.
  • According to another aspect of the present disclosure, a living body detecting apparatus includes: a face motion detecting device configured to detect a face motion from a captured image; a virtual object control device configured to control display of a virtual object on a display device according to the detected face motion; and a living body determining device configured to determine that the face in the captured image is a living face if the virtual object satisfies a predetermined condition.
  • According to another aspect of the present disclosure, a living body detecting apparatus includes: one or more processors; one or more memories; and computer program instructions stored in the memories which, when run by the processors, perform the following steps: detecting a face motion from the captured image; controlling display of the virtual object on the display device according to the detected face motion; and determining that the face in the captured image is a living face if the virtual object satisfies a predetermined condition.
  • According to yet another aspect of the present disclosure, a computer program product includes one or more computer readable storage media having stored thereon computer program instructions which, when run by a computer, perform the following steps: detecting a face motion from the captured image; controlling display of the virtual object on the display device according to the detected face motion; and determining that the face in the captured image is a living face if the virtual object satisfies a predetermined condition.
  • With the living body detecting method and apparatus and the computer program product of the embodiments of the present disclosure, by controlling the display of the virtual object based on the face motion and performing living body detection according to that display, attacks using photos, videos, 3D face models, or masks can be effectively prevented without relying on special hardware devices, thereby reducing the cost of living body detection. Further, by identifying a plurality of action attributes in the face action, a plurality of state parameters of the virtual object can be controlled, and the virtual object can be caused to change its display state in multiple aspects, for example, to perform a complex predetermined action or to achieve a display effect greatly different from the initial display effect. Therefore, the accuracy of living body detection can be further improved, and the security of application scenarios using the living body detecting method and apparatus and the computer program product according to embodiments of the present disclosure can be improved.
  • FIG. 1 is a schematic block diagram of an electronic device for implementing a living body detecting method and apparatus of an embodiment of the present disclosure
  • FIG. 2 is a schematic flow chart of a living body detecting method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of a face motion detecting step in a living body detecting method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of a virtual object display control step in a living body detecting method according to an embodiment of the present disclosure
  • FIG. 5 is another schematic flowchart of a living body detecting method according to an embodiment of the present disclosure.
  • 6A-6D and 7A-7B are examples of virtual objects displayed on a display screen in accordance with a first embodiment of the present disclosure
  • 8A and 8B are examples of virtual objects displayed on a display screen according to a second embodiment of the present disclosure.
  • 9A-9E are examples of virtual objects displayed on a display screen according to a third embodiment of the present disclosure.
  • FIGS. 10A-10C are examples of virtual objects displayed on a display screen in accordance with a fourth embodiment of the present disclosure.
  • FIG. 11 is a schematic block diagram of a living body detecting apparatus according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic block diagram of another living body detecting apparatus according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic block diagram of a face motion detecting device in a living body detecting apparatus according to an embodiment of the present disclosure
  • FIG. 14 is a schematic block diagram of a virtual object control device in a living body detecting device according to an embodiment of the present disclosure.
  • As shown in FIG. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an output device 108, and an image acquisition device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 illustrated in FIG. 1 are merely exemplary and not limiting, and the electronic device 100 may have other components and structures as needed.
  • the processor 102 can be a central processing unit (CPU) or other form of processing unit with data processing capabilities and/or instruction execution capabilities, and can control other components in the electronic device 100 to perform desired functions.
  • the storage device 104 can include one or more computer program products, which can include various forms of computer readable storage media, such as volatile memory and/or nonvolatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and/or a cache or the like.
  • the nonvolatile memory may include, for example, a read only memory (ROM), a hard disk, a flash memory, or the like.
  • One or more computer program instructions can be stored on the computer readable storage medium, and the processor 102 can execute the program instructions to implement the functions of the embodiments of the present disclosure described below and/or other desired functions.
  • Various applications and various data may also be stored in the computer readable storage medium, such as image data collected by the image capture device 110, and the like, and various data used and/or generated by the application.
  • the output device 108 may output various information (eg, images or sounds) to the outside (eg, a user), and may include one or more of a display, a speaker, and the like.
  • the image capture device 110 may take an image of a predetermined viewing range (eg, photos, videos, etc.) and store the captured images in the storage device 104 for use by other components.
  • The exemplary electronic device 100 for implementing the living body detecting method and apparatus of the embodiments of the present disclosure may be an electronic device integrated with a face image collecting device and disposed at the face image collecting end, such as a smartphone, a tablet, or a personal computer.
  • For example, the electronic device 100 can be deployed at the image acquisition end of an access control system, for example as a face recognition based identification device; in the field of financial applications, it can be deployed at a personal terminal, such as a smartphone, a tablet, or a personal computer.
  • Alternatively, the output device 108 and the image capture device 110 of the exemplary electronic device 100 for implementing the living body detecting method and apparatus of the embodiments of the present disclosure may be deployed at the face image collecting end, while the processor 102 in the electronic device 100 may be deployed on the server side (or in the cloud).
  • a face motion is detected from the captured image.
  • Specifically, an image capture device captures a grayscale or color image of a predetermined shooting range as the captured image, which may be a photo or a frame of a video.
  • the image capture device may be a camera of a smart phone, a camera of a tablet, a camera of a personal computer, or even a webcam.
  • The face motion detection in step S210 is described below with reference to FIG. 3.
  • a face key point is located in the captured image.
  • Specifically, it may first be determined whether a face is included in the captured image, and the face key points are located if a face is detected.
  • The key points of the face are some key points on the face, such as the eyes, the corners of the eyes, the centers of the eyes, the eyebrows, the highest points of the cheekbones, the nose, the tip of the nose, the wings of the nose, the mouth, the corners of the mouth, and the contour points of the face.
  • the series of face key points may include, but is not limited to, at least a portion of the above-described face key points.
  • For example, a face key point model may be established in advance using machine learning algorithms such as deep learning or local feature-based regression; in step S310, face detection and face key point positioning may then be performed in the captured image based on the established face key point model.
  • the position of the face key point can be iteratively optimized in the captured image, and finally the coordinate position of each face key point is obtained.
  • For example, a method based on cascaded regression can be used to locate the face key points in the captured image.
  • the positioning of face key points plays an important role in face motion recognition, however, it should be understood that the present disclosure is not limited by the specific face key point positioning method.
  • the face key point positioning in step S310 can be performed using an existing face detection and face key point localization algorithm.
  • The living body detecting method of the embodiments of the present disclosure is not limited to using existing face detection and face key point positioning algorithms for face key point positioning, and should also cover face detection and face key point positioning algorithms developed in the future.
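The patent does not prescribe a particular localization algorithm. The following is a minimal sketch of step S310, assuming dlib's off-the-shelf 68-point landmark predictor (a cascaded-regression model) as a stand-in for the face key point model described above; the model file name is an assumption.

```python
# Hedged sketch of face detection + key point localization (step S310).
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def locate_face_keypoints(image):
    """Return a list of (x, y) key points for the first detected face, or None."""
    faces = detector(image, 1)          # upsample once to help with small faces
    if not faces:
        return None
    shape = predictor(image, faces[0])  # cascaded-regression landmark fit
    return [(p.x, p.y) for p in shape.parts()]
```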
  • image texture information is extracted from the captured image.
  • fine information of a face such as eyeball position information, mouth type information, micro-expression information, and the like, may be extracted according to pixel information in the captured image, such as brightness information of a pixel.
  • the image texture information extraction in step S320 can be performed using an existing image texture information extraction algorithm. It should be understood that the living body detecting method 100 of the embodiment of the present disclosure is not limited to performing image texture information extraction using an existing image texture information extraction algorithm, and should cover image texture information extraction using a future developed image texture information extraction algorithm.
  • Either of steps S310 and S320 may be performed alone, or both may be performed. In the case where both steps S310 and S320 are performed, they may be executed simultaneously or sequentially.
  • a value of the face action attribute is obtained based on the located face key point and/or the image texture information.
  • the facial motion attribute obtained based on the located face key points may include, for example, but is not limited to, degree of eye closure, degree of mouth opening, degree of face pitch, degree of face deflection, distance of face from camera, and the like.
  • The face action attributes obtained based on the image texture information may include, but are not limited to, a degree of left-right eyeball deflection, a degree of up-down eyeball deflection, and the like.
  • The value of the face action attribute may be obtained based on the current captured image and the captured image immediately preceding it; or based on the first captured image and the current captured image; or based on the current captured image and several captured images preceding it.
  • the value of the face action attribute may be obtained based on the located face key points by means of geometric learning, machine learning, or image processing.
  • For example, multiple key points can be defined for the eyes, for example 8 to 20 key points, such as the inner corner, the outer corner, the center of the upper eyelid, and the center of the lower eyelid of the left eye, and the inner corner, the outer corner, the center of the upper eyelid, and the center of the lower eyelid of the right eye.
  • Then, the distance between the centers of the upper and lower eyelids and the distance between the inner and outer eye corners can be computed, the ratio of the two is taken as a first distance ratio X, and the degree of eye closure Y is determined based on this first distance ratio.
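A hedged sketch of this eye-closure attribute follows: the eyelid distance over the eye-corner distance is used as the distance ratio X, and a simple normalization gives the degree of closure Y. The key point choice, the open-eye ratio, and the 0-to-1 mapping are illustrative assumptions, not values from the patent.

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_closure_degree(inner_corner, outer_corner, upper_lid_center, lower_lid_center,
                       open_ratio=0.30):
    """Return Y in [0, 1]; 0 = fully open, 1 = fully closed (assumed mapping)."""
    x = distance(upper_lid_center, lower_lid_center) / distance(inner_corner, outer_corner)
    y = 1.0 - min(x / open_ratio, 1.0)   # open_ratio: assumed ratio for a fully open eye
    return y
```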
  • In step S220, display of the virtual object on the display screen is controlled according to the detected face motion.
  • the state of the virtual object displayed on the display screen may be changed according to the detected face motion control.
  • the virtual object may include a first set of objects that have been displayed on the display screen in an initial state and may include one or more objects.
  • the display of at least one of the first set of objects on the display screen is updated in accordance with the detected face motion.
  • the initial display position and/or initial display form of at least a portion of the first set of objects is predetermined or randomly determined. Specifically, for example, the motion state, display position, size, shape, color, and the like of the virtual object can be changed.
  • a new virtual object may be displayed on the display screen according to the detected face motion control.
  • the virtual object may further include a second group of objects that are not yet displayed on the display screen and may include one or more objects in an initial state.
  • at least one of the second set of objects is displayed in accordance with the detected face motion.
  • An initial display position and/or an initial display form of at least a portion of the at least one object of the second set of objects is predetermined or randomly determined.
  • The operation of step S220 will be described below with reference to FIG. 4.
  • In step S410, the value of the state parameter of the virtual object is updated according to the value of the face action attribute.
  • a face action attribute can be mapped to a certain state parameter of the virtual object.
  • the user's eye degree of closure or degree of mouth opening may be mapped to the size of the virtual object, and the size of the virtual object may be updated according to the value of the user's degree of eye closure or degree of mouth opening.
  • the user's face pitch degree may be mapped to a vertical display position of the virtual object on the display screen, and the vertical display position of the virtual object on the display screen may be updated according to the value of the user's face pitch degree.
  • For example, the ratio K1 between the degree of mouth opening in the current captured image and the degree of mouth opening in the first captured image may be calculated, and the ratio K1 may be mapped to the size S of the virtual object.
  • For another example, the degree K2 by which the face position in the current captured image deviates from the initial center position can be calculated, and K2 can be mapped to the display position W of the virtual object.
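An illustrative sketch of step S410 is given below: face action attribute values are mapped to the virtual object's state parameters. The attribute names, screen dimensions, and the linear mappings are assumptions made only to show the mechanism.

```python
BASE_SIZE = 40                 # assumed base size of the virtual object, in pixels
SCREEN_W, SCREEN_H = 640, 480  # assumed display resolution

def update_object_state(state, attrs):
    """Update the virtual object's state dict from a face-action attribute dict."""
    # K1: mouth opening relative to the first frame -> object size S
    k1 = attrs["mouth_open_ratio"]            # e.g. 1.0 = same as in the first frame
    state["size"] = BASE_SIZE * k1
    # K2: face deviation from the initial center, normalized to [-1, 1] -> position W
    k2_x, k2_y = attrs["face_offset"]
    state["x"] = SCREEN_W / 2 + k2_x * SCREEN_W / 2
    state["y"] = SCREEN_H / 2 + k2_y * SCREEN_H / 2
    return state
```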
  • the face action attribute may include at least one action attribute
  • the state parameter includes at least one state parameter.
  • An action attribute may correspond to only one state parameter, or an action attribute may correspond to a plurality of state parameters in chronological order.
  • the mapping relationship between the face action attribute and the state parameter of the virtual object may be preset, or may be randomly determined when starting the living body detection method according to an embodiment of the present disclosure.
  • the living body detecting method according to an embodiment of the present disclosure may further include prompting a user with a mapping relationship between the face action attribute and a state parameter of the virtual object.
  • In step S420, the virtual object is displayed on the display screen according to the updated value of the state parameter of the virtual object.
  • The virtual object may include a first group of objects; the first group of objects is displayed on the display screen when the living body detecting method according to an embodiment of the present disclosure starts to be executed, and the display of at least one of the first group of objects may be updated according to a first group of face action attributes.
  • The virtual object may further include a second group of objects that are not displayed on the display screen when the living body detecting method according to the embodiment of the present disclosure starts to be executed; whether to display at least one of the second group of objects may be controlled according to a second group of face action attributes different from the first group of face action attributes, or according to a display condition of the first group of objects.
  • The state parameters of at least one of the first group of objects may include a display position, a size, a shape, a color, a motion state, and the like, which are changed according to the values of the first group of face action attributes.
  • The state parameters of each of the at least one object of the second group of objects may include at least a visible state, and may further include a display position, a size, a shape, a color, a motion state, and the like. Whether to display at least one of the second group of objects, that is, whether that object is in a visible state, is controlled according to the values of the second group of face action attributes or the display condition of at least one of the first group of objects; the motion state, display position, size, shape, color, and the like of at least one of the second group of objects may further be changed according to the values of the second group of face action attributes and/or the values of the first group of face action attributes.
  • In step S230, it is determined whether the virtual object satisfies a predetermined condition.
  • the predetermined condition is a condition related to a form and/or motion of the virtual object, wherein the predetermined condition is predetermined or randomly generated.
  • For example, the predetermined condition may be that the form of the virtual object satisfies a form-related condition, where the form of the virtual object may include its size, shape, color, and the like; or that the motion of the virtual object satisfies a motion-related condition, where the motion of the virtual object may include its display position, motion trajectory, motion speed, motion direction, and the like, and the motion-related condition may include a predetermined display position of the virtual object, a predetermined motion trajectory of the virtual object, the display position of the virtual object moving away from a predetermined display position, and the like.
  • Whether the virtual object completes a predetermined task may be determined according to an actual motion trajectory of the virtual object, and the predetermined task may include, for example, moving according to a predetermined motion trajectory, bypassing an obstacle movement, or the like.
  • the predetermined condition may be set as: the first object reaches a target display position, The first object reaches a target display size, the first object reaches a target shape, and/or the first object reaches a target display color, and the like.
  • the first group of objects further includes a second object, and an initial display position and/or an initial display form of at least one of the first object and the second object are predetermined or randomly determined.
  • the first object may be a controlled object
  • the second object may be a background object
  • the second object may be a target object of the first object
  • In this case, the predetermined condition may be set such that the first object overlaps with the target object.
  • the background object may be a target motion trajectory of the first object, the target motion trajectory may be randomly generated, and the predetermined condition may be set as: an actual motion trajectory of the first object Corresponding to the target motion trajectory.
  • the background object may be an obstacle object
  • the obstacle object may be randomly displayed, and its display position and display time are random
  • In this case, the predetermined condition may be set as follows: the first object does not meet the obstacle object, i.e., the first object bypasses the obstacle object.
  • The predetermined condition may also be set as: the first and/or third object reaches a corresponding target display position, the first and/or third object reaches a corresponding target display size, the first and/or third object reaches a corresponding target shape, and/or the first and/or third object reaches a corresponding target display color, and the like.
  • In the case where the virtual object satisfies the predetermined condition, it is determined in step S240 that the face in the captured image is a living human face. On the other hand, in the case where the virtual object does not satisfy the predetermined condition, it is determined in step S250 that the face in the captured image is not a living human face.
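A minimal sketch of the condition check and decision (steps S230/S240) follows, for the case where the controlled object A must reach the target object B in both position and size. The state representation and the tolerance values are assumptions; the patent only requires the states to coincide.

```python
def satisfies_predetermined_condition(obj_a, obj_b, pos_tol=5.0, size_tol=2.0):
    """obj_a / obj_b are state dicts with 'x', 'y', 'size' entries."""
    same_position = (abs(obj_a["x"] - obj_b["x"]) <= pos_tol and
                     abs(obj_a["y"] - obj_b["y"]) <= pos_tol)
    same_size = abs(obj_a["size"] - obj_b["size"]) <= size_tol
    return same_position and same_size

def is_live_face(obj_a, obj_b):
    # Step S240: a living human face is determined when the condition is met.
    return satisfies_predetermined_condition(obj_a, obj_b)
```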
  • With the living body detecting method according to an embodiment of the present disclosure, by using various face motion parameters as state control parameters of the virtual object, the virtual object is displayed on the display screen under control of the face motion, and living body detection can be performed according to whether the displayed virtual object satisfies the predetermined condition.
  • FIG. 5 illustrates an exemplary flow chart of another living body detection method 500 in accordance with an embodiment of the present disclosure.
  • a timer is initialized.
  • the timer may be initialized according to user input, or the timer may be automatically initialized when a face is detected in the captured image, or may be automatically initialized when a predetermined action of the face is detected in the captured image. Further, at least a portion of each of the first set of objects is displayed on the display screen after the timer is initialized.
  • In step S520, an image (a first image) of a predetermined shooting range is acquired in real time as the captured image.
  • Specifically, an image capture device captures a grayscale or color image of the predetermined shooting range as the captured image, which may be a photo or a frame of a video.
  • Steps S530-S540 respectively correspond to steps S210-S220 in FIG. 2, and details are not described herein again.
  • In step S550, it is determined whether the virtual object satisfies the predetermined condition within a predetermined timing time, where the predetermined timing time may be set in advance. Specifically, step S550 may include determining whether the timer exceeds the predetermined timing time and whether the virtual object satisfies the predetermined condition. Optionally, a timeout flag may be generated when the timer exceeds the predetermined timing time, and whether the timer exceeds the predetermined timing time may be determined according to the timeout flag in step S550.
  • According to the determination result of step S550, it is determined in step S560 that a living human face is detected, or it is determined in step S570 that no living human face is detected, or the process returns to step S520.
  • the image (second image) of the predetermined shooting range is acquired as a captured image in real time, and steps S530-S550 are next performed.
  • the image acquired first is referred to as a first image
  • the image acquired thereafter is referred to as a second image. It should be understood that the first image and the second image are images within the same viewing range, only the time of acquisition is different.
  • Steps S520-S550 shown in FIG. 5 are repeatedly executed until it is determined in step S550 that the virtual object satisfies the predetermined condition, so that it is determined in step S560 that a living human face is detected, or until the timer exceeds the predetermined timing time, so that it is determined in step S570 that no living human face is detected.
  • Although the determination as to whether the timer exceeds the predetermined timing time is performed in step S550 in FIG. 5, it should be understood that the present disclosure is not limited thereto, and this determination may be performed in any step of the living body detecting method according to an embodiment of the present disclosure.
  • For example, a timeout flag is generated when the timer exceeds the predetermined timing time, and the timeout flag may directly trigger step S560 or S570 of the living body detecting method according to an embodiment of the present disclosure, that is, trigger the determination of whether a living human face is detected.
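A hedged sketch of the timed detection loop of FIG. 5 (steps S520-S570) is given below. The four callables stand in for steps S520-S550 described above and must be supplied by the caller; their names, the initial object state, and the timeout value are assumptions.

```python
import time

def detect_liveness(capture_image, detect_face_action, update_virtual_object,
                    condition_met, timeout_s=10.0):
    start = time.monotonic()                        # timer initialization
    state = {"x": 320, "y": 240, "size": 40}        # assumed initial object state
    while time.monotonic() - start < timeout_s:     # predetermined timing time
        image = capture_image()                     # S520: acquire a new frame
        attrs = detect_face_action(image)           # S530: face action attributes
        state = update_virtual_object(state, attrs) # S540: control the virtual object
        if condition_met(state):                    # S550: predetermined condition?
            return True                             # S560: living face detected
    return False                                    # S570: no living face detected
```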
  • In this case, the virtual object includes a first group of objects; the first group of objects is displayed on the display screen when the living body detecting method according to an embodiment of the present disclosure starts to be performed, and the first group of objects includes one or more objects. The display of at least one of the first group of objects on the display screen is updated according to the detected face motion, wherein the at least one object of the first group of objects is a controlled object.
  • the initial display position and/or initial display form of at least a portion of the first set of objects is predetermined or randomly determined.
  • the virtual object is a first object
  • the face action attribute includes a first action attribute
  • The state parameters of the first object include a first state parameter of the first object; the value of the first state parameter of the first object is updated according to the value of the first action attribute, and the first object is displayed on the display screen according to the updated value of the first state parameter of the first object.
  • the face action attribute further includes a second action attribute
  • the state parameter of the first object further includes a second state parameter of the first object
  • The value of the second state parameter of the first object is updated according to the value of the second action attribute, and the first object is displayed on the display screen in accordance with the updated values of the first and second state parameters of the first object.
  • the predetermined condition may be that the first object reaches a target display position and/or a target display form, and the target display form may include a target size, a target color, a target shape, and the like. At least one of an initial display position of the first object on the display screen and a target display position of the first object may be randomly determined, an initial display form of the first object on the display screen and the first At least one of the object's target display modalities may be randomly determined.
  • the target display position and/or the target display form may be presented to the user by means such as text, sound, or the like.
  • the first state parameter of the first object is a display position of the first object, and the display position of the first object is controlled according to a value of the first action attribute, where the first object is In the case where the display position coincides with the target display position, it is determined that the living body detection is successful.
  • the initial display position of the first object is randomly determined, and the target display position of the first object may be an upper left corner, an upper right corner, a lower left corner, a lower right corner, or a central position of the display screen.
  • the target display position may be presented to the user by means such as text, sound, or the like.
  • the first object may be the first object A shown in FIG. 6A.
  • When the timer is initialized, at least a portion of the first object is displayed on the display screen, and the initial display position of at least that portion of the first object is randomly determined.
  • For example, the first object may be a virtual face, the displayed portion and the display position of the first object are controlled according to the value of the first action attribute, and it is determined that living body detection is successful in the case where the display position of the first object is the same as the target display position.
  • the first object may be the first object A shown in FIG. 6B.
  • the first state parameter of the first object is a size (color or shape) of the first object, and the size (color or shape) of the first object is controlled according to a value of the first action attribute.
  • When the size (color or shape) of the first object is the same as the target size (target color or target shape), it is determined that living body detection is successful.
  • the first object may be the first object A shown in FIG. 6C.
  • the virtual object includes a first object and a second object
  • the face action attribute includes a first action attribute
  • the state parameter of the first object includes a first state of the first object a parameter
  • the state parameter of the second object includes a first state parameter of the second object
  • The value of the first state parameter of the first object is updated according to the value of the first action attribute, and the first object is displayed on the display screen according to the updated value of the first state parameter of the first object.
  • the face action attribute further includes a second action attribute
  • the state parameter of the first object further includes a second state parameter of the first object
  • The state parameters of the second object include a second state parameter of the second object; the value of the second state parameter of the first object is updated according to the value of the second action attribute, and the first object is displayed on the display screen according to the updated values of the first and second state parameters of the first object.
  • the first object is a controlled object
  • the second object is a background object and is a target object of the first object
  • The predetermined condition may be that the first object coincides with the second object, or that the first object reaches a target display position or a target display form, where the target display form may include a target size, a target color, a target shape, and so on.
  • the display position of the second object is a target display position of the first object
  • the display form of the second object is a target display form of the first object.
  • The initial value of a state parameter of at least one of the first object and the second object may be randomly determined. That is, the initial value of at least one of the state parameters of the first object (e.g., at least one of display position, size, color, and shape) may be randomly determined, and/or the initial value of at least one of the state parameters of the second object may be randomly determined.
  • at least one of an initial display position of the first object on the display screen and a display position of the second object may be randomly determined, an initial display form of the first object on the display screen and At least one of the target display modalities of the second object may be randomly determined.
  • the first state parameter of the first object A is a display position of the first object A, and the display position of the first object A is controlled according to the value of the first action attribute, in the first object A In the case where the display position coincides with the target display position (the display position of the second object B), it is determined that the living body detection is successful.
  • In this case, other state parameters of the first object A and the target object B, such as size, color, and shape, are not judged; that is, it does not matter whether the size, color, and shape of the first object A are the same as those of the target object B.
  • An example of the display positions of the first object A and its target object B is shown in FIG. 6B.
  • The initial display position of at least a portion of the first object A is randomly determined.
  • the first object A may be a controlled virtual human face
  • the second object B is a target virtual human face
  • The displayed portion and the display position of the first object A are controlled according to the value of the first action attribute, and it is determined that living body detection is successful in the case where the display position of the first object A is the same as the target display position (the display position of the second object B).
  • Alternatively, the first state parameter of the first object A is the size (color or shape) of the first object A, and the size (color or shape) of the first object A is controlled according to the value of the first action attribute; in the case where the size (color or shape) of the first object A is the same as the target size (target color or target shape), i.e., the size (color or shape) of the second object B, it is determined that living body detection is successful.
  • An example of the display positions and sizes of the first object A and its target object B is shown in FIG. 6D, wherein the first state parameter and the second state parameter of the first object A are the display position and the display size of the first object A, respectively, and the first state parameter and the second state parameter of the second object B are the display position and the display size of the second object B, respectively.
  • The display position and the display size of the first object A are controlled according to the face motion. Specifically, the value of the first state parameter (display position coordinates) of the first object A may be updated according to the value of the first action attribute, and the value of the second state parameter (size value) of the first object A may be updated according to the value of the second action attribute; the first object A is displayed on the display screen according to the values of its first and second state parameters. In the case where the first object A coincides with the second object B, that is, where the display position of the first object A coincides with the display position of the second object B and the display size of the first object A is the same as the display size of the target object B, it is determined that the face in the captured image is a living face.
  • the horizontal position and the vertical position of the first object A and the second object B are different.
  • In this case, the first action attribute may include a first sub-action attribute and a second sub-action attribute, and the first state parameter of the first object A may include a first sub-state parameter and a second sub-state parameter, where the value of the first sub-state parameter is the horizontal position coordinate of the first object A and the value of the second sub-state parameter is the vertical position coordinate of the first object A. The horizontal position coordinate of the first object A on the display screen may be updated according to the value of the first sub-action attribute, and the vertical position coordinate of the first object A on the display screen may be updated according to the value of the second sub-action attribute.
  • the first action attribute may be defined as a position of the face in the captured image, and the display of the first object A on the display screen may be updated according to position coordinates of the face in the captured image. position.
  • the first sub-action attribute may be defined as a horizontal position of the face in the captured image and the second sub-action attribute is defined as a vertical position of the face in the captured image, which may be shot according to the face Horizontal position coordinates in the image to update the horizontal position coordinates of the first object A on the display screen, and update the first object A on the display screen according to the vertical position coordinates of the face in the captured image Vertical position coordinates on.
  • Alternatively, the first sub-action attribute may be defined as the degree of face deflection and the second sub-action attribute may be defined as the degree of face pitch; the horizontal position coordinate of the first object A on the display screen may then be updated according to the value of the degree of face deflection, and the vertical position coordinate of the first object A on the display screen may be updated according to the value of the degree of face pitch.
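The sub-attribute mapping just described can be sketched as follows: face deflection (yaw) drives the horizontal coordinate and face pitch drives the vertical coordinate of object A. The screen size, usable head-pose range, and linear scaling are assumptions for illustration.

```python
SCREEN_W, SCREEN_H = 640, 480
MAX_YAW_DEG, MAX_PITCH_DEG = 30.0, 20.0   # assumed usable head-pose range

def object_position_from_pose(yaw_deg, pitch_deg):
    """Map head pose (degrees) to display coordinates of object A."""
    x = SCREEN_W / 2 + (yaw_deg / MAX_YAW_DEG) * (SCREEN_W / 2)
    y = SCREEN_H / 2 + (pitch_deg / MAX_PITCH_DEG) * (SCREEN_H / 2)
    # clamp to the screen boundaries
    return (min(max(x, 0), SCREEN_W), min(max(y, 0), SCREEN_H))
```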
  • the virtual object includes a first object that is a controlled object, the second object is a background object and is a target motion trajectory of the first object.
  • the face action attribute includes a first action attribute
  • the state parameter of the first object includes a first state parameter of the first object
  • The first state parameter of the first object is the display position of the first object; the value of the first state parameter of the first object is updated according to the value of the first action attribute, the display position of the first object on the display screen is controlled in accordance with the updated value, and the motion trajectory of the first object is thereby controlled.
  • the virtual object may further include a third object, in which case the second object and the third object form a background object together, and the second object is a target motion track of the first object,
  • the third object is a target object of the first object, and the background object includes a target motion trajectory of the first object and a target object.
  • the state parameter of the third object includes a first state parameter of the third object, and the first state parameter of the third object is a display position of the third object.
  • the first object A, the second object (target object) B, and the third object (target motion locus) C are shown in FIGS. 7A and 7B. At least a portion of the initial display position of the first object A, the display position of the target object B, and the target motion trajectory C may be randomly determined.
  • the state parameter of the target object B may include a first state parameter of the target object B, and the first state parameter of the target object B is the target The display position of the object B.
  • In the case where the motion trajectory of the first object A coincides with the target motion trajectory C and the display position of the first object A coincides with the display position of the target object B, it is determined that living body detection is successful.
  • the state parameter of each target object may include a first state parameter of the target object, ie, a display position.
  • the living body detection may be determined to be successful in a case where the motion trajectory of the first object A sequentially coincides with at least a part of the plurality of pieces of target motion trajectories C. Alternatively, the living body detection may be determined to be successful if the first object A sequentially coincides with at least a part of the plurality of target objects.
  • Alternatively, in the case where the motion trajectory of the first object A sequentially coincides with at least a part of the plurality of target motion trajectories C and the first object A sequentially coincides with at least a part of the plurality of target objects B, it is determined that living body detection is successful.
  • the moving direction of the first object A may include a horizontal moving direction and a vertical moving direction.
  • the first action attribute may include a first sub-action attribute and a second sub-action attribute
  • the first state parameter of the first object A may include a first sub-state parameter and a second sub-state parameter
  • the value of the first sub-state parameter is a horizontal position coordinate of the first object A
  • The value of the second sub-state parameter is the vertical position coordinate of the first object A; the horizontal position coordinate of the first object A on the display screen may be updated according to the value of the first sub-action attribute, and the vertical position coordinate of the first object A on the display screen may be updated according to the value of the second sub-action attribute.
  • the face action attribute further includes a second action attribute
  • the state parameter of the first object further includes a second state parameter of the first object
  • the second state parameter of the first object is a display form (eg, size, color, shape, etc.) of the first object
  • the state parameter of the third object includes a second state parameter of the third object
  • the second state parameter of the third object is a display form (eg, size, color, shape, etc.) of the third object
  • Although the target object B is illustrated as an object having a specific shape in FIGS. 6A, 6C, 6D, 7A, and 7B, it should be understood that the present disclosure is not limited thereto, and the target object B may also be represented in other ways.
  • In this case, it is determined in step S550 whether the timer exceeds the predetermined timing time and whether the first object satisfies the predetermined condition, for example, whether the first object reaches the target display position and/or the target display form, whether the first object coincides with the target object and/or is identical to the display form of the target object, and/or whether the first object achieves the target motion trajectory.
  • In the case where it is determined in step S550 that the timer exceeds the predetermined timing time and the first object has not yet satisfied the predetermined condition, it is determined in step S570 that no living face is detected.
  • In the case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object satisfies the predetermined condition, it is determined in step S560 that a living human face is detected.
  • In the case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object does not satisfy the predetermined condition, the process returns to step S520.
  • In this embodiment, the virtual object includes a first group of objects; the first group of objects is displayed on the display screen when the living body detecting method according to an embodiment of the present disclosure starts to be performed, and the first group of objects includes one or more objects. The display of at least one of the first group of objects on the display screen is updated according to the detected face motion, wherein the at least one object of the first group of objects is a controlled object.
  • the initial display position and/or initial display form of at least a portion of the first set of objects is predetermined or randomly determined.
  • The first group of objects includes a first object and a second object; the first object is a controlled object, the second object is a background object, and the background object is an obstacle object.
  • the initial display position and/or initial display form of the first object and the obstacle object are random.
  • the obstacle object may be stationary or may be sporty. In the case where the obstacle object moves, its motion trajectory may be a straight line or a curve, and the obstacle object may move in a vertical direction, move in a horizontal direction, or move in any direction. Optionally, the motion trajectory and the motion direction of the obstacle object are also random.
  • the face action attribute includes a first action attribute
  • the state parameter of the first object includes a first state parameter of the first object
  • the first state parameter of the first object is a display of the first object Position
  • the state parameter of the second object includes a first state parameter of the second object
  • The first state parameter of the second object is the display position of the second object; the value of the first state parameter of the first object is updated according to the value of the first action attribute, and the first object is displayed on the display screen in accordance with the updated value of the first state parameter of the first object.
  • The predetermined condition may be that the first object does not meet the second object, or that the distance between the display position of the first object and the display position of the second object exceeds a predetermined distance, where the predetermined distance may be determined according to the display size of the first object and the display size of the second object.
  • Alternatively, the predetermined condition may be that, within a predetermined time, the first object does not meet the second object, or that the distance between the display position of the first object and the display position of the second object exceeds the predetermined distance.
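An illustrative check of the obstacle condition described above follows: object A "does not meet" obstacle D when their center distance exceeds a distance derived from the two display sizes. Circular objects and a size-based threshold are assumptions.

```python
import math

def avoids_obstacle(obj_a, obstacle_d, margin=0.0):
    """obj_a / obstacle_d: dicts with 'x', 'y', 'size' (diameter in pixels)."""
    dist = math.hypot(obj_a["x"] - obstacle_d["x"], obj_a["y"] - obstacle_d["y"])
    min_distance = (obj_a["size"] + obstacle_d["size"]) / 2 + margin
    return dist > min_distance
```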
  • An example of the positions of the first object A and the obstacle object D is shown in FIG. 8A.
  • The obstacle object D may continuously move on the display screen, and its moving direction may be random. If the first object A does not meet the obstacle object D, it is determined that living body detection is successful. Preferably, if the first object A and the obstacle object D do not meet within the predetermined timing time, it is determined that living body detection is successful. Alternatively, in the case where the first object A and the obstacle object D never meet before the obstacle object D moves out of the display screen, it is determined that living body detection is successful.
  • the first group of objects further includes a third object
  • the first object is a controlled object
  • the second object and the third object constitute a background object
  • the second object is a barrier object
  • the third object is a target object that is randomly displayed or randomly generated.
  • the state parameter of the third object may include a first state parameter of the third object, and the first state parameter of the third object is a display position of the third object.
  • The predetermined condition may be that the first object does not meet the second object and the first object coincides with the third object, or that the distance between the display position of the first object and the display position of the second object exceeds a predetermined distance and the first object coincides with the third object, where the predetermined distance may be determined according to the display size of the first object and the display size of the second object.
  • a first object A, a second object (obstacle object) D, and a third object (target object) B are shown in FIG. 8B.
  • The obstacle object D may continuously move on the display screen, and its moving direction may be random; in the case where the first object A does not meet the obstacle object D and the first object A coincides with the target object B, it is determined that living body detection is successful.
  • Preferably, if, within the predetermined timing time, the first object A does not meet the obstacle object D and the display position of the first object A coincides with the display position of the target object B, it is determined that living body detection is successful.
  • In this case, it is determined in step S550 whether the timer exceeds the predetermined timing time and whether the first object satisfies the predetermined condition.
  • Here, the predetermined condition is that the first object does not meet the obstacle object (FIG. 8A), that the first object coincides with the target object, or that the first object coincides with the target object and does not meet the obstacle object (FIG. 8B).
  • For the condition that the first object does not meet the obstacle object: in the case where it is determined in step S550 that the timer exceeds the predetermined timing time and the first object has not met the obstacle object, it is determined in step S560 that a living human face is detected; in the case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object has not met the obstacle object, the process returns to step S520; on the other hand, in the case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object has met the obstacle object, it is determined in step S570 that no living human face is detected.
  • For the condition that the first object coincides with the target object: in the case where it is determined in step S550 that the timer exceeds the predetermined timing time and the first object has not coincided with the target object, it is determined in step S570 that no living human face is detected; if it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object coincides with the target object, it is determined in step S560 that a living human face is detected; on the other hand, if it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object does not coincide with the target object, the process returns to step S520.
  • For the condition that the first object coincides with the target object and does not meet the obstacle object: in the case where it is determined in step S550 that the timer exceeds the predetermined timing time and the first object has not coincided with the target object, or where the timer does not exceed the predetermined timing time and the first object meets the obstacle object, it is determined in step S570 that no living human face is detected; in the case where it is determined in step S550 that the timer does not exceed the predetermined timing time, the first object coincides with the target object, and the first object has not met the obstacle object, it is determined in step S560 that a living human face is detected; on the other hand, in the case where it is determined in step S550 that the timer does not exceed the predetermined timing time, the first object does not coincide with the target object, and the first object does not meet the obstacle object, the process returns to step S520.
  • the first action attribute may include a first sub-action attribute and a second sub-action attribute
  • the first state parameter of the first object A may include a first sub-state parameter and a second sub-state parameter
  • The value of the first sub-state parameter is the horizontal position coordinate of the first object A, and the value of the second sub-state parameter is the vertical position coordinate of the first object A; the horizontal position coordinate of the first object A on the display screen may be updated according to the value of the first sub-action attribute, and the vertical position coordinate of the first object A on the display screen may be updated according to the value of the second sub-action attribute. A sketch of this mapping is given below.
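Splitting the first action attribute into two sub-attributes simply means driving the x and y screen coordinates from two independent face measurements (for example, deflection and pitch). A minimal sketch, where the attribute ranges and the screen size are assumed values.

```python
def map_range(value, src_min, src_max, dst_min, dst_max):
    """Linearly map value from [src_min, src_max] to [dst_min, dst_max]."""
    value = max(src_min, min(src_max, value))
    ratio = (value - src_min) / (src_max - src_min)
    return dst_min + ratio * (dst_max - dst_min)

def update_position(yaw_deg, pitch_deg, screen_w=720, screen_h=1280):
    """Yaw in [-45, 45] drives x, pitch in [-30, 30] drives y (assumed ranges)."""
    x = map_range(yaw_deg, -45.0, 45.0, 0, screen_w)
    y = map_range(pitch_deg, -30.0, 30.0, 0, screen_h)
    return x, y

print(update_position(0.0, 0.0))     # roughly the screen center
print(update_position(45.0, -30.0))  # right edge, top edge
```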
  • the virtual object includes a first group of objects and a second group of objects
  • the first group of objects are displayed on a display screen when the living body detecting method according to an embodiment of the present disclosure is performed, and the first group of objects includes one or more objects at the start of performing living body detection according to an embodiment of the present disclosure
  • At the start of performing the method, the second group of objects has not yet been displayed on the display screen and includes one or more objects. The display of at least one object of the first group of objects on the display screen is updated according to the detected face action, wherein the at least one object of the first group of objects is a controlled object.
  • an initial display position and/or an initial display form of at least a portion of the first set of objects is predetermined or randomly determined.
  • At least one of the second group of objects is displayed according to a display condition of at least one of the first group of objects.
  • at least one of the second set of objects may be displayed in accordance with the detected facial motion.
  • an initial display position and/or an initial display form of at least a portion of the objects of the second set of objects are predetermined or randomly determined.
  • The first state parameter of each object in the first group of objects is the display position of the object, and the first and second state parameters of each object in the second group of objects are respectively the display position and the visible state of the object.
  • At least one of the second set of objects is displayed based on a display of at least one of the first set of objects.
  • The first group of objects includes a first object and a second object; the first object is a controlled object, the second object is a background object, and each object of the second group of objects is also a background object.
  • the predetermined condition may be that the controlled objects in the first group of objects sequentially coincide with each of the second object and the second group of objects.
  • the first group of objects includes a first object A and a second object B1
  • the second group of objects includes a third object B2 and a fourth object B3, and the first object A is a controlled object.
  • the second object B1, the third object B2, and the fourth object B3 are all background objects, and the background object is a target object.
  • the face action attribute includes a first action attribute
  • the state parameter of the first object A includes a first state parameter of the first object A
  • the state parameter of the second object B1 includes a first state parameter of the second object B1
  • the first state parameter of the third object B2 includes a first state parameter of the third object B2
  • the state parameter of the fourth object B3 includes a first state parameter of the fourth object B3.
  • The value of the second state parameter of the third object B2 in the second group of objects is set to a value representing visible, so as to display the third object B2 of the second group of objects.
  • the value of the first state parameter of the first object A may be updated according to the value of the first action attribute, and the value of the first state parameter of the updated first object A is in the The first object A is displayed on the display screen.
  • the face action attribute may further include a second action attribute different from the first action attribute, and may continue to update the first state parameter of the first object A according to the value of the second action attribute. And displaying the first object A on the display screen according to the updated value of the first state parameter of the first object A.
  • The value of the second state parameter of the fourth object B3 in the second group of objects is set to a value representing visible, so as to display the fourth object B3 of the second group of objects.
  • The value of the first state parameter of the first object A may be updated according to the value of the first or second action attribute, and the first object A is displayed on the display screen according to the updated value of its first state parameter.
  • The face action attribute may further include a third action attribute different from the first and second action attributes; the first state parameter of the first object A may continue to be updated according to the value of the third action attribute, and the first object A is displayed on the display screen according to the updated value of its first state parameter.
  • In the case where the first object A sequentially coincides with the second object B1, the third object B2, and the fourth object B3 within a predetermined time, it is determined that the living body detection is successful.
  • In step S550, it is determined whether the timer exceeds the predetermined timing time and whether the first object A has sequentially coincided with the second object B1, the third object B2, and the fourth object B3.
  • If it is determined in step S550 that the timer exceeds the predetermined timing time and the first object A has not coincided with the second object B1, the third object B2, and the fourth object B3, or has not coincided with the third object B2 and the fourth object B3, or has not coincided with the fourth object B3, it is determined in step S570 that no living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A has sequentially coincided with the second object B1, the third object B2, and the fourth object B3, it is determined in step S560 that a living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A has not yet sequentially coincided with all of the second object B1, the third object B2, and the fourth object B3, the process returns to step S520.
  • Before returning to step S520, the following steps may also be performed: determining whether the fourth object has been displayed; if the fourth object has not yet been displayed, determining whether the third object has been displayed; if the third object has not yet been displayed, determining whether the first object coincides with the second object, displaying the third object when the first object coincides with the second object, and then returning to step S520; if the fourth object has not yet been displayed but the third object has been displayed, determining whether the first object coincides with the third object, displaying the fourth object when the first object coincides with the third object, and then returning to step S520.
  • The number of objects included in the second group of objects may be set as needed; in the case where the first object A sequentially coincides with the second object B1 and with each object of the second group of objects, it is determined that the living body detection is successful. A sketch of this sequential-target logic is given below.
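The "coincide with B1, then B2, then B3" variant amounts to a small state machine that reveals the next target once the current one has been reached. A sketch under assumed names; the hit radius and target coordinates are illustrative.

```python
class SequentialTargets:
    """Tracks whether a controlled object hits a list of targets in order."""

    def __init__(self, targets):
        self.targets = list(targets)   # e.g. [B1, B2, B3] as (x, y) positions
        self.next_index = 0            # only targets[0..next_index] are shown

    def visible_targets(self):
        return self.targets[: self.next_index + 1]

    def update(self, controlled_pos, hit_radius=25.0):
        """Advance when the controlled object is within hit_radius of the target."""
        if self.next_index >= len(self.targets):
            return True
        tx, ty = self.targets[self.next_index]
        cx, cy = controlled_pos
        if (cx - tx) ** 2 + (cy - ty) ** 2 <= hit_radius ** 2:
            self.next_index += 1
        return self.next_index >= len(self.targets)   # True => all hit

seq = SequentialTargets([(100, 100), (300, 200), (500, 400)])
for pos in [(101, 99), (298, 203), (499, 401)]:
    done = seq.update(pos)
print(done)  # True once every target has been reached in order
```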
  • At least one of the second set of objects is displayed according to a display condition of at least one of the first set of objects, and at least some of the second set of objects are controlled objects.
  • the first group of objects includes a first object and a second object
  • the first object is a controlled object
  • the second object is a background object
  • each object of the second group of objects is also a controlled object.
  • the predetermined condition may be that each of the first object and the second group of objects sequentially coincides with the second object.
  • the first group of objects includes a first object A1 and a second object B
  • the second group of objects includes a third object A2 and a fourth object A3, the first object A1, the first The three objects A2 and the fourth object A3 are controlled objects
  • the second object B is a background object.
  • The face action attribute includes a first action attribute; the state parameter of the first object A1 includes a first state parameter of the first object A1, the state parameter of the second object B includes a first state parameter of the second object B, the state parameter of the third object A2 includes a first state parameter of the third object A2, and the state parameter of the fourth object A3 includes a first state parameter of the fourth object A3.
  • the value of the second state parameter of the third object A2 in the second group of objects is set to represent a visible value to display The third object A2 in the second group of objects.
  • The value of the first state parameter of the third object A2 may be updated according to the value of the first action attribute, and the third object A2 is displayed on the display screen according to the updated value of its first state parameter, while the display position of the first object A1 remains unchanged.
  • the face action attribute may further include a second action attribute different from the first action attribute, and may continue to update the first state parameter of the third object A2 according to the value of the second action attribute. And displaying the third object A2 on the display screen according to the updated value of the first state parameter of the third object A2.
  • the value of the second state parameter of the fourth object A3 in the second group of objects is set to represent a visible value to display The fourth object A3 in the second group of objects.
  • The value of the first state parameter of the fourth object A3 may be updated according to the value of the first or second action attribute, and the fourth object A3 is displayed on the display screen according to the updated value of its first state parameter, while the display positions of the first object A1 and the third object A2 remain unchanged.
  • The face action attribute may further include a third action attribute different from the first and second action attributes; the first state parameter of the fourth object A3 may continue to be updated according to the value of the third action attribute, and the fourth object A3 is displayed on the display screen according to the updated value of its first state parameter.
  • In the case where the first object A1, the third object A2, and the fourth object A3 sequentially coincide with the second object B within a predetermined time, it is determined that the living body detection is successful.
  • In step S550, it is determined whether the timer exceeds the predetermined timing time and whether the first object A1, the third object A2, and the fourth object A3 coincide with the second object B in order.
  • If it is determined in step S550 that the timer exceeds the predetermined timing time and the first object A1 does not coincide with the second object B, or the third object A2 does not coincide with the second object B, or the fourth object A3 does not coincide with the second object B, it is determined in step S570 that no living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A1, the third object A2, and the fourth object A3 have sequentially coincided with the second object B, it is determined in step S560 that a living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A1 does not coincide with the second object B, or the third object A2 does not coincide with the second object B, or the fourth object A3 does not coincide with the second object B, the process returns to step S520.
  • Before returning to step S520, the following steps may also be performed: determining whether the fourth object has been displayed; if the fourth object has not yet been displayed, determining whether the third object has been displayed; if the third object has not yet been displayed, determining whether the first object coincides with the second object, displaying the third object when the first object coincides with the second object, and then returning to step S520; if the fourth object has not yet been displayed but the third object has been displayed, determining whether the third object coincides with the second object, displaying the fourth object when the third object coincides with the second object, and then returning to step S520.
  • The number of objects included in the second group of objects may be set as needed; in the case where the first object A1 and each object of the second group of objects sequentially coincide with the second object B, it is determined that the living body detection is successful.
  • At least one of the second set of objects is displayed according to a display condition of at least one of the first set of objects, and at least a part of the second set of objects is a controlled object.
  • The first group of objects includes a first object A1 and a second object B1; the first object A1 is a controlled object and the second object B1 is a background object.
  • the second group of objects includes a third object A2 and a fourth object B2, and a fifth object A3 and a sixth object B3
  • the third object A2 and the fifth object A3 are both controlled objects
  • the fourth object B2 and the sixth object B3 are both background objects.
  • The predetermined condition may be that the first object A1 coincides with the second object B1, the third object A2 coincides with the fourth object B2, and the fifth object A3 coincides with the sixth object B3.
  • the face action attribute includes a first action attribute. First, updating the value of the first state parameter of the first object A1 according to the value of the first action attribute, and according to the updated value of the first state parameter of the first object A1 on the display screen The first object A1 is displayed.
  • the third object A2 and the fourth object B2 of the second group of objects are displayed.
  • The value of the first state parameter of the third object A2 may be updated according to the value of the first action attribute, and the third object A2 is displayed on the display screen according to the updated value of its first state parameter.
  • the face action attribute may further include a second action attribute different from the first action attribute, and may continue to update the first state parameter of the third object A2 according to the value of the second action attribute. And displaying the third object A2 on the display screen according to the updated value of the first state parameter of the third object A2.
  • the fifth object A3 of the second group of objects is displayed.
  • The value of the first state parameter of the fifth object A3 may be updated according to the value of the first or second action attribute, and the fifth object A3 is displayed on the display screen according to the updated value of its first state parameter.
  • The face action attribute may further include a third action attribute different from the first and second action attributes; the first state parameter of the fifth object A3 may continue to be updated according to the value of the third action attribute.
  • In the case where the first object A1, the third object A2, and the fifth object A3 sequentially coincide with the second object B1, the fourth object B2, and the sixth object B3 within a predetermined time, it is determined that the living body detection is successful.
  • In step S550, it is determined whether the timer exceeds the predetermined timing time and whether the first object A1, the third object A2, and the fifth object A3 have sequentially coincided with the second object B1, the fourth object B2, and the sixth object B3.
  • If it is determined in step S550 that the timer exceeds the predetermined timing time and the fifth object A3 does not coincide with the sixth object B3, or the third object A2 does not coincide with the fourth object B2, or the first object A1 does not coincide with the second object B1, it is determined in step S570 that no living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A1, the third object A2, and the fifth object A3 have sequentially coincided with the second object B1, the fourth object B2, and the sixth object B3, it is determined in step S560 that a living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the fifth object A3 does not coincide with the sixth object B3, or the third object A2 does not coincide with the fourth object B2, or the first object A1 does not coincide with the second object B1, the process returns to step S520.
  • step S520 it is also possible to perform the steps of: determining whether the fifth and sixth objects are displayed, in the case where it is determined that the fifth and sixth objects have not been displayed yet Determining whether the third and fourth objects are displayed, determining whether the first object coincides with the second object if it is determined that the third and fourth objects have not been displayed, and determining the first Displaying the third and fourth objects in the case where the object coincides with the second object, and then returning to step S520; determining that the fifth and sixth objects have not been displayed but displaying the third and fourth Determining whether the third object coincides with the fourth object in the case of an object, and displaying the fifth and sixth objects in the case of determining whether the third object coincides with the fourth object, and then The process returns to step S520.
  • The number of object pairs included in the second group of objects may be set as needed, wherein the object A2 and the object B2 may be regarded as one object pair; in the case where each object Ai sequentially coincides with its corresponding object Bi within a predetermined time, it is determined that the living body detection is successful.
  • The first group of objects includes a first object A1 and a second object B; the first object A1 is a controlled object and the second object B is a background object. The second group of objects includes a third object A2, and the second object B is the target object of the first object A1 and the third object A2.
  • The predetermined condition may be that the third object A2 coincides with the second object B, or that the first and third objects A1 and A2 sequentially coincide with the second object B.
  • the value of the state parameter of at least one of the first object A1 and the target object B may be randomly determined.
  • the display position of the first object A1 is randomly determined, and/or the display position of the target object B is randomly determined.
  • The face action attribute includes a first action attribute and a second action attribute; the display position coordinates of the first object are updated according to the value of the first action attribute, and the visible state value of the third object is updated according to the value of the second action attribute. For example, a visible state value of 0 indicates invisible, that is, the third object is not displayed; a visible state value of 1 indicates visible, that is, the third object is displayed.
  • the preset condition may be that the display position of the third object A2 coincides with the display position of the second object B.
  • the preset condition may be that the display positions of the first object A1 and the third object A2 coincide with the display position of the target object B.
  • In the case where the first object A1 is initially displayed and the third object A2 is not displayed, the display position of the first object A1 is changed according to the first action attribute, and the visible state of the third object A2 is changed according to the second action attribute.
  • The display position of the third object A2 is the same as the display position of the first object A1 at the time when the value of the second action attribute changes; in the case where the display position of the third object A2 coincides with the display position of the target object B, it is determined that the living body detection is successful.
  • In other words, it is determined that the living body detection is successful only in the following scenario: the display position of the first object A1 is changed according to the first action attribute so that the first object A1 is moved to the target object B; a change of the second action attribute is then detected while the first object A1 is located at the target object B, and the third object A2 is accordingly displayed at the target object B.
  • the first object A1 is a sight
  • the second object B is a bull's-eye
  • the third object A2 is a bullet.
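In the sight/bullet/bull's-eye example, one attribute (for instance, face deflection and pitch) aims the sight, while a second attribute (for instance, a mouth-open event) "fires", making the bullet visible at the sight's current position. A hedged sketch of that trigger logic; the attribute name, threshold, and hit radius are assumptions.

```python
def step(sight_pos, bullseye_pos, mouth_open_degree,
         bullet, fire_threshold=0.6, hit_radius=20.0):
    """bullet is a dict {'visible': bool, 'pos': (x, y) or None}.

    Returns True when the displayed bullet coincides with the bull's-eye.
    """
    if not bullet["visible"] and mouth_open_degree > fire_threshold:
        bullet["visible"] = True
        bullet["pos"] = sight_pos          # bullet appears where the sight is
    if bullet["visible"]:
        bx, by = bullet["pos"]
        tx, ty = bullseye_pos
        return (bx - tx) ** 2 + (by - ty) ** 2 <= hit_radius ** 2
    return False

bullet = {"visible": False, "pos": None}
print(step((200, 200), (205, 198), 0.2, bullet))  # False: not fired yet
print(step((205, 199), (205, 198), 0.9, bullet))  # True: fired while on target
```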
  • In step S550, it is determined whether the timer exceeds the predetermined timing time and whether the third object A2 coincides with the second object B.
  • If it is determined in step S550 that the timer exceeds the predetermined timing time and the third object A2 has not been displayed, or the third object A2 has been displayed but does not coincide with the second object B, it is determined in step S570 that no living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the third object A2 coincides with the second object B, it is determined in step S560 that a living human face is detected.
  • If it is determined in step S550 that the timer does not exceed the predetermined timing time and the third object A2 has not been displayed, or has been displayed but does not coincide with the second object B, the process returns to step S520.
  • At least one of the second set of objects is displayed according to the detected facial motion, and at least some of the second set of objects are controlled objects.
  • The first group of objects includes a first object A1 and a second object B1; the first object A1 is a controlled object and the second object B1 is a background object. The second group of objects includes a third object A2 and a fourth object B2; the third object A2 is a controlled object and the fourth object B2 is a background object.
  • the predetermined condition may be that the first object A1 and the second object B1 coincide and the third object A2 and the fourth object B2 coincide.
  • the value of the state parameter of at least one of the first object A1, the second object B1, the third object A2, and the fourth object B2 may be randomly determined. For example, the display positions of the first object A1, the second object B1, the third object A2, and the fourth object B2 are randomly determined.
  • the face action attribute includes a first action attribute and a second action attribute. Updating the display position coordinates of the first object A1 according to the value of the first action attribute, and updating the visual state values of the third and fourth objects according to the value of the second action attribute, for example, a visible state A value of 0 indicates invisibility, that is, the third and fourth objects are not displayed; a visual state value of 1 indicates that the third and fourth objects are displayed.
  • the display position coordinates of the third object may also be updated according to the value of the first action attribute.
  • the face action attribute further includes a third action attribute different from the first action attribute, and the display position coordinate of the third object is updated according to the value of the third action attribute.
  • In the case where the first object A1 and the second object B1 are initially displayed but the third object A2 and the fourth object B2 are not displayed, the display position of the first object A1 is changed according to the first action attribute, and the visible states of the third object A2 and the fourth object B2 are changed according to the second action attribute.
  • the display position of the first object A1 determines the initial display position of the third object A2 when the second action attribute value is changed, or the initial display position of the third object A2 may be randomly determined.
  • It is determined that the living body detection is successful only in the following scenario: the display position of the first object A1 is changed according to the first action attribute so that the first object A1 is moved to the second object B1; a change of the second action attribute is then detected while the first object A1 is located at the second object B1, and accordingly the third object A2 is displayed at a display position determined randomly or according to the display position of the second object B1, and the fourth object B2 is displayed at a random position; then the display position of the third object A2 is changed according to the first action attribute or a third action attribute different from the first action attribute, until the third object A2 is moved to the fourth object B2.
  • the first action attribute may include a first sub-action attribute and a second sub-action attribute
  • the first state parameter of the first object A1 may include a first sub-state parameter and a second sub-state parameter.
  • The value of the first sub-state parameter of the first object A1 and the value of the second sub-state parameter are respectively the horizontal position coordinate and the vertical position coordinate of the first object A1; the horizontal position coordinate and the vertical position coordinate of the first object A1 on the display screen may be updated according to the value of the first sub-action attribute and the value of the second sub-action attribute, respectively.
  • The third action attribute may likewise include a third sub-action attribute and a fourth sub-action attribute, and the first state parameter of the third object A2 may include a first sub-state parameter and a second sub-state parameter; the value of the first sub-state parameter of the third object A2 and the value of the second sub-state parameter are respectively the horizontal position coordinate and the vertical position coordinate of the third object A2, which may be updated on the display screen according to the value of the third sub-action attribute and the value of the fourth sub-action attribute, respectively.
  • For example, the first sub-action attribute and the second sub-action attribute may be defined as the degree of face deflection and the degree of face pitch, respectively, and the third sub-action attribute and the fourth sub-action attribute may be defined as the degree of left-right eyeball rotation and the degree of up-down eyeball rotation, respectively.
  • the virtual object includes a first group of objects and a second group of objects, and the first group of objects are displayed on a display screen when starting to perform a living body detecting method according to an embodiment of the present disclosure
  • the first group of objects includes one or more objects, the second group of objects not yet displayed on the display screen and including one when starting to perform the living body detection method according to an embodiment of the present disclosure Or multiple objects.
  • the initial display position and/or initial display form of at least a portion of the first set of objects is predetermined or randomly determined.
  • At least one of the second group of objects is displayed according to a display condition of at least one of the first group of objects.
  • at least one of the second set of objects may be displayed in accordance with the detected facial motion.
  • an initial display position and/or an initial display form of at least a portion of the objects of the second set of objects are predetermined or randomly determined.
  • The first state parameter of each object in the first group of objects is the display position of the object, and the first and second state parameters of each object in the second group of objects are respectively the display position and the visible state of the object.
  • the first group of objects includes a first object and a second object
  • the second group of objects includes a plurality of objects
  • the first object is a controlled object
  • the second object and the The second set of objects are background objects
  • the background objects are obstacle objects
  • the initial display position and/or initial display form of the first object and the obstacle object are random.
  • the obstacle object moves
  • its motion trajectory may be a straight line or a curve
  • the obstacle object may move in a vertical direction, move in a horizontal direction, or move in any direction.
  • the motion trajectory and the motion direction of the obstacle object are also random.
  • the face action attribute includes a first action attribute
  • The state parameter of the first object includes a first state parameter of the first object, and the first state parameter of the first object is the display position of the first object; the value of the first state parameter of the first object is updated according to the value of the first action attribute, and the first object is displayed on the display screen according to the updated value of its first state parameter.
  • the predetermined condition may be that the first object does not meet the obstacle object, or the distance between the display position of the first object and the display position of the second object exceeds a predetermined distance, the predetermined The distance may be determined according to a display size of the first object and a display size of the second object.
  • Alternatively, the predetermined condition may be that the first object does not meet any obstacle object within a predetermined time, that the first object does not meet a predetermined number of obstacle objects, or that the first object does not meet a predetermined number of obstacle objects within a predetermined time. A sketch of such an avoidance count is given below.
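Counting "a predetermined number of obstacle objects avoided" can be tracked with a small counter that advances each time an obstacle leaves the screen without a collision. A sketch; the event representation and the required count are assumptions.

```python
def avoided_enough(events, required=5):
    """events is an iterable of 'collision' or 'obstacle_exited' strings
    observed within the predetermined timing window.

    Returns True if the required number of obstacles was avoided with no
    collision; a single collision fails the check immediately.
    """
    avoided = 0
    for event in events:
        if event == "collision":
            return False
        if event == "obstacle_exited":
            avoided += 1
            if avoided >= required:
                return True
    return False

print(avoided_enough(["obstacle_exited"] * 5))           # True
print(avoided_enough(["obstacle_exited", "collision"]))  # False
```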
  • At least one object of the second group of objects is displayed according to the display condition of at least one object of the first group of objects.
  • The objects in the second group of objects are uncontrolled objects, that is, background objects, and the background objects are obstacle objects.
  • An example of the positions of the first object A and the obstacle object D is shown in FIG. 10A.
  • the obstacle object D may continuously move on the display screen, and the moving direction of the obstacle object D may be random.
  • In the case where the applicable predetermined condition is satisfied within the predetermined timing time, it is determined that the living body detection is successful.
  • the first group of objects further includes a third object, the second object and the third object constitute a background object, and the third object is a target object.
  • the predetermined condition may be that the first object does not meet the obstacle object and the first object coincides with the third object within a predetermined timing time.
  • The first object A, the second object (obstacle object) D, and the third object (target object) B of the first group of objects, and the obstacle objects D1 and D2 of the second group of objects, are shown in FIG. 10B.
  • The obstacle objects may continuously move on the display screen, and their moving directions may be random; in the case where the first object A does not meet any obstacle object and the first object A coincides with the target object B, it is determined that the living body detection is successful.
  • In the case where, within the predetermined timing time, the first object A does not meet any obstacle object and the display position of the first object A coincides with the display position of the target object B, it is determined that the living body detection is successful.
  • In the case where the predetermined condition is that the first object A does not meet a predetermined number of obstacle objects, if it is determined in step S550 that the first object A has not met the currently displayed obstacle object, that obstacle object has moved out of the display screen, and the number of obstacle objects that have been displayed has not yet reached the predetermined number, a new obstacle object is displayed and the process returns to step S520; if it is determined in step S550 that the first object A has not met the currently displayed obstacle object and the currently displayed obstacle object is still displayed on the display screen, the process also returns to step S520.
  • If it is determined in step S550 that the first object A has met an obstacle object, it is determined in step S570 that no living human face is detected; if it is determined in step S550 that the first object A has not met the currently displayed obstacle object, the currently displayed obstacle object has moved out of the display screen, and the number of obstacle objects that have been displayed reaches the predetermined number, it is determined in step S560 that a living human face is detected.
  • At least one of the second set of objects is displayed based on a display of at least one of the first set of objects.
  • the other at least one of the second group of objects is also displayed according to the display condition of the at least one object of the second group of objects.
  • the objects in the second group of objects are uncontrolled objects, that is, background objects, and the background objects are obstacle objects.
  • the first group of objects includes a first object and a second object
  • the display of the first object and the second object on the display screen is updated according to the detected face motion.
  • the vertical display position of the first object is fixed, and the horizontal display position of the first object and the horizontal and vertical display positions of the second object are updated according to the detected face motion.
  • An obstacle object in the second group of objects is also displayed according to the display condition of the second object, and a new obstacle object in the second group of objects may further be displayed according to the display condition of the obstacle objects already in the second group of objects. Specifically, the horizontal display position of the first object and the horizontal and vertical display positions of the obstacle objects in the second group of objects are updated according to the detected face action.
  • the face action attribute may include a first action attribute and a second action attribute
  • The state parameter of the first object includes first and second state parameters of the first object; the first and second state parameters are respectively a travel parameter and a horizontal position coordinate of the first object.
  • The travel parameter may be a motion speed, a travel distance, or the like.
  • For example, in the case where the travel parameter is a motion speed, first, the value of the motion speed of the first object is updated according to the value of the first action attribute, and the horizontal position coordinate of the first object is updated according to the value of the second action attribute.
  • Then, the display positions of the obstacle object D and the first object A are determined according to the distance between the first object A and the obstacle object D (which may include a horizontal distance and a vertical distance) and the horizontal position coordinate of the first object A. For example, in the case where the target advancing direction of the first object is the road extending direction (the direction in which the road narrows, as shown in FIG. 10C), whether to continue displaying the obstacle object D and the display position of the obstacle object D are determined according to the value of the motion speed of the first object A and the vertical distance between the first object A and the obstacle object D, and the display position of the first object A is determined according to its horizontal position coordinate. A sketch of such an update loop is given below.
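In the car-and-stone example, the pitch-driven speed determines how fast obstacles scroll toward the car while the deflection-driven horizontal coordinate steers it, and an obstacle is dropped once it scrolls past the bottom of the screen. A minimal per-frame sketch with assumed units (pixels and pixels per frame) and an assumed collision box.

```python
def advance_frame(car, obstacle, speed, steer_x, screen_h=1280):
    """car/obstacle are dicts with 'x' and 'y'; speed is in pixels per frame.

    Returns 'collision', 'passed', or 'ongoing'.
    """
    car["x"] = steer_x                    # horizontal position from face deflection
    obstacle["y"] += speed                # road scrolls toward the car
    if abs(obstacle["x"] - car["x"]) < 30 and abs(obstacle["y"] - car["y"]) < 30:
        return "collision"                # the car met the stone
    if obstacle["y"] > screen_h:
        return "passed"                   # the stone left the screen: avoided
    return "ongoing"

car = {"x": 360, "y": 1100}
stone = {"x": 360, "y": 0}
status = "ongoing"
while status == "ongoing":
    status = advance_frame(car, stone, speed=40, steer_x=300)
print(status)  # 'passed': the car was steered out of the stone's column
```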
  • the first object A may be a car
  • the obstacle object D may be a stone randomly generated on a road ahead of the car
  • The first action attribute may be the degree of face pitch, and the second action attribute may be the degree of face deflection; the first state parameter and the second state parameter of the first object A may be the motion speed and the horizontal position of the first object, respectively.
  • For example, the neutral (front-view) face state may correspond to a motion speed V0, a 30-degree or 45-degree downward-pitch state corresponds to the highest motion speed VH, and a 30-degree or 45-degree upward-pitch state corresponds to the lowest motion speed VL; the motion speed of the first object is determined according to the value of the degree of face pitch.
  • The front-view face state may correspond to the middle position P0, a 30-degree or 45-degree left-deflection state may correspond to the left edge position PL, and a 30-degree or 45-degree right-deflection state corresponds to the right edge position PR; the horizontal position coordinate of the first object is determined according to the value of the degree of face deflection (for example, the face deflection angle). A sketch of this interpolation is given below.
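The V0/VH/VL and P0/PL/PR correspondences describe piecewise-linear interpolation between a neutral pose and the ±30° (or ±45°) extremes. A sketch of that interpolation; the concrete speeds, positions, and the 45° span are illustrative assumptions.

```python
def interpolate(value, lo, hi, out_lo, out_mid, out_hi):
    """Map value in [lo, hi] onto out_lo..out_mid..out_hi, neutral at 0."""
    value = max(lo, min(hi, value))
    if value >= 0:
        return out_mid + (value / hi) * (out_hi - out_mid)
    return out_mid + (value / lo) * (out_lo - out_mid)

V_LOW, V_NORMAL, V_HIGH = 2.0, 6.0, 12.0      # assumed speeds (VL, V0, VH)
P_LEFT, P_MID, P_RIGHT = 0.0, 360.0, 720.0    # assumed positions (PL, P0, PR)

def speed_from_pitch(pitch_deg):   # +45 = pitched down, -45 = pitched up
    return interpolate(pitch_deg, -45, 45, V_LOW, V_NORMAL, V_HIGH)

def x_from_yaw(yaw_deg):           # -45 = left deflection, +45 = right deflection
    return interpolate(yaw_deg, -45, 45, P_LEFT, P_MID, P_RIGHT)

print(speed_from_pitch(0), speed_from_pitch(45), speed_from_pitch(-45))
print(x_from_yaw(0), x_from_yaw(-45), x_from_yaw(45))
```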
  • Optionally, the state parameter of the first object further comprises a third state parameter of the first object, and the third state parameter may be the travel distance of the first object.
  • the living body detecting device may be an electronic device integrated with a face image capturing device, such as a smart phone, a tablet computer, a personal computer, a face recognition based identification device, or the like.
  • the living body detecting apparatus may further include a separate face image collecting device and a detecting processing device, the detecting processing device may receive the captured image from the face image collecting device, and perform living body according to the received captured image Detection.
  • the detection processing device may be a server, a smart phone, a tablet computer, a personal computer, a face recognition based identification device, or the like.
  • Since the details of the various operations performed by the living body detecting apparatus are substantially the same as those of the living body detecting method described above with respect to FIGS. 2-4, the living body detecting apparatus will only be briefly described below in order to avoid repetition, and descriptions of the same details will be omitted.
  • the living body detecting apparatus 1100 includes a face motion detecting device 1110, a virtual object control device 1120, and a living body determining device 1130.
  • the face motion detecting device 1110, the virtual object control device 1120, and the living body determining device 1130 can be realized by the processor 102 shown in FIG. 1.
  • the living body detecting apparatus 1200 includes an image capturing device 1240, a face motion detecting device 1110, a virtual object control device 1120, a living body determining device 1130, a display device 1250, and a storage device 1260.
  • the image capturing device 1240 can be implemented by the image capturing device 110 shown in FIG. 1
  • the face motion detecting device 1110 , the virtual object control device 1120 , and the living body determining device 1130 can be implemented by the processor 102 shown in FIG. 1
  • The display device 1250 may be implemented by the output device 108 shown in FIG. 1, and the storage device 1260 may be implemented by the storage device 104 shown in FIG. 1.
  • The image capturing device 1240 in the living body detecting device 1200, or another image capturing device that is independent of the living body detecting device 1100 or 1200 and can transmit images to it, may be used to acquire a grayscale or color image of a predetermined shooting range as the captured image; the captured image may be a photo or a frame of a video.
  • the image capture device may be a camera of a smart phone, a camera of a tablet, a camera of a personal computer, or even a webcam.
  • the face motion detecting device 1110 is configured to detect a face motion from the captured image.
  • the face motion detecting device 1110 may include a key point positioning device 1310, a texture information extracting device 1320, and an action attribute determining device 1330.
  • the keypoint locating device 1310 is configured to locate a human key point in the captured image. As an example, the key point locating device 1310 may first determine whether a captured face is included in the acquired image, and locate a face key point in the case where the face is detected. The details of the operation of the key point locating device 1310 are the same as those described in step S310, and details are not described herein again.
  • the texture information extracting means 1320 is configured to extract image texture information from the captured image.
  • the texture information extracting device 1320 may extract fine information of a face, such as eyeball position information, mouth shape information, micro-expression information, and the like, according to pixel information in the captured image, such as brightness information of a pixel.
  • the action attribute determining means 1330 obtains a value of the face action attribute based on the located face key point and/or the image texture information.
  • The face action attribute obtained based on the located face key points may include, for example, but is not limited to, the degree of eye opening and closing, the degree of mouth opening and closing, the degree of face pitch, the degree of face deflection, the distance between the face and the camera, and the like.
  • The face action attribute obtained based on the image texture information may include, but is not limited to, the degree of left-right eyeball deflection, the degree of up-down eyeball deflection, and the like.
  • the details of the operation of the action attribute determining means 1330 are the same as those described in the step S330, and details are not described herein again.
  • the virtual object control device 1120 is configured to display a virtual object on the display device 1250 according to the detected face motion control.
  • the state of the virtual object displayed on the display screen may be changed according to the detected face motion control.
  • the virtual object may include a first set of objects that have been displayed on the display screen in an initial state and may include one or more objects.
  • the display of at least one of the first set of objects on the display screen is updated in accordance with the detected face motion.
  • the initial display position and/or initial display form of at least a portion of the first set of objects is predetermined or randomly determined. Specifically, for example, the motion state, display position, size, shape, color, and the like of the virtual object can be changed.
  • a new virtual object may be displayed on the display screen according to the detected face motion control.
  • the virtual object may further include a second group of objects that are not yet displayed on the display screen and may include one or more objects in an initial state.
  • at least one of the second set of objects is displayed in accordance with the detected face motion.
  • An initial display position and/or an initial display form of at least a portion of the at least one object of the second set of objects is predetermined or randomly determined.
  • the virtual object control device 1120 may include a face action mapping device 1410 and a virtual object presenting device 1420.
  • the face motion mapping device 1410 updates the value of the state parameter of the virtual object based on the value of the face action attribute.
  • a face action attribute can be mapped to a certain state parameter of the virtual object.
  • the user's eye degree of closure or degree of mouth opening may be mapped to the size of the virtual object, and the size of the virtual object may be updated according to the value of the user's degree of eye closure or degree of mouth opening.
  • the user's face pitch degree may be mapped to a vertical display position of the virtual object on the display screen, and the vertical display position of the virtual object on the display screen may be updated according to the value of the user's face pitch degree.
  • the mapping relationship between the face action attribute and the state parameter of the virtual object may be preset.
  • the face action attribute may include at least one action attribute
  • the virtual object's The state parameter includes at least one state parameter
  • the virtual object can include at least one virtual object.
  • a motion attribute may correspond to only one state parameter, or a motion attribute may correspond to a plurality of state parameters in chronological order.
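One action attribute driving several state parameters "in chronological order" can be modeled as a phase-dependent mapping table: the same attribute updates a different state parameter depending on the current phase of the test. A sketch with assumed attribute and parameter names.

```python
# Each phase maps one face action attribute to one virtual-object state parameter.
PHASE_MAPPINGS = [
    {"face_yaw": "object_x"},        # phase 0: deflection moves the object
    {"face_yaw": "object_scale"},    # phase 1: the same attribute now scales it
]

def apply_mapping(phase, attributes, state):
    """attributes and state are dicts; only the active phase's mapping is applied."""
    for attr_name, param_name in PHASE_MAPPINGS[phase].items():
        state[param_name] = attributes[attr_name]
    return state

state = {"object_x": 0.0, "object_scale": 1.0}
state = apply_mapping(0, {"face_yaw": 0.3}, state)   # moves the object
state = apply_mapping(1, {"face_yaw": 0.8}, state)   # later, resizes it
print(state)
```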
  • The virtual object presentation device 1420 presents the virtual object according to the updated value of the state parameter of the virtual object.
  • the virtual object rendering device 1420 can update the display of at least one of the first set of objects.
  • the virtual object rendering device 1420 can also display a new virtual object, ie a virtual object in the second set of objects.
  • the virtual object rendering device 1420 can also update the display of at least one of the second set of objects.
  • the living body judging device 1130 is configured to determine whether the virtual object satisfies a predetermined condition, and in a case where it is determined that the virtual object satisfies a predetermined condition, determine that a face in the captured image is a living human face.
  • the predetermined condition is a condition related to a form and/or motion of the virtual object, wherein the predetermined condition is predetermined or randomly generated.
  • It may be determined whether the form-related parameters of the virtual object satisfy the condition related to form; the form of the virtual object may include its size, shape, color, and the like. It may also be determined whether the motion-related parameters of the virtual object satisfy the condition related to motion; for example, the motion-related parameters of the virtual object may include its position, motion trajectory, motion speed, motion direction, and the like, and the motion-related condition may include a predetermined display position that the virtual object should reach, a predetermined motion trajectory of the virtual object, a predetermined display position that the virtual object should avoid, and the like.
  • Whether the virtual object completes a predetermined task may be determined according to an actual motion trajectory of the virtual object, and the predetermined task may include, for example, moving according to a predetermined motion trajectory, bypassing an obstacle movement, or the like.
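Judging whether the virtual object "completed a predetermined task" such as following a target trajectory can be done by comparing the recorded actual trajectory against the target one point by point. A minimal sketch; the sampling scheme and the tolerance are assumptions.

```python
def trajectory_matches(actual, target, tolerance=30.0):
    """actual/target are equal-length lists of (x, y) samples.

    Returns True if every actual sample stays within `tolerance` pixels of the
    corresponding target sample.
    """
    if len(actual) != len(target):
        return False
    for (ax, ay), (tx, ty) in zip(actual, target):
        if ((ax - tx) ** 2 + (ay - ty) ** 2) ** 0.5 > tolerance:
            return False
    return True

target_path = [(0, 0), (50, 10), (100, 40), (150, 90)]
actual_path = [(2, -3), (48, 14), (105, 35), (149, 92)]
print(trajectory_matches(actual_path, target_path))  # True
```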
  • The predetermined condition may be set as: the first object reaches a target display position, the first object reaches a target display size, the first object reaches a target shape, and/or the first object reaches a target display color, and the like.
  • the first group of objects further includes a second object, and an initial display position and/or an initial display form of at least one of the first object and the second object are predetermined or randomly determined.
  • the first object may be a controlled object
  • the second object may be a background object
  • the second object may be a target object of the first object
  • the predetermined condition may be set such that the first object coincides with the target object.
  • the background object may be a target motion trajectory of the first object, and the target motion trajectory may be random
  • the predetermined condition may be set such that an actual motion trajectory of the first object matches the target motion trajectory.
  • the background object may be an obstacle object
  • the obstacle object may be randomly displayed, and its display position and display time are random
  • the predetermined condition may be set as follows: the first object does not meet the obstacle object, that is, the first object bypasses the obstacle object.
  • The predetermined condition may also be set as: the first and/or third object reaches a corresponding target display position, the first and/or third object reaches a corresponding target display size, the first and/or third object reaches a corresponding target shape, and/or the first and/or third object reaches a corresponding target display color, and the like.
  • the predetermined condition may be set as: the first object reaches a target display position, the first object reaches a target display size, Determining that the first object reaches the target shape, and/or the virtual object reaches the target display color, and the like, and the second object reaches the target display position, the second object reaches the target display size, and the second object reaches the target The shape, and/or the second object reaches the target display color, and the like.
  • the face action mapping device 1410 and the virtual object presentation device 1420 can perform various operations in the first to fifth embodiments described above, and details are not described herein again.
  • the living body detecting apparatuses 1100 and 1200 may further include a timer for timing a predetermined timing time.
  • the timer can also be implemented by the processor 102.
  • the timer may be initialized according to user input, or the timer may be automatically initialized when a face is detected in the captured image, or may be automatically initialized when a predetermined action of the face is detected in the captured image.
  • the living body judging means 1130 is configured to determine whether the virtual object satisfies a predetermined condition within the predetermined timing time, and in a case where it is judged that the virtual object satisfies a predetermined condition within the predetermined timing time Next, it is determined that the face in the captured image is a living face.
  • the storage device 1260 is configured to store the captured image. In addition, the storage device 1260 is further configured to store a state parameter and a state parameter value of the virtual object. In addition, the storage device 1260 is further configured to store the virtual object presented by the virtual object presentation device 1420 and store a background image or the like to be displayed on the display device 1250.
  • the storage device 1260 can store computer program instructions that, when executed by the processor 102, can implement a living body detection method in accordance with an embodiment of the present disclosure, And/or a key point locating device 1310, a texture information extracting device 1320, and an action attribute determining device 1330 in the living body detecting apparatus according to an embodiment of the present disclosure may be implemented.
  • a computer program product comprising a computer readable storage medium on which computer program instructions are stored.
  • The computer program instructions, when executed by a computer, may implement a living body detection method according to an embodiment of the present disclosure, and/or may implement all or part of the functions of the key point locating device, the texture information extracting device, and the action attribute determining device in the living body detecting apparatus according to an embodiment of the present disclosure.
  • According to the living body detecting method and apparatus and the computer program product of the embodiments of the present disclosure, by controlling the display of a virtual object based on face actions and performing living body detection according to the virtual object display, attacks by photos, videos, 3D face models, masks, and other means can be effectively prevented without depending on special hardware devices, which can reduce the cost of living body detection. Further, by recognizing multiple action attributes in the face action, multiple state parameters of the virtual object can be controlled, and the virtual object can be caused to change its display state in multiple aspects, for example by performing a complex predetermined action or by achieving a display effect that differs greatly from the initial display effect. Therefore, the accuracy of living body detection can be further improved, and in turn the security of the application scenarios in which the living body detecting method and apparatus and the computer program product according to the embodiments of the present invention are applied can be improved.
  • the computer readable storage medium can be any combination of one or more computer readable storage media.
  • the computer readable storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet, a hard disk of a personal computer, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory. (EPROM), Portable Compact Disk Read Only Memory (CD-ROM), USB memory, or any combination of the above storage media.

Abstract

A living body detection method and device, and a computer program product, belonging to the technical field of face recognition. The living body detection method includes: detecting a face action from a captured image; controlling display of a virtual object on a display screen according to the detected face action; and determining that the face in the captured image is a living human face in a case where the virtual object satisfies a predetermined condition. By controlling the display of a virtual object based on face actions and performing living body detection according to the virtual object display, attacks by photos, videos, 3D face models, masks, and other means can be effectively prevented.

Description

Living body detection method and device, and computer program product — Technical Field
The present disclosure relates to the technical field of face recognition, and more particularly to a living body detection method and device, and a computer program product.
Background Art
At present, face recognition systems are increasingly applied to online scenarios requiring identity verification in the security, finance, and social security fields, such as online bank account opening, online transaction verification, unattended access control systems, online social security handling, and online medical insurance handling. In these high-security application fields, in addition to ensuring that the face similarity of the person being verified matches the reference records stored in the database, it is first necessary to verify that the person being verified is a legitimate living body. That is, the face recognition system needs to be able to prevent an attacker from attacking with photos, videos, 3D face models, masks, or the like.
Among the technical products currently on the market, there is no recognized mature living body verification solution; existing techniques either depend on special hardware devices (such as infrared cameras or depth cameras) or can only defend against simple static photo attacks.
Therefore, there is a need for a face recognition approach that does not depend on special hardware devices and can effectively prevent attacks by photos, videos, 3D face models, masks, and other means.
Summary of the Invention
The present invention is proposed in view of the above problems. Embodiments of the present disclosure provide a living body detection method and device, and a computer program product, which can control the display of a virtual object based on face actions and determine that living body detection is successful in a case where the virtual object display satisfies a predetermined condition.
According to one aspect of the embodiments of the present disclosure, there is provided a living body detection method, including: detecting a face action from a captured image; controlling display of a virtual object on a display screen according to the detected face action; and determining that the face in the captured image is a living human face in a case where the virtual object satisfies a predetermined condition.
According to another aspect of the embodiments of the present disclosure, there is provided a living body detection device, including: a face action detecting means configured to detect a face action from a captured image; a virtual object control means configured to control display of a virtual object on a display device according to the detected face action; and a living body judging means configured to determine that the face in the captured image is a living human face in a case where the virtual object satisfies a predetermined condition.
According to yet another aspect of the embodiments of the present disclosure, there is provided a living body detection device, including: one or more processors; one or more memories; and computer program instructions stored in the memories which, when executed by the processors, perform the following steps: detecting a face action from a captured image; controlling display of a virtual object on a display device according to the detected face action; and determining that the face in the captured image is a living human face in a case where the virtual object satisfies a predetermined condition.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer program product, including one or more computer-readable storage media on which computer program instructions are stored, the computer program instructions, when executed by a computer, performing the following steps: detecting a face action from a captured image; controlling display of a virtual object on a display device according to the detected face action; and determining that the face in the captured image is a living human face in a case where the virtual object satisfies a predetermined condition.
According to the living body detection method and device and the computer program product of the embodiments of the present disclosure, by controlling the display of a virtual object based on face actions and performing living body detection according to the virtual object display, attacks by photos, videos, 3D face models, masks, and other means can be effectively prevented without relying on special hardware devices, thereby reducing the cost of living body detection. Furthermore, by recognizing multiple action attributes in the face action, multiple state parameters of the virtual object can be controlled, so that the virtual object changes its display state in multiple aspects, for example by performing a complex predetermined action or by achieving a display effect greatly different from its initial display effect. Therefore, the accuracy of living body detection can be further improved, and in turn the security of the application scenarios in which the living body detection method and device and the computer program product according to the embodiments of the present invention are applied can be improved.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent through a more detailed description of the embodiments of the present disclosure in conjunction with the accompanying drawings. The accompanying drawings are used to provide a further understanding of the embodiments of the present disclosure, constitute a part of the specification, are used together with the embodiments of the present disclosure to explain the present disclosure, and do not constitute a limitation of the present disclosure. In the drawings, the same reference numerals generally represent the same components or steps.
FIG. 1 is a schematic block diagram of an electronic device for implementing the living body detection method and device of an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a living body detection method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of the face action detection step in the living body detection method according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of the virtual object display control step in the living body detection method according to an embodiment of the present disclosure;
FIG. 5 is another schematic flowchart of a living body detection method according to an embodiment of the present disclosure;
FIGS. 6A-6D and 7A-7B are examples of virtual objects displayed on the display screen according to a first embodiment of the present disclosure;
FIGS. 8A and 8B are examples of virtual objects displayed on the display screen according to a second embodiment of the present disclosure;
FIGS. 9A-9E are examples of virtual objects displayed on the display screen according to a third embodiment of the present disclosure;
FIGS. 10A-10C are examples of virtual objects displayed on the display screen according to a fourth embodiment of the present disclosure;
FIG. 11 is a schematic block diagram of a living body detection device according to an embodiment of the present disclosure;
FIG. 12 is a schematic block diagram of another living body detection device according to an embodiment of the present disclosure;
FIG. 13 is a schematic block diagram of the face action detecting means in the living body detection device according to an embodiment of the present disclosure; and
FIG. 14 is a schematic block diagram of the virtual object control means in the living body detection device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present disclosure more apparent, example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present disclosure rather than all of them, and it should be understood that the present disclosure is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure described herein without creative effort shall fall within the protection scope of the present disclosure.
首先,参照图1来描述用于实现本公开实施例的活体检测方法和设备的示例性电子设备100。
如图1所示,电子设备100包括一个或多个处理器102、一个或多个存 储装置104、输出装置108、以及图像采集装置110,这些组件通过总线系统112和/或其它形式的连接机构(未示出)互连。应当注意,图1所示的电子设备100的组件和结构只是示例性的,而非限制性的,根据需要,所述电子设备100也可以具有其他组件和结构。
所述处理器102可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其它形式的处理单元,并且可以控制所述电子设备100中的其它组件以执行期望的功能。
所述存储装置104可以包括一个或多个计算机程序产品,所述计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。所述易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。所述非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机程序指令,处理器102可以运行所述程序指令,以实现下文所述的本发明实施例中(由处理器实现)的功能以及/或者其它期望的功能。在所述计算机可读存储介质中还可以存储各种应用程序和各种数据,例如所述图像采集装置110采集的图像数据等以及所述应用程序使用和/或产生的各种数据等。
所述输出装置108可以向外部(例如用户)输出各种信息(例如图像或声音),并且可以包括显示器和扬声器等中的一个或多个。
所述图像采集装置110可以拍摄预定取景范围的图像(例如照片、视频等),并且将所拍摄的图像存储在所述存储装置104中以供其它组件使用。
作为示例,用于实现本公开实施例的活体检测方法和设备的示例性电子设备100可以是布置在人脸图像采集端的集成了人脸图像采集装置的电子设备,诸如智能手机、平板电脑、个人计算机、基于人脸识别的身份识别设备等。例如,在安防应用领域,所述电子设备100可以部署在门禁系统的图像采集端,并且可以例如为基于人脸识别的身份识别设备;在金融应用领域,可以部署在个人终端处,诸如智能电话、平板电脑、个人计算机等。
替代地,用于实现本公开实施例的活体检测方法和设备的示例性电子设备100的输出装置108和图像采集装置110可以部署在人脸图像采集端,而所述电子设备100中的处理器102可以部署在服务器端(或云端)。
下面,将参考图2来描述根据本公开实施例的人脸检测方法200。
在步骤S210,从拍摄图像中检测人脸动作。具体地,可以利用如图1所示的用于实现本公开实施例的人脸检测方法的电子设备100中的图像采集装置110或者独立于所述电子设备100的可以向所述电子设备100传送图像的其它图像采集装置,采集预定拍摄范围的灰度或彩色图像作为拍摄图像,所述拍摄图像可以是照片,也可以是视频中的一帧。所述图像采集设备可以是智能电话的摄像头、平板电脑的摄像头、个人计算机的摄像头、或者甚至可以是网络摄像头。
参考图3来描述步骤S210中的人脸动作检测。
在步骤S310,在所述拍摄图像中定位人脸关键点。作为示例,在该步骤中,可以首先确定所获取的图像中是否包含人脸,在检测到人脸的情况下定位出人脸关键点。
人脸关键点是脸部一些表征能力强的关键点,例如眼睛、眼角、眼睛中心、眉毛、颧骨最高点、鼻子、鼻尖、鼻翼、嘴巴、嘴角、以及脸部外轮廓点等。
作为示例,可以预先搜集大量的人脸图像,例如N张人脸图像,例如,N=10000,人工地在每张人脸图像中标注出预定的一系列人脸关键点,所述预定的一系列人脸关键点可以包括但不限于上述人脸关键点中的至少一部分。根据每张人脸图像中各人脸关键点附近的形状特征,基于参数形状模型,利用机器学习算法(如深度学习(Deep Learning),或者基于局部特征的回归算法(local feature-based regression algorithm))进行人脸关键点模型训练,从而得到人脸关键点模型。
具体地,在步骤S310中可以基于已经建立的人脸关键点模型来在拍摄图像中进行人脸检测和人脸关键点定位。例如,可以在拍摄图像中迭代地优化人脸关键点的位置,最后得到各人脸关键点的坐标位置。再例如,可以采用基于级联回归的方法在拍摄图像中定位人脸关键点。
人脸关键点的定位在人脸动作识别中起着重要的作用,然而应了解本公开不受具体采用的人脸关键点定位方法的限制。可以采用已有的人脸检测和人脸关键点定位算法来执行步骤S310中的人脸关键点定位。应了解,本公开实施例的活体检测方法100不限于利用已有的人脸检测和人脸关键点定位算法来进行人脸关键点定位,而且应涵盖利用将来开发的人脸检测和人脸关键点定位算法来进行人脸关键点定位。
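As an illustration only (the passage above is explicitly algorithm-agnostic), the face detection and keypoint localization of step S310 could be realized with an off-the-shelf detector and landmark model. The sketch below assumes Python with the dlib and OpenCV libraries and dlib's publicly distributed 68-point landmark model file; the library choice and the model path are assumptions, not part of the disclosure.

```python
# Minimal sketch of step S310: detect a face and locate its keypoints.
# Assumes dlib + OpenCV and the public 68-point shape predictor model file.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path

def locate_face_keypoints(image_bgr):
    """Return a list of (x, y) landmark coordinates, or None if no face is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)              # upsample once to catch smaller faces
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])      # landmarks of the first detected face
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```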
在步骤S320,从所述拍摄图像中提取图像纹理信息。作为示例,可以根据所述拍摄图像中的像素信息,例如像素点的亮度信息,提取人脸的精细信息,例如眼球位置信息、口型信息、微表情信息等等。可以采用已有的图像纹理信息提取算法来执行步骤S320中的图像纹理信息提取。应了解,本公开实施例的活体检测方法100不限于利用已有的图像纹理信息提取算法来进行图像纹理信息提取,而且应涵盖利用将来开发的图像纹理信息提取算法来进行图像纹理信息提取。
应了解,步骤S310和S320可以择一执行,或者可以两者均执行。在步骤S310和S320两者均执行的情况下,它们可以同步执行,或者可以先后执行。
在步骤S330,基于所定位的人脸关键点以及/或者所述图像纹理信息,获得人脸动作属性的值。基于所定位的人脸关键点获得的所述人脸动作属性可以例如包括但不限于眼睛睁闭程度、嘴巴张闭程度、人脸俯仰程度、人脸偏转程度、人脸与摄像头的距离等。基于所述图像纹理信息获得的所述人脸动作属性可以包括但不限于眼球左右偏转程度、眼球上下偏转程度等等。
可选地,可以基于当前拍摄图像的前一拍摄图像以及当前拍摄图像,来获得人脸动作属性的值;或者,可以基于首个拍摄图像以及当前拍摄图像,来获得人脸动作属性的值;或者,可以基于当前拍摄图像以及当前拍摄图像的前几个拍摄图像,来获得人脸动作属性的值。
可选地,可以通过几何学习、机器学习、或图像处理的方式来基于所定位的人脸关键点获得人脸动作属性的值。例如,对于眼睛睁闭程度,可以在眼睛一圈定义多个关键点,例如8-20个关键点,例如,左眼的内眼角、外眼角、上眼皮中心点和下眼皮中心点,以及右眼的内眼角、外眼角、上眼皮中心点和下眼皮中心点。然后,通过在拍摄图像上定位这些关键点,确定这些关键点在拍摄图像上的坐标,计算左眼(右眼)的上眼皮中心和下眼皮中心之间的距离作为左眼(右眼)上下眼皮距离,计算左眼(右眼)的内眼角和外眼角之间的距离作为左眼(右眼)内外眼角距离,计算左眼(或右眼)上下眼皮距离与左眼(或右眼)内外眼角距离的比值作为第一距离比值X,根据该第一距离比值来确定眼睛睁闭程度Y。例如,可以设定第一距离比值X的阈值Xmax,并且规定:Y=X/Xmax,由此来确定眼睛睁闭程度Y。Y越大,则表示用户眼睛睁得越大。
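A minimal sketch of the eye open/close degree Y described above, assuming the four eye keypoints have already been located; the threshold Xmax = 0.35 and the clipping of Y at 1.0 are illustrative choices, not values fixed by the disclosure.

```python
import math

def eye_openness(inner_corner, outer_corner, upper_lid, lower_lid, x_max=0.35):
    """Compute Y = X / Xmax, where X is the eyelid distance divided by the eye-corner distance."""
    lid_dist = math.dist(upper_lid, lower_lid)            # upper/lower eyelid distance
    corner_dist = math.dist(inner_corner, outer_corner)   # inner/outer eye corner distance
    if corner_dist == 0:
        return 0.0
    x = lid_dist / corner_dist                            # first distance ratio X
    return min(x / x_max, 1.0)                            # larger Y means the eye is opened wider
```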
返回图2,在步骤S220,根据所检测的人脸动作控制在显示屏幕上显示虚拟对象。
作为示例,可以根据所检测的人脸动作控制改变在显示屏幕上显示的虚拟对象的状态。在此情况下,所述虚拟对象可以包括第一组对象,在初始状态下所述第一组对象已经显示在显示屏幕上并且可以包括一个或多个对象。在该示例中,根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示。所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。具体地,例如可以改变所述虚拟对象的运动状态、显示位置、尺寸大小、形状、颜色等。
可选地,可以根据所检测的人脸动作控制在显示屏幕上显示新的虚拟对象。在此情况下,所述虚拟对象还可以包括第二组对象,在初始状态下所述第二组对象尚未显示在显示屏幕上并且可以包括一个或多个对象。在该示例中,根据所检测的人脸动作显示所述第二组对象中至少一个对象。所述第二组对象的所述至少一个对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
参考图4来描述步骤S220的操作。
在步骤S410,根据所述人脸动作属性的值来更新所述虚拟对象的状态参量的值。
具体地,可以将一种人脸动作属性映射为虚拟对象的某一状态参量。例如,可以将用户眼睛睁闭程度或嘴巴张闭程度映射为虚拟对象的尺寸,并且根据用户眼睛睁闭程度或嘴巴张闭程度的值来更新虚拟对象的尺寸大小。再例如,可以将用户人脸俯仰程度映射为虚拟对象在显示屏幕上的垂直显示位置,并且根据用户人脸俯仰程度的值来更新虚拟对象在显示屏幕上的垂直显示位置。
可选地,可以计算当前拍摄图像中的嘴巴张闭程度和之前保存的首个拍摄图像中的嘴巴张闭程度的比值K1,并且将嘴巴张闭程度的比值K1映射为虚拟对象的尺寸S。具体地,可以采用一次函数S=a*K1+b来实现映射。此外,可选地,可以计算当前拍摄图像中人脸位置偏离初始居中位置的程度K2,并且将人脸位置映射为虚拟对象的位置W。具体地,可以采用一次函数W=c*K2+d来实现映射。
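The two linear mappings mentioned above (S = a*K1 + b for the mouth ratio and W = c*K2 + d for the face-position offset) might look as follows; the coefficient values are placeholders chosen only for illustration.

```python
def mouth_ratio_to_size(k1, a=40.0, b=10.0):
    """Map the mouth open/close ratio K1 to an object size S = a*K1 + b (in pixels)."""
    return a * k1 + b

def face_offset_to_position(k2, c=300.0, d=0.0):
    """Map the face-position offset K2 to a display position W = c*K2 + d."""
    return c * k2 + d
```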
例如,所述人脸动作属性可以包括至少一个动作属性,所述虚拟对象的 状态参量包括至少一个状态参量。一个动作属性可以仅与一个状态参量对应,或者一个动作属性可以按照时间顺序依次与多个状态参量对应。
可选地,人脸动作属性与虚拟对象的状态参量之间的映射关系可以是预先设定的,或者可以是在开始执行根据本公开实施例的活体检测方法时随机确定的。根据本公开实施例的活体检测方法还可以包括:将所述人脸动作属性与虚拟对象的状态参量之间的映射关系提示给用户。
在步骤S420,按照更新后的所述虚拟对象的状态参量的值,在所述显示屏幕上显示所述虚拟对象。
如前所述,所述虚拟对象可以包括第一组对象,在根据本公开实施例的活体检测方法开始执行时将所述第一组对象显示在显示屏幕上,可以通过第一组人脸动作属性来更新所述第一组对象中至少一个对象的显示。此外,所述虚拟对象还可以包括第二组对象,在根据本公开实施例的活体检测方法开始执行时所述第二组对象均未在显示屏幕上显示,可以通过与第一组人脸动作属性不同的第二组人脸动作属性来控制是否显示所述第二组对象中至少一个对象;或者可以根据所述第一组对象的显示情况来控制是否显示所述第二组对象中的至少一个对象。
具体地,所述第一组对象中至少一个对象的状态参量可以为显示位置、尺寸大小、形状、颜色、运动状态等,由此可以根据所述第一组人脸动作属性的值改变所述第一组对象中至少一个对象的运动状态、显示位置、尺寸大小、形状、颜色等。
可选地,所述第二组对象中至少一个对象每个的状态参量至少可以包括可视状态,并且还可以包括显示位置、尺寸大小、形状、颜色、运动状态等。可以根据所述第二组人脸动作属性的值或者所述第一组对象中至少一个对象的显示情况来控制是否显示所述第二组对象中至少一个对象,即所述第二组对象中至少一个对象是否处于可视状态,并且还可以根据所述第二组人脸动作属性的值和/或所述第一组人脸动作属性的值改变所述第二组对象中至少一个对象的运动状态、显示位置、尺寸大小、形状、颜色等。
返回图2,在步骤S230,判断所述虚拟对象是否满足预定条件。所述预定条件为与所述虚拟对象的形态和/或运动有关的条件,其中所述预定条件是预先确定的或随机产生的。
具体地,可以判断所述虚拟对象的形态是否满足与形态有关的条件,例 如,所述虚拟对象的形态可以包括尺寸大小、形状、颜色等;可以判断所述虚拟对象的与运动有关的参量是否满足与运动有关的条件,例如,所述虚拟对象的与运动有关的参量可以包括位置、运动轨迹、运动速度、运动方向等,所述与运动有关的条件可以包括所述虚拟对象的预定显示位置、所述虚拟对象的预定运动轨迹、所述虚拟对象的显示位置需要避开的预定显示位置等。可以根据所述虚拟对象的实际运动轨迹判断所述虚拟对象是否完成了预定任务,所述预定任务可以例如包括按照预定运动轨迹移动、绕开障碍物移动等。
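As one possible reading of such a form/position condition, a per-frame predicate could compare the virtual object's current state against target values with small tolerances; the field names and tolerance values below are assumptions made only for illustration.

```python
def satisfies_form_condition(obj_state, target_state, pos_tol=5, size_tol=2):
    """Check whether the object's display position, size, shape and color match the targets."""
    px, py = obj_state["pos"]
    tx, ty = target_state["pos"]
    return (abs(px - tx) <= pos_tol and abs(py - ty) <= pos_tol
            and abs(obj_state["size"] - target_state["size"]) <= size_tol
            and obj_state["shape"] == target_state["shape"]
            and obj_state["color"] == target_state["color"])
```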
具体地,例如,在所述虚拟对象包括第一组对象且所述第一组对象包括第一对象的情况下,所述预定条件可以被设定为:所述第一对象达到目标显示位置、所述第一对象达到目标显示尺寸、所述第一对象达到目标形状、以及/或者所述第一对象达到目标显示颜色等等。
可选地,所述第一组对象还包括第二对象,所述第一对象和所述第二对象中至少一个的初始显示位置和/或初始显示形态是预先确定的或随机确定的。作为示例,所述第一对象可以为被控对象,所述第二对象可以为背景对象,可选地,所述第二对象可以作为所述第一对象的目标对象,并且所述预定条件可以被设定为:所述第一对象与所述目标对象重叠。替换地,所述背景对象可以为所述第一对象的目标运动轨迹,所述目标运动轨迹可以是随机产生的,所述预定条件可以被设定为:在所述第一对象的实际运动轨迹与所述目标运动轨迹相符。替换地,所述背景对象可以为障碍对象,所述障碍对象可以是随机显示的,其显示位置和显示时间都是随机的,所述预定条件可以被设定为:所述第一对象不与所述障碍对象相遇,即所述第一对象绕开所述障碍对象。
再例如,在所述虚拟对象还包括第二组对象且所述第二组对象包括作为被控对象的第三对象的情况下,所述预定条件还可以设定为:所述第一和/或第三对象达到相应的目标显示位置、所述第一和/或第三对象达到相应的目标显示尺寸、所述第一和/或第三对象达到相应的目标形状、以及/或者所述第一和/或第三对象达到相应的目标显示颜色等等。
在所述虚拟对象满足预定条件的情况下,在步骤S240确定所述拍摄图像中的人脸为活体人脸。反之,在所述虚拟对象不满足预定条件的情况下,在步骤S250确定所述拍摄图像中的人脸不是活体人脸。
根据本公开实施例的活体检测方法,通过将各种人脸动作参数作为虚拟 对象的状态控制参量,根据人脸动作控制在显示屏幕上显示虚拟对象,可以根据所显示的虚拟对象是否满足预定条件来进行活体检测。
图5示出了根据本公开实施例的另一活体检测方法500的示例性流程图。
在步骤S510,初始化定时器。可以根据用户输入初始化定时器,或者可以在拍摄图像中检测到人脸时自动初始化定时器,或者可以在拍摄图像中检测到人脸预定动作时自动初始化定时器。此外,在初始化定时器后,将所述第一组对象中每个对象的至少一部分显示在显示屏幕上。
在步骤S520,实时地采集预定拍摄范围的图像(第一图像)作为拍摄图像。具体地,可以利用如图1所示的用于实现本公开实施例的人脸检测方法的电子设备100中的图像采集装置110或者独立于所述电子设备100的可以向所述电子设备100传送图像的其它图像采集装置,采集预定拍摄范围的灰度或彩色图像作为拍摄图像,所述拍摄图像可以是照片,也可以是视频中的一帧。
步骤S530-S540分别与图2中的步骤S210-S220对应,在此不再进行赘述。
在步骤S550判断所述虚拟对象在预定定时时间内是否满足预定条件,所述预定定时时间可以是预先确定的。具体地,所述步骤S550可以包括判断所述定时器是否超出预定定时时间以及所述虚拟对象是否满足预定条件。可选地,在所述定时器超出所述预定定时时间时可以产生超时标志,在步骤S550中可以根据该超时标志判断定时器是否超出所述预定定时时间。
根据步骤S550的判断结果,可以在步骤S560确定检测到活体人脸、或者在步骤S570确定没有检测到活体人脸、或者返回步骤S520。
在返回步骤S520的情况下,实时地采集所述预定拍摄范围的图像(第二图像)作为拍摄图像,并且接下来执行步骤S530-S550。这里,为区分先后采集的所述预定拍摄范围的图像,将先采集的图像称为第一图像,将后采集的图像称为第二图像。应了解,第一图像和第二图像是相同取景范围内的图像,仅仅是采集的时间不同。
如图5所示的步骤S520-S550重复执行,直至根据步骤S550的判断结果确定所述虚拟对象满足预定条件从而在步骤S570确定检测到活体人脸,或者直至在步骤S520确定所述定时器超出所述预定定时时间从而在步骤S580确定没有检测到活体人脸。
尽管在图5中在步骤S550中进行定时器是否超出预定定时时间的判断,应了解本发明不限于此,可以在根据本公开实施例的活体检测方法的任一步骤中执行该判断。此外,可选地,在所述定时器超出预定定时时间的情况下产生超时标志,该超时标志可以直接触发根据本公开实施例的活体检测方法的步骤S560或S570,即确定是否检测到活体人脸。
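A compact sketch of the timed flow of Fig. 5 (steps S510 to S570) is given below. It assumes that image capture, face action detection, virtual object update and the condition check are supplied as callables, and that 15 seconds is the predetermined time; none of these specifics are mandated by the disclosure.

```python
import time

def timed_liveness_detection(capture_frame, detect_action, update_object, condition_met,
                             timeout_s=15.0):
    """Run the loop of Fig. 5; return True if a live face is confirmed within timeout_s."""
    start = time.monotonic()                 # S510: initialize the timer
    while time.monotonic() - start < timeout_s:
        frame = capture_frame()              # S520: capture the next image
        action = detect_action(frame)        # S530: detect the face action
        state = update_object(action)        # S540: update the virtual object display
        if condition_met(state):             # S550: predetermined condition satisfied?
            return True                      # S560: live face detected
    return False                             # S570: timeout, no live face detected
```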
下面,参考具体实施例来进一步描述根据本公开实施例的活体检测方法。
第一实施例
在该第一实施例中,所述虚拟对象包括第一组对象,在开始执行根据本公开实施例的活体检测方法时将所述第一组对象显示在显示屏幕上,并且所述第一组对象包括一个或多个对象。根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示,其中,所述第一组对象中的所述至少一个对象为被控对象。所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
第一示例
在该第一示例中,所述虚拟对象为第一对象,所述人脸动作属性包括第一动作属性,所述第一对象的状态参量包括所述第一对象的第一状态参量,根据所述第一动作属性的值更新所述第一对象的第一状态参量的值,并且按照更新后的所述第一对象的第一状态参量的值在所述显示屏幕上显示所述第一对象。
可选地,所述人脸动作属性还包括第二动作属性,所述第一对象的状态参量还包括所述第一对象的第二状态参量,根据所述第二动作属性的值更新所述第一对象的第二状态参量的值,并且按照更新后的所述第一对象的第一和第二状态参量的值在所述显示屏幕上显示所述第一对象。
所述预定条件可以为所述第一对象达到目标显示位置和/或目标显示形态,所述目标显示形态可以包括目标尺寸、目标颜色、目标形状等。所述第一对象在显示屏幕上的初始显示位置和所述第一对象的目标显示位置中至少一个可以是随机确定的,所述第一对象在显示屏幕上的初始显示形态和所述第一对象的目标显示形态中至少一个可以是随机确定的。可以通过诸如文字、声音等方式向用户提示了所述目标显示位置和/或目标显示形态。
具体地,所述第一对象的第一状态参量为所述第一对象的显示位置,根据所述第一动作属性的值来控制所述第一对象的显示位置,在所述第一对象 的显示位置与所述目标显示位置重合的情况下,确定活体检测成功。例如,所述第一对象的初始显示位置是随机确定的,所述第一对象的目标显示位置可以为所述显示屏幕的左上角、右上角、左下角、右下角或中央位置等。可选地,可以通过诸如文字、声音等方式向用户提示了所述目标显示位置。所述第一对象可以为图6A中所示的第一对象A。
具体地,在初始化所述定时器时,将所述第一对象的至少一部分显示在所述显示屏幕上,所述第一对象的至少一部分的初始显示位置是随机确定的。例如,所述第一对象可以为虚拟人脸,根据所述第一动作属性的值来控制所述第一对象的显示部分和显示位置,在所述第一对象的显示位置与所述目标显示位置相同的情况下,确定活体检测成功。所述第一对象可以为图6B中所示的第一对象A。
具体地,所述第一对象的第一状态参量为所述第一对象的尺寸(颜色或形状),根据所述第一动作属性的值来控制所述第一对象的尺寸(颜色或形状),在所述第一对象的尺寸(颜色或形状)与所述目标尺寸(目标颜色或目标形状)相同的情况下,确定活体检测成功。所述第一对象可以为图6C中所示的第一对象A。
第二示例
在该第二示例中,所述虚拟对象包括第一对象和第二对象,所述人脸动作属性包括第一动作属性,所述第一对象的状态参量包括所述第一对象的第一状态参量,所述第二对象的状态参量包括所述第二对象的第一状态参量,根据所述第一动作属性的值更新所述第一对象的第一状态参量的值,并且按照更新后的所述第一对象的第一状态参量的值在所述显示屏幕上显示所述第一对象。
可选地,所述人脸动作属性还包括第二动作属性,所述第一对象的状态参量还包括所述第一对象的第二状态参量,所述第二对象的状态参量包括所述第二对象的第二状态参量,根据所述第二动作属性的值更新所述第一对象的第二状态参量的值,并且按照更新后的所述第一对象的第一和第二状态参量的值在所述显示屏幕上显示所述第一对象。
在该示例中,所述第一对象为被控对象,所述第二对象为背景对象并且为所述第一对象的目标对象。
所述预定条件可以为所述第一对象与所述第二对象重合、或者所述第一对象达到目标显示位置和/或目标显示形态，所述目标显示形态可以包括目标尺寸、目标颜色、目标形状等。具体地，所述第二对象的显示位置为所述第一对象的目标显示位置，所述第二对象的显示形态为所述第一对象的目标显示形态。
所述第一对象和所述第二对象中至少一个的状态参量的初始值可以是随机确定的。即,所述第一对象的所述状态参量中的至少一个(例如显示位置、尺寸、颜色、形状中的至少一个)的初始值可以是随机确定的,以及/或者所述第二对象的所述状态参量中的至少一个(例如显示位置、尺寸、颜色、形状中的至少一个)的初始值可以是随机确定的。具体地,例如,所述第一对象在显示屏幕上的初始显示位置和所述第二对象的显示位置中至少一个可以是随机确定的,所述第一对象在显示屏幕上的初始显示形态和所述第二对象的目标显示形态中至少一个可以是随机确定的。
图6A中示出了第一对象A以及所述第一对象A的目标对象B的显示位置的示例。所述第一对象A的第一状态参量为所述第一对象A的显示位置,根据所述第一动作属性的值来控制所述第一对象A的显示位置,在所述第一对象A的显示位置与所述目标显示位置(第二对象B的显示位置)重合的情况下,确定活体检测成功。在图6A中,不对所述第一对象A和所述目标对象B的其它状态参量进行判断,例如尺寸、颜色、形状等,而无论所述第一对象A和所述目标对象B的尺寸、颜色、形状是否相同。
图6B中示出了第一对象A以及所述第一对象A的目标对象B的显示位置的示例。在拍摄图像中首次检测到人脸时或者在初始化所述定时器时,将所述第一对象A的至少一部分以及所述第二对象B显示在所述显示屏幕上,所述第一对象A的至少一部分的初始显示位置是随机确定的。例如,所述第一对象A可以为被控虚拟人脸,所述第二对象B为目标虚拟人脸,根据所述第一动作属性的值来控制所述第一对象A的显示部分和显示位置,在所述第一对象A的显示位置与所述目标显示位置(第二对象B的显示位置)相同的情况下,确定活体检测成功。
图6C中示出了所述第一对象A以及所述第一对象A的目标对象B的尺寸的示例。所述第一对象A的第一状态参量为所述第一对象A的尺寸(颜色或形状),根据所述第一动作属性的值来控制所述第一对象A的尺寸(颜色 或形状),在所述第一对象A的尺寸(颜色或形状)与目标尺寸(目标颜色或目标形状)(第二对象B的尺寸(颜色或形状))相同的情况下,确定活体检测成功。
图6D中示出了第一对象A以及所述第一对象A的目标对象B的显示位置和尺寸的示例,其中,所述第一对象A的第一状态参量和第二状态参量分别为所述第一对象A的显示位置和显示尺寸,所述第二对象B的第一状态参量和第二状态参量分别为所述第二对象B的显示位置和显示尺寸。
在图6D所示的示例中,根据人脸动作控制所述第一对象A的显示位置和显示尺寸,具体地可以根据所述第一动作属性的值更新所述第一对象A的第一状态参量的值(显示位置坐标)并且根据所述第二动作属性的值更新所述第一对象A的第二状态参量的值(尺寸值),按照所述第一对象A的第一状态参量的值和第二状态参量的值在所述显示屏幕上显示所述第一对象A,在所述第一对象A与所述第二对象B重合的情况下,即在所述第一对象A的显示位置与所述第二对象B的显示位置重合以及所述第一对象A的显示尺寸与所述目标对象B的显示尺寸相同的情况下,确定所述拍摄图像中的人脸为活体人脸。
可选地,如图6A和6D所示,所述第一对象A和所述第二对象B的水平位置和垂直位置均不同,在此情况下,所述第一动作属性可以包括第一子动作属性和第二子动作属性,所述第一对象A的第一状态参量可以包括第一子状态参量和第二子状态参量,所述第一子状态参量的值为所述第一对象A的水平位置坐标,所述第二子状态参量的值为所述第一对象A的垂直位置坐标,可以根据所述第一子动作属性的值来更新所述第一对象A在所述显示屏幕上的水平位置坐标,并且根据所述第二子动作属性的值来更新所述第一对象A在所述显示屏幕上的垂直位置坐标。
例如,可以将所述第一动作属性定义为所述人脸在拍摄图像中的位置,并且根据人脸在拍摄图像中的位置坐标来更新所述第一对象A在所述显示屏幕上的显示位置。在此情况下,可以将所述第一子动作属性定义为人脸在拍摄图像中的水平位置并且将所述第二子动作属性定义为人脸在拍摄图像中的垂直位置,可以根据人脸在拍摄图像中的水平位置坐标来更新所述第一对象A在所述显示屏幕上的水平位置坐标,并且根据人脸在拍摄图像中的垂直位置坐标来更新所述第一对象A在所述显示屏幕上的垂直位置坐标。
再例如,可以将所述第一子动作属性定义为人脸偏转程度并且可以将所述第二子动作属性定义为人脸俯仰程度,然后可以根据人脸偏转程度的值来更新所述第一对象A在所述显示屏幕上的水平位置坐标,并且根据人脸俯仰程度的值来更新所述第一对象A在所述显示屏幕上的垂直位置坐标。
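For the face-position variant described above, the mapping from the face centre in the captured image to object A's display position could be as simple as the proportional sketch below (the yaw/pitch variant would be analogous); the image and screen dimensions are assumed inputs.

```python
def face_pos_to_screen(face_cx, face_cy, img_w, img_h, screen_w, screen_h):
    """Map the face centre (face_cx, face_cy) in the captured image to a screen position for object A."""
    x = face_cx / img_w * (screen_w - 1)     # horizontal image position -> horizontal screen coordinate
    y = face_cy / img_h * (screen_h - 1)     # vertical image position -> vertical screen coordinate
    return int(x), int(y)
```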
第三示例
在该第三示例中,所述虚拟对象包括第一对象和第二对象,所述第一对象为被控对象,所述第二对象为背景对象并且为所述第一对象的目标运动轨迹。所述人脸动作属性包括第一动作属性,所述第一对象的状态参量包括所述第一对象的第一状态参量,所述第一对象的第一状态参量为所述第一对象的显示位置,根据所述第一动作属性的值更新所述第一对象的第一状态参量的值,并且按照更新后的所述第一对象的第一状态参量的值控制所述第一对象在所述显示屏幕上的显示位置,相应地控制所述第一对象的运动轨迹。
可选地，所述虚拟对象还可以包括第三对象，在此情况下，所述第二对象和第三对象一起构成背景对象，所述第二对象为所述第一对象的目标运动轨迹，所述第三对象为所述第一对象的目标对象，并且所述背景对象包括所述第一对象的目标运动轨迹和目标对象。所述第三对象的状态参量包括所述第三对象的第一状态参量，所述第三对象的第一状态参量为所述第三对象的显示位置。
在图7A和图7B中示出了第一对象A、第二对象（目标运动轨迹）C、以及第三对象（目标对象）B。所述第一对象A的初始显示位置、所述目标对象B的显示位置、以及所述目标运动轨迹C中的至少一部分可以是随机确定的。
如图7A所示,在所述第一对象A的运动轨迹与所述目标运动轨迹C重合的情况下,确定活体检测成功。此外,在显示屏幕上显示一个目标对象B的情况下,所述目标对象B的状态参量可以包括所述目标对象B的第一状态参量,所述目标对象B的第一状态参量为所述目标对象B的显示位置。在此情况下,可选地,还可以在所述第一对象A的运动轨迹与所述目标运动轨迹C重合、并且所述第一对象A的显示位置与所述目标对象B的显示位置重合的情况下,确定活体检测成功。
如图7B所示,在显示屏幕上显示多个目标对象B(B1、B2、B3)以及 多段目标运动轨迹C(C1、C2、C3)的情况下,每个目标对象的状态参量可以包括该目标对象的第一状态参量,即显示位置。可以在所述第一对象A的运动轨迹依次与所述多段目标运动轨迹C中的至少一部分重合的情况下,确定活体检测成功。替换地,可以在所述第一对象A依次与所述多个目标对象中的至少一部分重合的情况下,确定活体检测成功。替换地,可以在所述第一对象A的运动轨迹依次与所述多段目标运动轨迹C中的至少一部分重合、并且所述第一对象A依次与所述多个目标对象B中的至少一部分重合的情况下,确定活体检测成功。
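One way to test the "reaches the target trajectory segments / target objects in order" condition of Fig. 7B is to walk the recorded positions of object A against an ordered list of waypoints, as in the sketch below; the 20-pixel tolerance is an assumed value.

```python
def passes_waypoints_in_order(actual_points, waypoints, tol=20):
    """Return True if the trajectory visits every waypoint (e.g. B1, B2, B3) in the given order."""
    if not waypoints:
        return True
    idx = 0
    for ax, ay in actual_points:
        wx, wy = waypoints[idx]
        if (ax - wx) ** 2 + (ay - wy) ** 2 <= tol ** 2:
            idx += 1                          # reached the current waypoint, move to the next one
            if idx == len(waypoints):
                return True
    return False
```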
如图7A和图7B所示,在沿着所述目标运动轨迹C运动时,所述第一对象A的运动方向可以包括水平运动方向和垂直运动方向。具体地,所述第一动作属性可以包括第一子动作属性和第二子动作属性,所述第一对象A的第一状态参量可以包括第一子状态参量和第二子状态参量,所述第一子状态参量的值为所述第一对象A的水平位置坐标,所述第二子状态参量的值为所述第一对象A的垂直位置坐标,可以根据所述第一子动作属性的值来更新所述第一对象A在所述显示屏幕上的水平位置坐标,并且根据所述第二子动作属性的值来更新所述第一对象A在所述显示屏幕上的垂直位置坐标。
可选地,所述人脸动作属性还包括第二动作属性,所述第一对象的状态参量还包括所述第一对象的第二状态参量,所述第一对象的第二状态参量为所述第一对象的显示形态(例如,尺寸、颜色、形状等),所述第三对象的状态参量包括所述第三对象的第二状态参量,所述第三对象的第二状态参量为所述第三对象的显示形态(例如,尺寸、颜色、形状等),根据所述第二动作属性的值更新所述第一对象的第二状态参量的值,并且按照更新后的所述第一对象的第一和第二状态参量的值在所述显示屏幕上显示所述第一对象。
尽管在图6A、6C、6D、7A和7B中将目标对象B示出为具有具体形状的对象，然而应了解本发明不限于此，还可以通过附图PCTCN2015082815-appb-000001所示的图形来表示目标对象B。
在该第一实施例中,在应用图5所示的活体检测方法的情况下,在步骤S550判断所述定时器是否超出所述预定定时时间,并且判断所述第一对象是否满足预定条件,例如所述第一对象是否达到目标显示位置和/或目标显示形态、所述第一对象是否与目标对象重合和/或与目标对象的显示形态相同、以及/或者所述第一对象是否实现目标运动轨迹。
在步骤S550确定所述定时器超出所述预定定时时间并且所述第一对象 尚未满足所述预定条件的情况下,在步骤S570确定没有检测到活体人脸。
在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象满足所述预定条件的情况下,在步骤S560确定检测到活体人脸。
另一方面,在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象不满足所述预定条件的情况下,返回到步骤S520。
第二实施例
在该第二实施例中,所述虚拟对象包括第一组对象,在开始执行根据本公开实施例的活体检测方法时将所述第一组对象显示在显示屏幕上,并且所述第一组对象包括一个或多个对象。根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示,其中,所述第一组对象中的所述至少一个对象为被控对象。所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
在下面的示例中,所述第一组对象包括第一对象和第二对象,所述第一对象为被控对象,所述第二对象为背景对象,所述背景对象为障碍对象,所述第一对象和所述障碍对象的初始显示位置和/或初始显示形态是随机的。所述障碍对象可以是静止的,或者可以是运动的。在所述障碍对象运动的情况下,其运动轨迹可以为直线或曲线,并且所述障碍对象可以沿垂直方向移动、沿水平方向移动、或者沿任意方向移动。可选地,所述障碍对象的运动轨迹和运动方向也是随机的。
所述人脸动作属性包括第一动作属性,所述第一对象的状态参量包括所述第一对象的第一状态参量,所述第一对象的第一状态参量为所述第一对象的显示位置,所述第二对象的状态参量包括所述第二对象的第一状态参量,所述第二对象的第一状态参量为所述第二对象的显示位置,根据所述第一动作属性的值更新所述第一对象的第一状态参量的值,并且按照更新后的所述第一对象的第一状态参量的值在所述显示屏幕上显示所述第一对象。
所述预定条件可以为:所述第一对象与所述第二对象不相遇,或者所述第一对象的显示位置与所述第二对象的显示位置之间的距离超过预定距离,所述预定距离可以根据所述第一对象的显示尺寸和所述第二对象的显示尺寸确定。可选地,所述预定条件可以为:在预定时间内所述第一对象与所述第二对象不相遇,或者所述第一对象的显示位置与所述第二对象的显示位置之 间的距离超过预定距离。
在图8A中示出了第一对象A以及障碍对象D的位置示例。所述障碍对象D可以在显示屏幕上不断移动,并且所述障碍对象D的移动方向可以是随机的,在所述第一对象A与所述障碍对象D不相遇的情况下,确定活体检测成功。优选地,在预定定时时间内所述第一对象A与所述障碍对象D一直不相遇的情况下,确定活体检测成功。替换地,在所述障碍对象D移出显示屏幕之前所述第一对象A与所述障碍对象D一直不相遇的情况下,确定活体检测成功。
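A collision test consistent with the size-derived predetermined distance mentioned above could be an axis-aligned overlap check, as sketched below with assumed (width, height) display sizes for object A and obstacle D.

```python
def objects_collide(pos_a, size_a, pos_d, size_d):
    """Return True if object A and obstacle D overlap, given centre positions and (width, height) sizes."""
    return (abs(pos_a[0] - pos_d[0]) * 2 < size_a[0] + size_d[0]
            and abs(pos_a[1] - pos_d[1]) * 2 < size_a[1] + size_d[1])
```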
可选地,所述第一组对象还包括第三对象,所述第一对象为被控对象,所述第二对象和第三对象构成背景对象,所述第二对象为障碍对象,所述第三对象是目标对象,所述障碍对象是随机显示的或随机产生的。所述第三对象的状态参量可以包括所述第三对象的第一状态参量,所述第三对象的第一状态参量为所述第三对象的显示位置。
所述预定条件可以为:所述第一对象与所述第二对象不相遇且所述第一对象与所述第三对象重合,或者所述第一对象的显示位置与所述第二对象的显示位置之间的距离超过预定距离且所述第一对象与所述第三对象重合,所述预定距离可以根据所述第一对象的显示尺寸和所述第二对象的显示尺寸确定。
在图8B中示出了第一对象A、第二对象(障碍对象)D以及第三对象(目标对象)B。所述障碍对象D可以在显示屏幕上不断移动,并且所述障碍对象D的移动方向可以是随机的,在所述第一对象A与所述障碍对象D不相遇且所述第一对象A与所述目标对象B重合的情况下,确定活体检测成功。优选地,在预定定时时间内所述第一对象A与所述障碍对象D不相遇且所述第一对象A的显示位置与所述目标对象B的显示位置重合的情况下,确定活体检测成功。
在该第二实施例中,在应用图5所示的活体检测方法的情况下,在步骤S550判断所述定时器是否超出所述预定定时时间,并且判断所述第一对象是否满足预定条件,例如所述预定条件为:所述第一对象不与所述障碍对象相遇(图8A)、所述第一对象与所述目标对象重合(图8B-1)、所述第一对象与所述目标对象重合且不与所述障碍对象相遇(图8B-2)。
针对图8A所示的示例,在步骤S550确定所述定时器超出所述预定定时 时间并且所述第一对象一直不与所述障碍对象相遇的情况下,在步骤S560确定检测到活体人脸;在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象一直不与所述障碍对象相遇的情况下,返回到步骤S520;另一方面,在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象与所述障碍对象相遇的情况下,在步骤S570确定没有检测到活体人脸。
针对图8B-1所示的示例,在步骤S550确定所述定时器超出所述预定定时时间并且所述第一对象未与所述目标对象重合的情况下,在步骤S570确定没有检测到活体人脸;在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象与所述目标对象重合的情况下,在步骤S560确定检测到活体人脸;另一方面,在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象未与所述目标对象重合的情况下,返回到步骤S520。
针对图8B-2所示的示例,在步骤S550确定所述定时器超出所述预定定时时间并且所述第一对象未与所述目标对象重合的情况下,或者在步骤S550确定所述定时器未超出所述预定定时时间并且所述第一对象与所述障碍对象相遇的情况下,在步骤S570确定没有检测到活体人脸;在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象与所述目标对象重合且一直不与所述障碍对象相遇的情况下,在步骤S560确定检测到活体人脸;另一方面,在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象未与所述目标对象重合且不与所述障碍对象相遇的情况下,返回到步骤S520。
在图8A和8B所示的示例中,所述第一动作属性可以包括第一子动作属性和第二子动作属性,所述第一对象A的第一状态参量可以包括第一子状态参量和第二子状态参量,所述第一子状态参量的值为所述第一对象A的水平位置坐标,所述第二子状态参量的值为所述第一对象A的垂直位置坐标,可以根据所述第一子动作属性的值来更新所述第一对象A在所述显示屏幕上的水平位置坐标,并且根据所述第二子动作属性的值来更新所述第一对象A在所述显示屏幕上的垂直位置坐标。
第三实施例
在该第三实施例中,所述虚拟对象包括第一组对象和第二组对象,在开 始执行根据本公开实施例的活体检测方法时将所述第一组对象显示在显示屏幕上,并且所述第一组对象包括一个或多个对象,在开始执行根据本公开实施例的活体检测方法时所述第二组对象尚未显示在显示屏幕上并且包括一个或多个对象。根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示,其中,所述第一组对象中的所述至少一个对象为被控对象。可选地,所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
可选地,根据所述第一组对象中至少一个对象的显示情况显示所述第二组对象中至少一个对象。替代地,可以根据所检测的人脸动作显示所述第二组对象中至少一个对象。可选地,所述第二组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
在该实施例中,所述第一组对象中每个对象的第一状态参量为该对象的显示位置,并且所述第二组对象中每个对象的第一和第二状态参量分别为该对象的显示位置和可视状态。
第一示例
在该第一示例中,根据所述第一组对象中至少一个对象的显示情况显示所述第二组对象中至少一个对象。
具体地,所述第一组对象包括第一对象和第二对象,所述第一对象为被控对象,所述第二对象为背景对象,所述第二组对象中的每个对象也为背景对象。所述预定条件可以为:所述第一组对象中的被控对象依序与所述第二对象以及所述第二组对象中的每个对象重合。
如图9A所示,所述第一组对象包括第一对象A和第二对象B1,所述第二组对象包括第三对象B2和第四对象B3,所述第一对象A为被控对象,所述第二对象B1、所述第三对象B2和第四对象B3均为背景对象,所述背景对象为目标对象。
所述人脸动作属性包括第一动作属性,所述第一对象A的状态参量包括所述第一对象A的第一状态参量,所述第二对象B1的状态参量包括所述第二对象B1的第一状态参量,所述第三对象B2的状态参量包括所述第三对象B2的第一状态参量,所述第四对象B3的状态参量包括所述第四对象B3的第一状态参量。
首先,根据所述第一动作属性的值更新所述第一对象A的第一状态参量 的值,并且按照更新后的所述第一对象A的第一状态参量的值在所述显示屏幕上显示所述第一对象A。
在所述第一对象A与所述第二对象B1的显示位置重合之后,将所述第二组对象中第三对象B2的第二状态参量的值设置为表示可视的值,以显示所述第二组对象中的第三对象B2。可选地,可以继续根据所述第一动作属性的值更新所述第一对象A的第一状态参量的值,并且按照更新后的所述第一对象A的第一状态参量的值在所述显示屏幕上显示所述第一对象A。替换地,所述人脸动作属性还可以包括与所述第一动作属性不同的第二动作属性,可以继续根据所述第二动作属性的值更新所述第一对象A的第一状态参量的值,并且按照更新后的所述第一对象A的第一状态参量的值在所述显示屏幕上显示所述第一对象A。
在所述第一对象A与所述第三对象B2的显示位置重合之后,将所述第二组对象中第四对象B3的第二状态参量的值设置为表示可视的值,以显示所述第二组对象中的第四对象B3。可选地,可以继续根据所述第一或第二动作属性的值更新所述第一对象A的第一状态参量的值,并且按照更新后的所述第一对象A的第一状态参量的值在所述显示屏幕上显示所述第一对象A。替换地,所述人脸动作属性还可以包括与所述第一和第二动作属性不同的第三动作属性,可以继续根据所述第三动作属性的值更新所述第一对象A的第一状态参量的值,并且按照更新后的所述第一对象A的第一状态参量的值在所述显示屏幕上显示所述第一对象A。
在所述第一对象A依次与所述第二对象B1、第三对象B2和第四对象B3重合的情况下,确定活体检测成功。可选地,在预定时间内在所述第一对象A依次与所述第二对象B1、第三对象B2和第四对象B3重合的情况下,确定活体检测成功。
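The progressive reveal of B2 and B3 and the in-order overlap test of this first example can be kept in a small piece of per-frame state, for instance as in the sketch below; the positions, the tolerance and the class interface are illustrative assumptions.

```python
class SequentialTargets:
    """Reveal the targets (B1, B2, B3, ...) one at a time as object A overlaps each in order."""

    def __init__(self, target_positions, tol=10):
        self.targets = list(target_positions)   # B1 first; later targets start hidden
        self.tol = tol
        self.next_idx = 0                        # index of the target currently shown and awaited

    def visible_targets(self):
        """Targets that should currently be drawn on the display screen."""
        return self.targets[: self.next_idx + 1]

    def update(self, pos_a):
        """Call once per frame with A's display position; return True once every target was reached."""
        if self.next_idx < len(self.targets):
            tx, ty = self.targets[self.next_idx]
            if abs(pos_a[0] - tx) <= self.tol and abs(pos_a[1] - ty) <= self.tol:
                self.next_idx += 1               # reveal the next target, if any remain
        return self.next_idx == len(self.targets)
```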
在应用图5所示的活体检测方法的情况下,在步骤S550判断所述定时器是否超出所述预定定时时间,并且判断所述第一对象A是否依次与第二对象B1、第三对象B2和第四对象B3重合。
在步骤S550确定所述定时器超出所述预定定时时间并且所述第一对象A与第二对象B1、第三对象B2和第四对象B3均未重合、或者未与第三对象B2和第四对象B3重合、或者未与第四对象B3重合的情况下，在步骤S570确定没有检测到活体人脸。
在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象A依次与第二对象B1、第三对象B2和第四对象B3重合的情况下,在步骤S560确定检测到活体人脸。
另一方面，在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象A与第二对象B1、第三对象B2和第四对象B3均未重合、或者未与第三对象B2和第四对象B3重合、或者未与第四对象B3重合的情况下，返回到步骤S520。
更具体地,在从步骤S550返回到步骤S520的情况下,还可以执行以下步骤:判断是否显示了所述第四对象,在确定尚未显示所述第四对象的情况下判断是否显示了所述第三对象,在确定尚未显示所述第三对象的情况下判断所述第一对象是否与所述第二对象重合,并且在确定所述第一对象与所述第二对象重合的情况下显示所述第三对象,然后再返回到步骤S520;在确定尚未显示所述第四对象但显示了所述第三对象的情况下判断所述第一对象是否与所述第三对象重合,并且在确定所述第一对象与所述第三对象重合的情况下显示所述第四对象,然后再返回到步骤S520。
可选地,可以设定所述第二组对象中包含的对象的数量,并且在所述第一对象A依次与所述第二对象B1以及所述第二组对象中的每个对象重合的情况下,确定活体检测成功。
第二示例
在该第二示例中,根据所述第一组对象中至少一个对象的显示情况显示所述第二组对象中至少一个对象,所述第二组对象中至少一部分对象为被控对象。
具体地,所述第一组对象包括第一对象和第二对象,所述第一对象为被控对象,所述第二对象为背景对象,所述第二组对象中的每个对象也为被控对象。所述预定条件可以为:所述第一对象和所述第二组对象中的每个对象依序与所述第二对象重合。
如图9B所示,所述第一组对象包括第一对象A1和第二对象B,所述第二组对象包括第三对象A2和第四对象A3,所述第一对象A1、所述第三对象A2和第四对象A3为被控对象,所述第二对象B为背景对象。
所述人脸动作属性包括第一动作属性,所述第一对象A1的状态参量包 括所述第一对象A1的第一状态参量,所述第二对象B的状态参量包括所述第二对象B的第一状态参量,所述第三对象A2的状态参量包括所述第三对象A2的第一状态参量,所述第四对象A3的状态参量包括所述第四对象A3的第一状态参量。
首先,根据所述第一动作属性的值更新所述第一对象A1的第一状态参量的值,并且按照更新后的所述第一对象A1的第一状态参量的值在所述显示屏幕上显示所述第一对象A1。
在所述第一对象A1与所述第二对象B的显示位置重合之后,将所述第二组对象中第三对象A2的第二状态参量的值设置为表示可视的值,以显示所述第二组对象中的第三对象A2。可选地,可以继续根据所述第一动作属性的值更新所述第三对象A2的第一状态参量的值,并且按照更新后的所述第三对象A2的第一状态参量的值在所述显示屏幕上显示所述第三对象A2,而所述第一对象A1的显示位置保持不变。替换地,所述人脸动作属性还可以包括与所述第一动作属性不同的第二动作属性,可以继续根据所述第二动作属性的值更新所述第三对象A2的第一状态参量的值,并且按照更新后的所述第三对象A2的第一状态参量的值在所述显示屏幕上显示所述第三对象A2。
在所述第三对象A2与所述第二对象B的显示位置重合之后,将所述第二组对象中第四对象A3的第二状态参量的值设置为表示可视的值,以显示所述第二组对象中的第四对象A3。可选地,可以继续根据所述第一或第二动作属性的值更新所述第四对象A3的第一状态参量的值,并且按照更新后的所述第四对象A3的第一状态参量的值在所述显示屏幕上显示所述第四对象A3,而所述第一和第二对象A1和A2的显示位置保持不变。替换地,所述人脸动作属性还可以包括与所述第一和第二动作属性不同的第三动作属性,可以继续根据所述第三动作属性的值更新所述第四对象A3的第一状态参量的值,并且按照更新后的所述第四对象A3的第一状态参量的值在所述显示屏幕上显示所述第四对象A3。
在所述第一对象A1、所述第三对象A2以及所述第四对象A3依次与所述第二对象B重合的情况下，确定活体检测成功。可选地，在预定时间内所述第一对象A1、所述第三对象A2以及所述第四对象A3依次与所述第二对象B重合的情况下，确定活体检测成功。
在应用图5所示的活体检测方法的情况下,在步骤S550判断所述定时器是否超出所述预定定时时间,并且判断所述第一对象A1、所述第三对象A2以及所述第四对象A3是否依次与所述第二对象B重合。
在步骤S550确定所述定时器超出所述预定定时时间并且所述第一对象A1未与所述第二对象B重合、或所述第三对象A2未与所述第二对象B重合、或所述第四对象A3未与所述第二对象B重合的情况下,在步骤S570确定没有检测到活体人脸。
在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象A1、所述第三对象A2以及所述第四对象A3依次与所述第二对象B重合的情况下,在步骤S560确定检测到活体人脸。
另一方面,在步骤S550确定所述定时器没有超出所述预定定时时间并且第一对象A1未与所述第二对象B重合、或所述第三对象A2未与所述第二对象B重合、或所述第四对象A3未与所述第二对象B重合的情况下,返回到步骤S520。
更具体地,在从步骤S550返回到步骤S520的情况下,还可以执行以下步骤:判断是否显示了所述第四对象,在确定尚未显示所述第四对象的情况下判断是否显示了所述第三对象,在确定尚未显示所述第三对象的情况下判断所述第一对象是否与所述第二对象重合,并且在确定所述第一对象与所述第二对象重合的情况下显示所述第三对象,然后再返回到步骤S520;在确定尚未显示所述第四对象但显示了所述第三对象的情况下判断所述第三对象是否与所述第二对象重合,并且在确定所述第三对象与所述第二对象重合的情况下显示所述第四对象,然后再返回到步骤S520。
可选地,可以设定所述第二组对象中包含的对象的数量,并且在所述第一对象A1、所述第二组对象中的每个对象依次与所述第二对象B重合的情况下,确定活体检测成功。
第三示例
在该第三示例中,根据所述第一组对象中至少一个对象的显示情况显示所述第二组对象中至少一个对象,所述第二组对象中至少一部分对象为被控对象。
具体地，如图9C所示，所述第一组对象包括第一对象A1和第二对象B1，所述第一对象A1为被控对象，所述第二对象B1为背景对象，所述第二组对象包括第三对象A2和第四对象B2、以及第五对象A3和第六对象B3，所述第三对象A2和第五对象A3均为被控对象，所述第四对象B2和第六对象B3均为背景对象。所述预定条件可以为：所述第二对象B1与所述第一对象A1、所述第四对象B2与所述第三对象A2、以及所述第六对象B3与所述第五对象A3重合。
所述人脸动作属性包括第一动作属性。首先,根据所述第一动作属性的值更新所述第一对象A1的第一状态参量的值,并且按照更新后的所述第一对象A1的第一状态参量的值在所述显示屏幕上显示所述第一对象A1。
在所述第一对象A1与所述第二对象B1的显示位置重合之后,显示所述第二组对象中的第三对象A2和第四对象B2。可选地,可以继续根据所述第一动作属性的值更新所述第三对象A2的第一状态参量的值,并且按照更新后的所述第三对象A2的第一状态参量的值在所述显示屏幕上显示所述第三对象A2。替换地,所述人脸动作属性还可以包括与所述第一动作属性不同的第二动作属性,可以继续根据所述第二动作属性的值更新所述第三对象A2的第一状态参量的值,并且按照更新后的所述第三对象A2的第一状态参量的值在所述显示屏幕上显示所述第三对象A2。
在所述第三对象A2与所述第四对象B2的显示位置重合之后,显示所述第二组对象中的第五对象A3。可选地,可以继续根据所述第一或第二动作属性的值更新所述第五对象A3的第一状态参量的值,并且按照更新后的所述第五对象A3的第一状态参量的值在所述显示屏幕上显示所述第五对象A3。替换地,所述人脸动作属性还可以包括与所述第一和第二动作属性不同的第三动作属性,可以继续根据所述第三动作属性的值更新所述第五对象A3的第一状态参量的值,并且按照更新后的所述第五对象A3的第一状态参量的值在所述显示屏幕上显示所述第五对象A3。
在所述第一对象A1、所述第三对象A2以及所述第五对象A3依次与所述第二对象B1、第四对象B2以及第六对象B3重合的情况下,确定活体检测成功。可选地,在预定时间内在所述第一对象A1、所述第三对象A2以及所述第五对象A3依次与所述第二对象B1、第四对象B2以及第六对象B3重合的情况下,确定活体检测成功。
在应用图5所示的活体检测方法的情况下,在步骤S550判断所述定时器 是否超出所述预定定时时间,并且判断第一对象A1、所述第三对象A2以及所述第五对象A3是否依次与所述第二对象B1、第四对象B2以及第六对象B3重合。
在步骤S550确定所述定时器超出所述预定定时时间并且所述第五对象A3未与第六对象B3重合、或所述第三对象A2未与第四对象B2重合、或所述第一对象A1未与第二对象B1重合的情况下,在步骤S570确定没有检测到活体人脸。
在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第一对象A1、所述第三对象A2以及所述第五对象A3依次与所述第二对象B1、第四对象B2以及第六对象B3重合的情况下,在步骤S560确定检测到活体人脸。
另一方面,在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第五对象A3未与第六对象B3重合、或所述第三对象A2未与第四对象B2重合、或所述第一对象A1未与第二对象B1重合的情况下,返回到步骤S520。
更具体地，在从步骤S550返回到步骤S520的情况下，还可以执行以下步骤：判断是否显示了所述第五和第六对象，在确定尚未显示所述第五和第六对象的情况下判断是否显示了所述第三和第四对象，在确定尚未显示所述第三和第四对象的情况下判断所述第一对象是否与所述第二对象重合，并且在确定所述第一对象与所述第二对象重合的情况下显示所述第三和第四对象，然后再返回到步骤S520；在确定尚未显示所述第五和第六对象但显示了所述第三和第四对象的情况下判断所述第三对象是否与所述第四对象重合，并且在确定所述第三对象与所述第四对象重合的情况下显示所述第五和第六对象，然后再返回到步骤S520。
可选地,可以设定所述第二组对象中包含的对象对的数量,其中对象A2和对象B2可以被视为一个对象对,并且在所述每个对象Ai依次与其对应的对象Bi重合的情况下,确定活体检测成功。可选地,在预定时间内在所述每个对象Ai依次与其对应的对象Bi重合的情况下,确定活体检测成功。
第四示例
在该第四示例中,根据所检测的人脸动作显示所述第二组对象中至少一 个对象。
具体地，如图9D所示，所述第一组对象包括第一对象A1和第二对象B，所述第一对象A1为被控对象，所述第二对象B为背景对象，所述第二组对象包括第三对象A2，所述第二对象B为所述第一对象A1和所述第三对象A2的目标对象。所述预定条件可以为：所述第三对象A2与所述第二对象B重合，或者所述第一和第三对象A1和A2依次与所述第二对象B重合。
所述第一对象A1以及所述目标对象B中至少一个的状态参量的值可以是随机确定的。例如,所述第一对象A1的显示位置是随机确定的,以及/或者所述目标对象B的显示位置是随机确定的。
所述人脸动作属性包括第一动作属性和第二动作属性，根据所述第一动作属性的值更新所述第一对象的显示位置坐标，根据所述第二动作属性的值更新所述第三对象的可视状态值，例如，可视状态值为0指示不可视，即不显示所述第三对象；可视状态值为1指示可视，即显示所述第三对象。可选地，预设条件可以为：所述第三对象A2的显示位置与所述第二对象B的显示位置重合。替代地，预设条件可以为：所述第一对象A1和第三对象A2的显示位置与所述目标对象B的显示位置重合。
具体地，初始显示所述第一对象A1并且不显示所述第三对象A2，根据所述第一动作属性改变所述第一对象A1的显示位置，根据所述第二动作属性改变所述第三对象A2的可视状态，并且根据所述第二动作属性值发生改变时所述第一对象A1的显示位置确定所述第三对象A2的显示位置。例如，所述第三对象A2的显示位置与所述第二动作属性值发生改变时所述第一对象A1的显示位置相同，在所述第三对象A2的显示位置与所述目标对象B的显示位置重合的情况下，确定活体检测成功。
针对图9D所示的示例，在活体检测中，仅在以下场景下才确定活体检测成功，即：根据所述第一动作属性改变所述第一对象A1的显示位置，将所述第一对象A1移动到所述目标对象B处，然后在所述第一对象A1位于所述目标对象B处时检测到所述第二动作属性的改变，并据此在所述目标对象B处显示所述第三对象A2。具体地，例如所述第一对象A1为瞄准器，所述第二对象B为靶心，所述第三对象A2为子弹。
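For the sight / bullseye / bullet illustration just given, a per-frame check might look like the sketch below, where the second action attribute is assumed, purely for illustration, to be a mouth-open event, and the 10-pixel tolerance is arbitrary.

```python
def bullet_hits_target(crosshair_pos, fire_event, target_pos, tol=10):
    """When the firing event (second action attribute) occurs, the bullet A2 appears at the
    crosshair A1's current position; liveness succeeds only if that position overlaps target B."""
    if not fire_event:
        return False                      # no bullet spawned on this frame
    bx, by = crosshair_pos                # the bullet inherits the crosshair position
    tx, ty = target_pos
    return abs(bx - tx) <= tol and abs(by - ty) <= tol
```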
在应用图5所示的活体检测方法的情况下,在步骤S550判断所述定时器是否超出所述预定定时时间,并且判断所述第三对象A2是否与所述第二对 象B重合。
在步骤S550确定所述定时器超出所述预定定时时间并且所述第三对象A2尚未显示、或所述第三对象A2已经显示但未与第二对象B重合的情况下,在步骤S570确定没有检测到活体人脸。
在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第三对象A2与所述第二对象B重合的情况下,在步骤S560确定检测到活体人脸。
另一方面,在步骤S550确定所述定时器没有超出所述预定定时时间并且所述第三对象A2尚未显示的情况下,返回到步骤S520。
第五示例
在该第五示例中,根据所检测的人脸动作显示所述第二组对象中至少一个对象,所述第二组对象中至少一部分对象为被控对象。
如图9E所示,所述第一组对象包括第一对象A1和第二对象B1,所述第一对象A1为被控对象,所述第二对象B1为背景对象,所述第二组对象包括第三对象A2和第四对象B2,所述第三对象A2为被控对象,所述第四对象B2为背景对象。所述预定条件可以为:第一对象A1和第二对象B1重合以及第三对象A2和第四对象B2重合。
所述第一对象A1、第二对象B1、第三对象A2和第四对象B2中至少一个的状态参量的值可以是随机确定的。例如,所述第一对象A1、第二对象B1、第三对象A2和第四对象B2的显示位置是随机确定的。
所述人脸动作属性包括第一动作属性和第二动作属性。根据所述第一动作属性的值更新所述第一对象A1的显示位置坐标,根据所述第二动作属性的值更新所述第三和第四对象的可视状态值,例如,可视状态值为0指示不可视,即不显示所述第三和第四对象;可视状态值为1指示可视,即显示所述第三和第四对象。
此外,还可以根据所述第一动作属性的值更新所述第三对象的显示位置坐标。可选地,所述人脸动作属性还包括与所述第一动作属性不同的第三动作属性,根据所述第三动作属性的值更新所述第三对象的显示位置坐标。
具体地，初始显示所述第一对象A1和第二对象B1但不显示所述第三对象A2和第四对象B2，根据所述第一动作属性改变所述第一对象A1的显示位置，根据所述第二动作属性改变所述第三对象A2和第四对象B2的可视状态。可以根据所述第二动作属性值发生改变时所述第一对象A1的显示位置确定所述第三对象A2的初始显示位置，或者可以随机地确定所述第三对象A2的初始显示位置。在该示例中，仅在以下场景下才确定活体检测成功，即：根据所述第一动作属性改变所述第一对象A1的显示位置，将所述第一对象A1移动到所述第二对象B1处，然后在所述第一对象A1位于所述第二对象B1处时检测到所述第二动作属性的改变，并据此在随机位置或者根据所述第二对象B1的显示位置所确定的显示位置处显示所述第三对象A2，并随机地显示所述第四对象B2，然后根据所述第一动作属性或与第一动作属性不同的第三动作属性改变所述第三对象A2的显示位置，直至将所述第三对象A2移动到所述第四对象B2处。
如前所述,所述第一动作属性可以包括第一子动作属性和第二子动作属性,所述第一对象A1的第一状态参量可以包括第一子状态参量和第二子状态参量,所述第一对象A1的所述第一子状态参量的值和所述第二子状态参量的值分别为所述第一对象A的水平位置坐标和垂直位置坐标,可以根据所述第一子动作属性的值和所述第二子动作属性的值来分别更新所述第一对象A在所述显示屏幕上的水平位置坐标和垂直位置坐标。
此外，所述第三动作属性也可以包括第三子动作属性和第四子动作属性，所述第三对象A2的第一状态参量可以包括第一子状态参量和第二子状态参量，所述第三对象A2的第一子状态参量的值和第二子状态参量的值分别为所述第三对象A2的水平位置坐标和垂直位置坐标，可以根据所述第三子动作属性的值和所述第四子动作属性的值来分别更新所述第三对象A2在所述显示屏幕上的水平位置坐标和垂直位置坐标。
例如,可以将所述第一子动作属性和第二子动作属性分别定义为人脸偏转程度和人脸俯仰程度,或者可以将所述第三子动作属性和第四子动作属性分别定义为眼睛左右转动程度和眼睛上下转动程度。
第四实施例
在该第四实施例中,所述虚拟对象包括第一组对象和第二组对象,在开始执行根据本公开实施例的活体检测方法时将所述第一组对象显示在显示屏幕上,并且所述第一组对象包括一个或多个对象,在开始执行根据本公开实施例的活体检测方法时所述第二组对象尚未显示在显示屏幕上并且包括一个 或多个对象。根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示,其中,所述第一组对象中的所述至少一个对象为被控对象。所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
可选地,根据所述第一组对象中至少一个对象的显示情况显示所述第二组对象中至少一个对象。替代地,可以根据所检测的人脸动作显示所述第二组对象中至少一个对象。可选地,所述第二组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
在该实施例中,所述第一组对象中每个对象的第一状态参量为该对象的显示位置,并且所述第二组对象中每个对象的第一和第二状态参量分别为该对象的显示位置和可视状态。
在本实施例中,所述第一组对象包括第一对象和第二对象,所述第二组对象包括多个对象,所述第一对象为被控对象,所述第二对象以及所述第二组对象为背景对象,所述背景对象为障碍对象,所述第一对象和所述障碍对象的初始显示位置和/或初始显示形态是随机的。在所述障碍对象运动的情况下,其运动轨迹可以为直线或曲线,并且所述障碍对象可以沿垂直方向移动、沿水平方向移动、或者沿任意方向移动。可选地,所述障碍对象的运动轨迹和运动方向也是随机的。
所述人脸动作属性包括第一动作属性,所述第一对象的状态参量包括所述第一对象的第一状态参量,所述第一对象的第一状态参量为所述第一对象的显示位置,根据所述第一动作属性的值更新所述第一对象的第一状态参量的值,并且按照更新后的所述第一对象的第一状态参量的值在所述显示屏幕上显示所述第一对象。
所述预定条件可以为:所述第一对象与所述障碍对象均不相遇,或者所述第一对象的显示位置与所述第二对象的显示位置之间的距离超过预定距离,所述预定距离可以根据所述第一对象的显示尺寸和所述第二对象的显示尺寸确定。可选地,所述预定条件可以为:在预定时间内所述第一对象与所述障碍对象不相遇、所述第一对象与预定数量的障碍对象不相遇、或在预定时间内所述第一对象与预定数量的障碍对象不相遇。
第一示例
在该第一示例中,根据所述第一组对象中至少一个对象的显示情况显示 所述第二组对象中至少一个对象。所述第二组对象中对象是非被控对象,即背景对象,所述背景对象为障碍对象。
在图10A中示出了第一对象A以及障碍对象D的位置示例。所述障碍对象D可以在显示屏幕上不断移动,并且所述障碍对象D的移动方向可以是随机的。
在所述障碍对象D移动出所述显示屏幕时,显示所述第二组对象中的障碍对象D2,而在所述障碍对象D2移出所述显示屏幕时,显示所述第二组对象中的障碍对象D3。依此类推,直至达到预定定时时间,或者显示了预定数量的障碍对象。
可选地,在预定定时时间内所述第一对象A与所述障碍对象一直不相遇的情况下,确定活体检测成功。替换地,所述第一对象A与预定数量的障碍对象不相遇的情况下,确定活体检测成功。替换地,在预定定时时间内所述第一对象A与预定数量的障碍对象不相遇的情况下,确定活体检测成功。
可选地,所述第一组对象还包括第三对象,所述第二对象和第三对象构成背景对象,所述第三对象是目标对象。所述预定条件可以为:在预定定时时间内所述第一对象与所述障碍对象一直不相遇且所述第一对象与所述第三对象重合。
在图10B中示出了第一组对象中的第一对象A、第二对象(障碍对象)D以及第三对象(目标对象)B、以及第二组对象中的障碍对象D1和D2。所述障碍对象可以在显示屏幕上不断移动,并且所述障碍对象D的移动方向可以是随机的,在所述第一对象A与所述障碍对象均不相遇且所述第一对象A与所述目标对象B重合的情况下,确定活体检测成功。优选地,在预定定时时间内所述第一对象A与所述障碍对象均不相遇且所述第一对象A的显示位置与所述目标对象B的显示位置重合的情况下,确定活体检测成功。
例如,在所述预定条件为所述第一对象A与预定数量的障碍对象不相遇的情况下,在步骤S550可以判断所述第一对象A与当前显示的障碍对象是否相遇、当前显示的障碍对象是否移出显示屏幕以及已经显示的障碍对象的数量是否达到预定数量。在步骤S550确定所述第一对象A与当前显示的障碍对象不相遇、当前显示的障碍对象移出显示屏幕以及已经显示的障碍对象的数量未达到预定数量的情况下,在显示屏幕上显示新的障碍对象,并且返回步骤S520;而在步骤S550确定所述第一对象A与当前显示的障碍对象不 相遇以及当前显示的障碍对象仍在显示屏幕显示的情况下,返回步骤S520。在步骤S550确定所述第一对象A与当前显示的障碍对象相遇的情况下,在步骤S570确定没有检测到活体人脸。在步骤S550确定所述第一对象A与当前显示的障碍对象不相遇、当前显示的障碍对象移出显示屏幕以及已经显示的障碍对象的数量达到预定数量的情况下,在步骤S560确定检测到活体人脸。
第二示例
在该第二示例中,根据所述第一组对象中至少一个对象的显示情况显示所述第二组对象中至少一个对象。可选地,还根据所述第二组对象中至少一个对象的显示情况显示所述第二组对象中其它至少一个对象。所述第二组对象中对象是非被控对象,即背景对象,所述背景对象为障碍对象。
具体地,所述第一组对象包括第一对象和第二对象,根据所检测的人脸动作更新所述第一对象和第二对象在显示屏幕上的显示。具体地,所述第一对象的垂直显示位置固定,根据所检测的人脸动作更新所述第一对象的水平显示位置以及所述第二对象的水平和垂直显示位置。
可选地,还根据所述第二对象的显示情况来显示所述第二组对象中的障碍对象,并且还可以根据第二组对象中障碍对象的显示情况来显示所述第二组对象中新的障碍对象。具体地,根据所检测的人脸动作更新所述第一对象的水平显示位置以及所述第二组对象中障碍对象的水平和垂直显示位置。
所述人脸动作属性可以包括第一动作属性和第二动作属性,所述第一对象的状态参量包括所述第一对象的第一和第二状态参量,所述第一对象的第一和第二状态参量分别为所述第一对象的行进参量和水平位置,所述行进参量可以为运动速度、行进距离等。例如,在所述行进参量为运动速度的情况下,首先,根据所述第一动作属性的值更新第一对象的运动速度的值,并且根据所述第二动作属性的值更新第一对象的水平位置坐标。其次,根据所述第一对象A的运动速度的值、所述第一对象A与所述障碍对象D之间的距离(可以包括水平距离和垂直距离)、以及所述第一对象A的水平位置坐标,确定所述障碍对象D和所述第一对象A的显示位置。例如,在所述第一对象的目标前进方向为道路延伸方向(如图10C中道路变窄的方向)、以及所述第一对象A的垂直显示位置保持不变的情况下,可以根据所述第一对象A的运 动速度的值以及所述第一对象A与所述障碍对象D之间的垂直距离,确定是否继续显示所述障碍对象D、以及所述障碍对象D的显示位置,并且可以根据所述第一对象A的水平位置坐标确定所述第一对象A的显示位置。
具体地,例如,所述第一对象A可以为汽车,所述障碍对象D可以是在汽车前进的道路上随机产生的石头,所述第一动作属性可以为人脸俯仰程度,所述第二动作属性可以为人脸偏转程度,所述第一对象A的第一状态参量和第二状态参量可以分别为所述第一对象的运动速度和水平位置。例如,可以将人脸平视状态对应于运动速度V0,将人脸30度或45度仰视状态对应于最高运动速度VH,将人脸30度或45度俯视状态对应于最低运动速度VL,根据人脸俯仰程度的值(例如,人脸俯仰角度)确定第一对象的运动速度。例如,可以将人脸正视状态对应于中间位置P0,将人脸30度或45度左偏状态对应于左侧边缘位置PL,将人脸30度或45度右偏状态对应于右侧边缘位置PR,根据人脸偏转程度的值(例如,人脸偏转角度)确定第一对象的水平位置坐标。
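The pitch-to-speed and yaw-to-horizontal-position mappings of the car example could be linear interpolations between the stated endpoints; the speed values, screen width and the 30-degree range below are illustrative assumptions rather than values from the disclosure.

```python
def pitch_to_speed(pitch_deg, v_low=2.0, v_mid=6.0, v_high=12.0, max_deg=30.0):
    """Looking down max_deg gives v_low (VL), level gives v_mid (V0), up max_deg gives v_high (VH)."""
    p = max(-max_deg, min(max_deg, pitch_deg)) / max_deg   # normalize to -1 .. 1
    if p >= 0:
        return v_mid + p * (v_high - v_mid)
    return v_mid + p * (v_mid - v_low)

def yaw_to_lane_x(yaw_deg, x_left=0.0, x_right=640.0, max_deg=30.0):
    """max_deg left maps to the left edge PL, 0 to the centre P0, max_deg right to the right edge PR."""
    y = max(-max_deg, min(max_deg, yaw_deg)) / max_deg     # normalize to -1 .. 1
    return (y + 1.0) / 2.0 * (x_right - x_left) + x_left
```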
此外,所述第一对象的状态参量还包括所述第一对象的第三状态参量,所述第三状态参量可以为所述第一对象的行进距离。可选地,在所述第一对象与障碍对象不相遇并且所述第一对象在预定时间内的行进距离达到预设距离值的情况下,确定活体检测成功。
上面已经在第一到第四实施例中描述了根据本公开实施例的活体检测方法的具体实现方式,应了解,可以根据需要组合第一到第四实施例中的各种具体操作。
接下来,将参考图11和图12来描述根据本公开实施例的活体检测设备。所述活体检测设备可以是集成了人脸图像采集装置的电子设备,诸如智能手机、平板电脑、个人计算机、基于人脸识别的身份识别设备等。替代地,所述活体检测设备还可以包括分离的人脸图像采集装置和检测处理装置,所述检测处理装置可以从所述人脸图像采集装置接收拍摄图像,并且依据所接收的拍摄图像进行活体检测。所述检测处理装置可以为服务器、智能手机、平板电脑、个人计算机、基于人脸识别的身份识别设备等。
由于该活体检测设备执行各个操作的细节与上文中针对图2-4描述的活体检测方法的细节基本相同,因此为了避免重复,在下文中仅对所述活体检测设备进行简要的描述,而省略对相同细节的描述。
如图11所示,根据本公开实施例的活体检测设备1100包括人脸动作检测装置1110、虚拟对象控制装置1120、以及活体判断装置1130。人脸动作检测装置1110、虚拟对象控制装置1120、以及活体判断装置1130可以由图1所示的处理器102实现。
如图12所示,根据本公开实施例的活体检测设备1200包括图像采集装置1240、人脸动作检测装置1110、虚拟对象控制装置1120、活体判断装置1130、显示装置1250以及存储装置1260。图像采集装置1240可以由图1所示的图像采集装置110实现,人脸动作检测装置1110、虚拟对象控制装置1120、以及活体判断装置1130可以由图1所示的处理器102实现,显示装置1250可以由图1所示的输出装置108实现,存储装置1260可以由图1所示的存储装置104实现。
可以利用活体检测设备1200中的图像采集装置1240或者独立于所述活体检测设备1100或1200的可以向所述活体检测设备1100或1200传送图像的其它图像采集装置,采集预定拍摄范围的灰度或彩色图像作为拍摄图像,所述拍摄图像可以是照片,也可以是视频中的一帧。所述图像采集设备可以是智能电话的摄像头、平板电脑的摄像头、个人计算机的摄像头、或者甚至可以是网络摄像头。
人脸动作检测装置1110被配置为从拍摄图像中检测人脸动作。
如图13所示,人脸动作检测装置1110可以包括关键点定位装置1310、纹理信息提取装置1320、以及动作属性确定装置1330。
所述关键点定位装置1310被配置为在所述拍摄图像中定位人脸关键点。作为示例,所述关键点定位装置1310可以首先确定所获取的图像中是否包含人脸,在检测到人脸的情况下定位出人脸关键点。所述关键点定位装置1310操作的细节与步骤S310中描述的细节相同,在此不再赘述。
所述纹理信息提取装置1320被配置为从所述拍摄图像中提取图像纹理信息。作为示例,所述纹理信息提取装置1320可以根据所述拍摄图像中的像素信息,例如像素点的亮度信息,提取人脸的精细信息,例如眼球位置信息、口型信息、微表情信息等等。
所述动作属性确定装置1330基于所定位的人脸关键点以及/或者所述图像纹理信息,获得人脸动作属性的值。基于所定位的人脸关键点获得的所述人脸动作属性可以例如包括但不限于眼睛睁闭程度、嘴巴张闭程度、人脸俯 仰程度、人脸偏转程度、人脸与摄像头的距离等。基于所述图像纹理信息获得的所述人脸动作属性可以包括但不限于眼球左右偏转程度、眼球上下偏转程度等等。所述动作属性确定装置1330操作的细节与步骤S330中描述的细节相同,在此不再赘述。
所述虚拟对象控制装置1120被配置为根据所检测的人脸动作控制在所述显示装置1250上显示虚拟对象。
作为示例,可以根据所检测的人脸动作控制改变在显示屏幕上显示的虚拟对象的状态。在此情况下,所述虚拟对象可以包括第一组对象,在初始状态下所述第一组对象已经显示在显示屏幕上并且可以包括一个或多个对象。在该示例中,根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示。所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。具体地,例如可以改变所述虚拟对象的运动状态、显示位置、尺寸大小、形状、颜色等。
可选地,可以根据所检测的人脸动作控制在显示屏幕上显示新的虚拟对象。在此情况下,所述虚拟对象还可以包括第二组对象,在初始状态下所述第二组对象尚未显示在显示屏幕上并且可以包括一个或多个对象。在该示例中,根据所检测的人脸动作显示所述第二组对象中至少一个对象。所述第二组对象的所述至少一个对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
如图14所示,所述虚拟对象控制装置1120可以包括人脸动作映射装置1410、以及虚拟对象呈现装置1420。
所述人脸动作映射装置1410根据所述人脸动作属性的值来更新所述虚拟对象的状态参量的值。
具体地,可以将一种人脸动作属性映射为虚拟对象的某一状态参量。例如,可以将用户眼睛睁闭程度或嘴巴张闭程度映射为虚拟对象的尺寸,并且根据用户眼睛睁闭程度或嘴巴张闭程度的值来更新虚拟对象的尺寸大小。再例如,可以将用户人脸俯仰程度映射为虚拟对象在显示屏幕上的垂直显示位置,并且根据用户人脸俯仰程度的值来更新虚拟对象在显示屏幕上的垂直显示位置。可选地,人脸动作属性与虚拟对象的状态参量之间的映射关系可以是预先设定的。
例如,所述人脸动作属性可以包括至少一个动作属性,所述虚拟对象的 状态参量包括至少一个状态参量,所述虚拟对象可以包括至少一个虚拟对象。一个运动属性可以仅与一个状态参量对应,或者一个运动属性可以按照时间顺序依次与多个状态参量对应。
所述虚拟对象呈现装置1420按照更新后的所述虚拟对象的状态参量的值呈现所述虚拟对象。
具体地,所述虚拟对象呈现装置1420可以更新第一组对象中至少一个对象的显示。有利地,所述虚拟对象呈现装置1420还可以显示新的虚拟对象,即第二组对象中的虚拟对象。有利地,所述虚拟对象呈现装置1420还可以更新第二组对象中至少一个对象的显示。
所述活体判断装置1130被配置为判断所述虚拟对象是否满足预定条件,并且在判断所述虚拟对象满足预定条件的情况下,确定所述拍摄图像中的人脸为活体人脸。所述预定条件为与所述虚拟对象的形态和/或运动有关的条件,其中所述预定条件是预先确定的或随机产生的。
具体地,可以判断所述虚拟对象的形态是否满足与形态有关的条件,例如,所述虚拟对象的形态可以包括尺寸大小、形状、颜色等;可以判断所述虚拟对象的与运动有关的参量是否满足与运动有关的条件,例如,所述虚拟对象的与运动有关的参量可以包括位置、运动轨迹、运动速度、运动方向等,所述与运动有关的条件可以包括所述虚拟对象的预定显示位置、所述虚拟对象的预定运动轨迹、所述虚拟对象的显示位置需要避开的预定显示位置等。可以根据所述虚拟对象的实际运动轨迹判断所述虚拟对象是否完成了预定任务,所述预定任务可以例如包括按照预定运动轨迹移动、绕开障碍物移动等。
例如,在所述虚拟对象包括第一对象的情况下,所述预定条件可以被设定为:所述第一对象达到目标显示位置、所述第一对象达到目标显示尺寸、所述第一对象达到目标形状、以及/或者所述第一对象达到目标显示颜色等等。
可选地，所述第一组对象还包括第二对象，所述第一对象和所述第二对象中至少一个的初始显示位置和/或初始显示形态是预先确定的或随机确定的。作为示例，所述第一对象可以为被控对象，所述第二对象可以为背景对象，可选地，所述第二对象可以作为所述第一对象的目标对象，并且所述预定条件可以被设定为：所述第一对象与所述目标对象重叠。替换地，所述背景对象可以为所述第一对象的目标运动轨迹，所述目标运动轨迹可以是随机产生的，所述预定条件可以被设定为：所述第一对象的实际运动轨迹与所述目标运动轨迹相符。替换地，所述背景对象可以为障碍对象，所述障碍对象可以是随机显示的，其显示位置和显示时间都是随机的，所述预定条件可以被设定为：所述第一对象不与所述障碍对象相遇，即所述第一对象绕开所述障碍对象。
再例如,在所述虚拟对象还包括第二组对象且所述第二组对象包括作为被控对象的第三对象的情况下,所述预定条件还可以设定为:所述第一和/或第三对象达到相应的目标显示位置、所述第一和/或第三对象达到相应的目标显示尺寸、所述第一和/或第三对象达到相应的目标形状、以及/或者所述第一和/或第三对象达到相应的目标显示颜色等等。
再例如,在所述虚拟对象包括第一对象和第二对象的情况下,所述预定条件可以设定为:所述第一对象达到目标显示位置、所述第一对象达到目标显示尺寸、所述第一对象达到目标形状、以及/或者所述虚拟对象达到目标显示颜色等等,以及所述第二对象达到目标显示位置、所述第二对象达到目标显示尺寸、所述第二对象达到目标形状、以及/或者所述第二对象达到目标显示颜色等等。
所述人脸动作映射装置1410以及所述虚拟对象呈现装置1420可以执行上述第一到第五实施例中的各种操作,在此不再赘述。
此外,根据本公开实施例的活体检测装置1100和1200还可以包括定时器,用于对预定定时时间进行计时。所述定时器也可以由处理器102实现。可以根据用户输入初始化定时器,或者可以在拍摄图像中检测到人脸时自动初始化定时器,或者可以在拍摄图像中检测到人脸预定动作时自动初始化定时器。在此情况下,所述活体判断装置1130被配置为判断在所述预定定时时间内所述虚拟对象是否满足预定条件,并且在判断在所述预定定时时间内所述虚拟对象满足预定条件的情况下,确定所述拍摄图像中的人脸为活体人脸。
所述存储装置1260用于存储所述拍摄图像。此外,所述存储装置1260还用于存储所述虚拟对象的状态参量及状态参量值。此外,所述存储装置1260还用于存储所述虚拟对象呈现装置1420所呈现的虚拟对象并且存储要在显示装置1250上显示的背景图像等。
此外,所述存储装置1260可以存储计算机程序指令,所述计算机程序指令在被所述处理器102运行时可以实现根据本公开实施例的活体检测方法, 并且/或者可以实现根据本公开实施例的活体检测设备中的关键点定位装置1310、纹理信息提取装置1320、以及动作属性确定装置1330。
此外,根据本公开实施例,还提供了一种计算机程序产品,其包括计算机可读存储介质,在所述计算机可读存储介质上存储了计算机程序指令。所述计算机程序指令在被计算机运行时可以实现根据本公开实施例的活体检测方法,并且/或者可以实现根据本公开实施例的活体检测设备中的关键点定位装置、纹理信息提取装置、以及动作属性确定装置的全部或部分功能。
根据本公开实施例的活体检测方法及设备、以及计算机程序产品,通过基于人脸动作控制虚拟对象显示并根据虚拟对象显示进行活体检测,可以不依赖于特殊的硬件设备来有效地防范照片、视频、3D人脸模型或者面具等多种方式的攻击,从而可以降低活体检测的成本。更进一步,通过识别人脸动作中的多个动作属性,可以控制虚拟对象的多个状态参量,可以使得所述虚拟对象在多个方面改变显示状态,例如使得所述虚拟对象执行复杂的预定动作、或者使得所述虚拟对象实现与初始显示效果有很大不同的显示效果。因此,可以进一步提高活体检测的准确度,并且进而可以提高应用根据本发明实施例的活体检测方法及设备、以及计算机程序产品的应用场景的安全性。
所述计算机可读存储介质可以是一个或多个计算机可读存储介质的任意组合。所述计算机可读存储介质例如可以包括智能电话的存储卡、平板电脑的存储部件、个人计算机的硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)、便携式紧致盘只读存储器(CD-ROM)、USB存储器、或者上述存储介质的任意组合。
在上面详细描述的本发明的示例实施例仅仅是说明性的,而不是限制性的。本领域技术人员应该理解,在不脱离本发明的原理和精神的情况下,可对这些实施例进行各种修改,组合或子组合,并且这样的修改应落入本发明的范围内。

Claims (20)

  1. 一种活体检测方法,包括:
    从拍摄图像中检测人脸动作;
    根据所检测的人脸动作控制在显示屏幕上显示虚拟对象;以及
    在所述虚拟对象满足预定条件的情况下,确定所述拍摄图像中的人脸为活体人脸。
  2. 如权利要求1所述的活体检测方法,还包括:
    实时地采集预定拍摄范围的第一图像作为所述拍摄图像;
    其中,所述活体检测方法还包括:在所述虚拟对象不满足预定条件的情况下,实时地采集所述预定拍摄范围的第二图像作为所述拍摄图像。
  3. 如权利要求1所述的活体检测方法,其中,所述预定条件为与所述虚拟对象的形态和/或运动有关的条件,其中所述预定条件是预先确定的或随机产生的。
  4. 如权利要求1所述的活体检测方法,其中,所述虚拟对象包括第一组对象,所述第一组对象已经显示在显示屏幕上并且包括一个或多个对象,
    其中,根据所检测的人脸动作控制在显示屏幕上显示虚拟对象包括:根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示,其中,所述第一组对象中的所述至少一个对象为被控对象,
    其中,所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
  5. 如权利要求1所述的活体检测方法,其中,所述虚拟对象包括第二组对象,所述第二组对象尚未显示在显示屏幕上并且包括一个或多个对象,
    其中,根据所检测的人脸动作控制在显示屏幕上显示虚拟对象还包括:根据所检测的人脸动作显示所述第二组对象中至少一个对象的至少一部分,
    其中,所述第二组对象的所述至少一个对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
  6. 如权利要求1所述的活体检测方法,其中,在预定时间内所述虚拟对象满足预定条件的情况下,确定所述拍摄图像中的人脸为活体人脸。
  7. 如权利要求1所述的活体检测方法,其中,从拍摄图像中检测人脸动作包括:
    在所述拍摄图像中定位人脸关键点,以及/或者从所述拍摄图像中提取图像纹理信息;以及
    基于所定位的人脸关键点和/或所提取的图像纹理信息,获得人脸动作属性的值。
  8. 如权利要求7所述的活体检测方法,其中,根据所检测的人脸动作控制在显示屏幕上显示虚拟对象包括:
    根据所检测的人脸动作的人脸动作属性的值来更新所述虚拟对象的状态参量的值;以及
    按照更新后的所述虚拟对象的状态参量的值,在所述显示屏幕上显示所述虚拟对象。
  9. 如权利要求7或8所述的活体检测方法,其中,所述人脸动作属性包括以下至少一项:眼睛睁闭程度、嘴巴张闭程度、人脸俯仰程度、人脸偏转程度、人脸与摄像头的距离、眼球左右转动程度、眼球上下转动程度。
  10. 一种活体检测设备,包括:
    一个或多个处理器;
    一个或多个存储器;以及
    存储在所述存储器中的计算机程序指令,在所述计算机程序指令被所述处理器运行时执行以下步骤:从拍摄图像中检测人脸动作;根据所检测的人脸动作控制在显示装置上显示虚拟对象;以及在所述虚拟对象满足预定条件的情况下,确定所述拍摄图像中的人脸为活体人脸。
  11. 如权利要求10所述的活体检测设备,还包括:
    图像采集装置,用于实时地采集预定拍摄范围的第一图像作为所述拍摄图像;以及
    所述显示装置,
    其中,所述图像采集装置还在所述虚拟对象不满足预定条件的情况下,实时地采集所述预定拍摄范围的第二图像作为所述拍摄图像。
  12. 如权利要求10所述的活体检测设备,其中,所述预定条件为与所述虚拟对象的形态和/或运动有关的条件,并且所述预定条件是预先确定的或随机产生的。
  13. 如权利要求10所述的活体检测设备,其中,所述虚拟对象包括第一组对象,所述第一组对象已经显示在显示装置上并且包括一个或多个对象,
    其中,根据所检测的人脸动作控制在显示装置上显示虚拟对象包括:根据所检测的人脸动作更新所述第一组对象中至少一个对象在显示屏幕上的显示,其中,所述第一组对象中的所述至少一个对象为被控对象,
    其中,所述第一组对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
  14. 如权利要求13所述的活体检测设备,其中,所述虚拟对象还包括第二组对象,所述第二组对象尚未显示在显示装置上并且包括一个或多个对象,
    其中,根据所检测的人脸动作控制在显示装置上显示虚拟对象还包括:根据所检测的人脸动作显示所述第二组对象中至少一个对象的至少一部分,
    其中,所述第二组对象的所述至少一个对象中至少一部分对象的初始显示位置和/或初始显示形态是预先确定的或随机确定的。
  15. 如权利要求13所述的活体检测设备,其中,在所述计算机程序指令被所述处理器运行时执行以下步骤:初始化定时器;
    其中,在所述虚拟对象满足预定条件的情况下确定所述拍摄图像中的人脸为活体人脸包括:在所述定时器未超出预定定时时间时所述虚拟对象满足预定条件的情况下,确定所述拍摄图像中的人脸为活体人脸。
  16. 如权利要求13所述的活体检测设备,其中,从拍摄图像中检测人脸动作包括:
    在所述拍摄图像中定位人脸关键点,以及/或者从所述拍摄图像中提取图像纹理信息;以及
    基于所定位的人脸关键点和/或所提取的图像纹理信息,获得人脸动作属性的值,其中,所述人脸动作属性包括至少一个动作属性。
  17. 如权利要求16所述的活体检测设备,其中,根据所检测的人脸动作控制在显示装置上显示虚拟对象包括:
    根据所检测的人脸动作的人脸动作属性的值来更新所述虚拟对象的状态参量的值;以及
    按照更新后的所述虚拟对象的状态参量的值,在所述显示装置上显示所述虚拟对象。
  18. 一种计算机程序产品,包括一个或多个计算机可读存储介质,所述计算机可读存储介质上存储了计算机程序指令,所述计算机程序指令在被计算机运行时执行以下步骤:
    从拍摄图像中检测人脸动作;
    根据所检测的人脸动作控制在显示装置上显示虚拟对象;以及
    在所述虚拟对象满足预定条件的情况下,确定所述拍摄图像中的人脸为活体人脸。
  19. 如权利要求18所述的计算机程序产品,其中,所述预定条件为与所述虚拟对象的形态和/或运动有关的条件,并且所述预定条件是预先确定的或随机产生的。
  20. 如权利要求18所述的计算机程序产品,其中,所检测的人脸动作由人脸动作属性的值来表示,其中,所述人脸动作属性包括至少一个动作属性,
    其中,根据所检测的人脸动作控制在显示屏幕上显示虚拟对象包括:
    根据所述人脸动作属性的值来更新所述虚拟对象的状态参量的值;以及
    按照更新后的所述虚拟对象的状态参量的值,在所述显示屏幕上显示所述虚拟对象。
PCT/CN2015/082815 2015-06-30 2015-06-30 活体检测方法及设备、计算机程序产品 WO2017000213A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/738,500 US20180211096A1 (en) 2015-06-30 2015-06-30 Living-body detection method and device and computer program product
PCT/CN2015/082815 WO2017000213A1 (zh) 2015-06-30 2015-06-30 活体检测方法及设备、计算机程序产品
CN201580000356.8A CN105518582B (zh) 2015-06-30 2015-06-30 活体检测方法及设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/082815 WO2017000213A1 (zh) 2015-06-30 2015-06-30 活体检测方法及设备、计算机程序产品

Publications (1)

Publication Number Publication Date
WO2017000213A1 true WO2017000213A1 (zh) 2017-01-05

Family

ID=55725004

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/082815 WO2017000213A1 (zh) 2015-06-30 2015-06-30 活体检测方法及设备、计算机程序产品

Country Status (3)

Country Link
US (1) US20180211096A1 (zh)
CN (1) CN105518582B (zh)
WO (1) WO2017000213A1 (zh)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872272B2 (en) * 2017-04-13 2020-12-22 L'oreal System and method using machine learning for iris tracking, measurement, and simulation
CN107274508A (zh) * 2017-07-26 2017-10-20 南京多伦科技股份有限公司 一种车载计时计程终端以及使用该终端的识别方法
CN107644679B (zh) * 2017-08-09 2022-03-01 深圳市欢太科技有限公司 信息推送方法和装置
CN108875508B (zh) * 2017-11-23 2021-06-29 北京旷视科技有限公司 活体检测算法更新方法、装置、客户端、服务器及系统
CN107911608A (zh) * 2017-11-30 2018-04-13 西安科锐盛创新科技有限公司 防闭眼拍摄的方法
CN108764052B (zh) * 2018-04-28 2020-09-11 Oppo广东移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
CN108805047B (zh) * 2018-05-25 2021-06-25 北京旷视科技有限公司 一种活体检测方法、装置、电子设备和计算机可读介质
CN109271929B (zh) * 2018-09-14 2020-08-04 北京字节跳动网络技术有限公司 检测方法和装置
JPWO2020095350A1 (ja) * 2018-11-05 2021-09-24 日本電気株式会社 情報処理装置、情報処理方法及びプログラム
CN109886080A (zh) * 2018-12-29 2019-06-14 深圳云天励飞技术有限公司 人脸活体检测方法、装置、电子设备及可读存储介质
CN111435546A (zh) * 2019-01-15 2020-07-21 北京字节跳动网络技术有限公司 模型动作方法、装置、带屏音箱、电子设备及存储介质
CN113646806A (zh) * 2019-03-22 2021-11-12 日本电气株式会社 图像处理设备、图像处理方法和存储程序的记录介质
CN110287900B (zh) * 2019-06-27 2023-08-01 深圳市商汤科技有限公司 验证方法和验证装置
CN110321872B (zh) * 2019-07-11 2021-03-16 京东方科技集团股份有限公司 人脸表情识别方法及装置、计算机设备、可读存储介质
CN110716641B (zh) * 2019-08-28 2021-07-23 北京市商汤科技开发有限公司 交互方法、装置、设备以及存储介质
WO2021118048A1 (en) * 2019-12-10 2021-06-17 Samsung Electronics Co., Ltd. Electronic device and controlling method thereof
CN111126347B (zh) * 2020-01-06 2024-02-20 腾讯科技(深圳)有限公司 人眼状态识别方法、装置、终端及可读存储介质
CN113052120B (zh) * 2021-04-08 2021-12-24 深圳市华途数字技术有限公司 一种戴口罩人脸识别的门禁设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135188A1 (en) * 2007-11-26 2009-05-28 Tsinghua University Method and system of live detection based on physiological motion on human face
CN201845368U (zh) * 2010-09-21 2011-05-25 北京海鑫智圣技术有限公司 具有活体检测功能的人脸指纹门禁
CN102201061A (zh) * 2011-06-24 2011-09-28 常州锐驰电子科技有限公司 基于多阶层过滤人脸识别的智能安全监控系统及方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100706871B1 (ko) * 2005-08-22 2007-04-12 주식회사 아이디테크 감시영상에서 얼굴의 진위 여부를 구별하는 방법
KR100851981B1 (ko) * 2007-02-14 2008-08-12 삼성전자주식회사 비디오 영상에서 실 객체 판별 방법 및 장치
JP5087532B2 (ja) * 2008-12-05 2012-12-05 ソニーモバイルコミュニケーションズ株式会社 端末装置、表示制御方法および表示制御プログラム
CN106961621A (zh) * 2011-12-29 2017-07-18 英特尔公司 使用化身的通信
CN104170358B (zh) * 2012-04-09 2016-05-11 英特尔公司 用于化身管理和选择的系统和方法
CN103513753B (zh) * 2012-06-18 2017-06-27 联想(北京)有限公司 信息处理方法和电子设备
JP6283168B2 (ja) * 2013-02-27 2018-02-21 任天堂株式会社 情報保持媒体および情報処理システム
CN104166835A (zh) * 2013-05-17 2014-11-26 诺基亚公司 用于识别活体用户的方法和装置
CN103440479B (zh) * 2013-08-29 2016-12-28 湖北微模式科技发展有限公司 一种活体人脸检测方法与系统
CN104391567B (zh) * 2014-09-30 2017-10-31 深圳市魔眼科技有限公司 一种基于人眼跟踪的三维全息虚拟物体显示控制方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135188A1 (en) * 2007-11-26 2009-05-28 Tsinghua University Method and system of live detection based on physiological motion on human face
CN201845368U (zh) * 2010-09-21 2011-05-25 北京海鑫智圣技术有限公司 具有活体检测功能的人脸指纹门禁
CN102201061A (zh) * 2011-06-24 2011-09-28 常州锐驰电子科技有限公司 基于多阶层过滤人脸识别的智能安全监控系统及方法

Also Published As

Publication number Publication date
CN105518582B (zh) 2018-02-02
US20180211096A1 (en) 2018-07-26
CN105518582A (zh) 2016-04-20

Similar Documents

Publication Publication Date Title
WO2017000213A1 (zh) 活体检测方法及设备、计算机程序产品
WO2017000218A1 (zh) 活体检测方法及设备、计算机程序产品
US10339402B2 (en) Method and apparatus for liveness detection
US10546183B2 (en) Liveness detection
TWI751161B (zh) 終端設備、智慧型手機、基於臉部識別的認證方法和系統
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
CN105612533B (zh) 活体检测方法、活体检测系统以及计算机程序产品
EP3332403B1 (en) Liveness detection
CN105184246B (zh) 活体检测方法和活体检测系统
US20190138791A1 (en) Key point positioning method, terminal, and computer storage medium
CN107209849B (zh) 眼睛跟踪
JP2022071195A (ja) コンピューティング装置及び方法
WO2016172872A1 (zh) 用于验证活体人脸的方法、设备和计算机程序产品
CN108875468B (zh) 活体检测方法、活体检测系统以及存储介质
US10254831B2 (en) System and method for detecting a gaze of a viewer
WO2017000217A1 (zh) 活体检测方法及设备、计算机程序产品
CN112257696B (zh) 视线估计方法及计算设备
WO2018103416A1 (zh) 用于人脸图像的检测方法和装置
TWI498857B (zh) 瞌睡提醒裝置
WO2020172870A1 (zh) 一种目标对象的移动轨迹确定方法和装置
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
JP4659722B2 (ja) 人体特定領域抽出・判定装置、人体特定領域抽出・判定方法、人体特定領域抽出・判定プログラム
TWI466070B (zh) 眼睛搜尋方法及使用該方法的眼睛狀態檢測裝置與眼睛搜尋裝置
US11507646B1 (en) User authentication using video analysis
WO2020133405A1 (zh) 一种地面遥控机器人的控制方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15896744; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 15738500; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15896744; Country of ref document: EP; Kind code of ref document: A1)