WO2017000217A1 - Living body detection method and apparatus, and computer program product - Google Patents

Living body detection method and apparatus, and computer program product

Info

Publication number
WO2017000217A1
Authority
WO
WIPO (PCT)
Prior art keywords
objects
virtual objects
controlled
face
display
Prior art date
Application number
PCT/CN2015/082828
Other languages
English (en)
Chinese (zh)
Inventor
曹志敏
陈可卿
贾开
Original Assignee
北京旷视科技有限公司
北京小孔科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京旷视科技有限公司, 北京小孔科技有限公司
Priority to CN201580000358.7A (patent CN105518715A)
Priority to PCT/CN2015/082828 (publication WO2017000217A1)
Publication of WO2017000217A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present disclosure relates to the field of face recognition technology, and more particularly to a living body detection method and apparatus, and a computer program product.
  • face recognition systems are increasingly used in the security, finance, and social security fields in online scenarios requiring identity authentication, such as online bank account opening, online transaction verification, unattended access control systems, online social security, online medical insurance, and the like.
  • in addition to verifying that the face of the person being verified is sufficiently similar to the face stored in the database, a face recognition system first needs to verify that the person being verified is a legitimate living organism. That is to say, the face recognition system needs to be able to prevent an attacker from attacking with a photo, a video, a 3D face model, or a mask.
  • Embodiments of the present disclosure provide a living body detecting method and apparatus, and a computer program product, capable of controlling virtual object display in stages based on face actions, wherein successful detection of a living body is confirmed in a case where at least a part of the controlled objects among the virtual objects coincides in turn with at least a part of the target objects among the virtual objects.
  • a living body detecting method includes: detecting a face action from a captured image; controlling display of the controlled objects in the currently displayed first group of virtual objects and controlling display of a second group of virtual objects, based on the display state of the first group of virtual objects currently displayed on the display screen and the detected face action; and determining that the face in the captured image is a living face in a case where at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in turn with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects.
  • a living body detecting apparatus includes: a face action detecting device configured to detect a face action from a captured image; a virtual object control device configured to control display of the controlled objects in the currently displayed first group of virtual objects and to control display of a second group of virtual objects, based on the display state of the first group of virtual objects currently displayed on the display device and the detected face action; and a living body determining device configured to determine that the face in the captured image is a living human face in a case where at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in turn with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects.
  • a living body detecting apparatus includes: one or more processors; one or more memories; and computer program instructions stored in the memories which, when run by the processors, perform the following steps: detecting a face action from a captured image; controlling display of the controlled objects in the currently displayed first group of virtual objects and controlling display of a second group of virtual objects, based on the display state of the first group of virtual objects currently displayed on the display device and the detected face action; and determining that the face in the captured image is a living human face in a case where at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in turn with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects.
  • a computer program product includes one or more computer readable storage media having computer program instructions stored thereon which, when run by a computer, perform the following steps: detecting a face action from a captured image; controlling display of the controlled objects in the currently displayed first group of virtual objects and controlling display of a second group of virtual objects, based on the display state of the first group of virtual objects currently displayed on the display screen and the detected face action; and determining that the face in the captured image is a living human face in a case where at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in turn with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects.
  • with the living body detecting method and apparatus and the computer program product of the embodiments of the present disclosure, by controlling the virtual object display based on face actions and performing the living body detection according to that display, attacks using photos, videos, 3D face models, masks, and the like can be effectively prevented without depending on special hardware devices, which can reduce the cost of living body detection. Further, by recognizing a plurality of action attributes in the face action, a plurality of state parameters of the virtual object can be controlled, so that the virtual object changes its display state in a plurality of aspects, for example, by causing the virtual object to perform a complex predetermined action, or by causing the virtual object to achieve a display effect that differs greatly from its initial display effect. Therefore, the accuracy of the living body detection can be further improved, and the security of application scenarios in which the living body detecting method and apparatus and the computer program product according to the embodiments of the present disclosure are applied can be improved.
  • FIG. 1 is a schematic block diagram of an electronic device for implementing a living body detecting method and apparatus of an embodiment of the present disclosure
  • FIG. 2 is a schematic flow chart of a living body detecting method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of a face motion detecting step in a living body detecting method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of a virtual object display control step in a living body detecting method according to an embodiment of the present disclosure
  • FIG. 5 is another schematic flowchart of a living body detecting method according to an embodiment of the present disclosure.
  • FIG. 6, FIG. 7, and FIG. 8 are examples of virtual objects displayed on a display screen according to the first embodiment of the present disclosure
  • FIG. 9 is another schematic flowchart of a living body detecting method according to an embodiment of the present disclosure.
  • FIGS. 10A and 10B are examples of virtual objects displayed on a display screen according to a second embodiment of the present disclosure.
  • FIG. 11 is a schematic block diagram of a living body detecting apparatus according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic block diagram of another living body detecting apparatus according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic block diagram of a face motion detecting device in a living body detecting apparatus according to an embodiment of the present disclosure
  • FIG. 14 is a schematic block diagram of a virtual object control device in a living body detecting device according to an embodiment of the present disclosure.
  • electronic device 100 includes one or more processors 102, one or more storage devices 104, an output device 108, and an image acquisition device 110, which are interconnected through a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 illustrated in FIG. 1 are merely exemplary and not limiting, and the electronic device 100 may have other components and structures as needed.
  • the processor 102 can be a central processing unit (CPU) or other form of processing unit with data processing capabilities and/or instruction execution capabilities, and can control other components in the electronic device 100 to perform desired functions.
  • the storage device 104 can include one or more computer program products, which can include various forms of computer readable storage media, such as volatile memory and/or nonvolatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and/or a cache or the like.
  • the nonvolatile memory may include, for example, a read only memory (ROM), a hard disk, a flash memory, or the like.
  • One or more computer program instructions can be stored on the computer readable storage medium, and the processor 102 can execute the program instructions to implement the functions of the embodiments of the present disclosure described below and/or other desired functions.
  • Various applications and various data may also be stored in the computer readable storage medium, such as image data collected by the image capture device 110, and the like, and various data used and/or generated by the application.
  • the output device 108 may output various information (eg, images or sounds) to the outside (eg, a user), and may include one or more of a display, a speaker, and the like.
  • the image capture device 110 may take an image of a predetermined viewing range (eg, photos, videos, etc.) and store the captured images in the storage device 104 for use by other components.
  • the exemplary electronic device 100 for implementing the living body detecting method and apparatus of the embodiments of the present disclosure may be an electronic device in which a face image collecting device is integrated and which is disposed at a face image collecting end, such as a smart phone, a tablet computer, a personal computer, or the like.
  • for example, in the field of security applications, the electronic device 100 can be deployed at an image acquisition end of an access control system and can be, for example, a face recognition based identification device; in the field of financial applications, it can be deployed at a personal terminal, such as a smart phone, a tablet computer, a personal computer, or the like.
  • alternatively, the output device 108 and the image capture device 110 of the exemplary electronic device 100 for implementing the living body detecting method and apparatus of the embodiments of the present disclosure may be deployed at a face image collecting end, while the processor 102 in the electronic device 100 may be deployed on the server side (or in the cloud).
  • in step S210, a face motion is detected from the captured image.
  • specifically, the image acquisition device 110 or another image capturing device independent of it may capture a grayscale or color image of a predetermined shooting range as the captured image, which may be a photo or a frame in a video.
  • the image capture device may be a camera of a smart phone, a camera of a tablet, a camera of a personal computer, or even a webcam.
  • the face motion detection in step S210 is described below with reference to FIG. 3.
  • in step S310, face key points are located in the captured image.
  • specifically, it may first be determined whether a face is included in the captured image, and the face key points are located in a case where a face is detected.
  • the face key points are some key points on the face, such as the eyes, the corners of the eyes, the centers of the eyes, the eyebrows, the highest points of the cheekbones, the nose, the tip of the nose, the mouth, the corners of the mouth, and the contour points of the face.
  • the series of face key points may include, but is not limited to, at least a portion of the above-described face key points.
  • for example, a face key point model may be established in advance using machine learning algorithms such as deep learning or a local feature-based regression algorithm, and face detection and face key point positioning may be performed in the captured image in step S310 based on the already established face key point model.
  • the position of the face key point can be iteratively optimized in the captured image, and finally the coordinate position of each face key point is obtained.
  • for example, a method based on cascaded regression can be used to locate the face key points in the captured image.
  • the positioning of face key points plays an important role in face motion recognition, however, it should be understood that the present disclosure is not limited by the specific face key point positioning method.
  • the face key point positioning in step S310 can be performed using an existing face detection and face key point localization algorithm.
  • the living body detecting method of the embodiments of the present disclosure is not limited to using existing face detection and face key point positioning algorithms for face key point positioning, and should also cover face key point positioning using face detection and face key point positioning algorithms developed in the future.
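  • as an illustration only, an existing face detection and key point localization library such as dlib, with its publicly available 68-point cascaded-regression landmark model, could be used for step S310; the model file path in the sketch below is an assumption.

```python
# Illustration only: face detection and key point localization (step S310) using
# dlib's cascaded-regression shape predictor. The model file path is an assumption.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def locate_face_keypoints(image):
    """Return a list of (x, y) key points for the first detected face, or None."""
    faces = detector(image, 1)           # face detection in the captured image
    if len(faces) == 0:
        return None
    shape = predictor(image, faces[0])   # key point localization
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```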
  • in step S320, image texture information is extracted from the captured image.
  • in some embodiments, fine information of the face, such as eyeball position information, mouth shape information, and micro-expression information, may be extracted according to pixel information in the captured image, such as brightness information of pixels.
  • the image texture information extraction in step S320 can be performed using an existing image texture information extraction algorithm. It should be understood that the living body detecting method of the embodiments of the present disclosure is not limited to performing image texture information extraction using existing algorithms, and should also cover image texture information extraction using algorithms developed in the future.
  • only one of steps S310 and S320 may be performed, or both may be performed. In a case where both steps S310 and S320 are performed, they may be performed simultaneously or sequentially.
  • the value of the face action attribute is then obtained based on the located face key points and/or the extracted image texture information.
  • the face action attributes obtained based on the located face key points may include, for example but not limited to, a degree of eye closure, a degree of mouth opening, a degree of face pitch, a degree of face deflection, a distance of the face from the camera, and the like.
  • the face action attributes obtained based on the image texture information may include, but are not limited to, a degree of left-right eyeball deflection, a degree of up-down eyeball deflection, and the like.
  • optionally, the value of the face action attribute may be obtained based on the current captured image and the captured image immediately preceding it; or the value of the face action attribute may be obtained based on the first captured image and the current captured image; or the value of the face action attribute may be obtained based on the current captured image and the first several captured images preceding it.
  • for example, the value of the face action attribute may be obtained based on the located face key points by means of geometric learning, machine learning, or image processing.
  • for example, for the degree of eye closure, multiple key points, such as 8 to 20 key points, may be defined around the eyes, for example, the inner corner, the outer corner, the upper eyelid center point, and the lower eyelid center point of the left eye, and the inner corner, the outer corner, the upper eyelid center point, and the lower eyelid center point of the right eye.
  • the ratio of the distance between the upper and lower eyelid center points to the distance between the inner and outer eye corners is taken as a first distance ratio X, and the degree of eye closure Y is determined based on the first distance ratio X.
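  • a minimal sketch of this eye-closure computation, assuming 2D key point coordinates and an illustrative fully-open distance ratio, is shown below.

```python
# Minimal sketch (assumed key-point layout): estimate the degree of eye closure
# from the eyelid-to-eye-corner distance ratio described above.
import numpy as np

def eye_closure_degree(inner_corner, outer_corner, upper_lid, lower_lid,
                       open_ratio=0.30):
    """Return a value in [0, 1]: 0 = fully open, 1 = fully closed.
    `open_ratio` is an assumed distance ratio X for a fully open eye."""
    corner_dist = np.linalg.norm(np.asarray(inner_corner) - np.asarray(outer_corner))
    lid_dist = np.linalg.norm(np.asarray(upper_lid) - np.asarray(lower_lid))
    x = lid_dist / corner_dist                                # first distance ratio X
    return float(np.clip(1.0 - x / open_ratio, 0.0, 1.0))    # degree of eye closure Y

# Example: a nearly closed left eye
print(eye_closure_degree((100, 120), (140, 120), (120, 118), (120, 122)))
```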
  • in step S220, based on the display state of the first group of virtual objects currently displayed on the display screen and the detected face motion, the display of the controlled objects in the currently displayed first group of virtual objects is controlled, and the display of the second group of virtual objects is controlled.
  • the face action attribute may include at least one action attribute, and the state parameter of the virtual object may include at least one state parameter.
  • An action attribute may correspond to only one state parameter, or an action attribute may correspond to a plurality of state parameters in chronological order.
  • the mapping relationship between the face action attribute and the state parameter of the virtual object may be preset, or may be randomly determined when starting the living body detection method according to an embodiment of the present disclosure.
  • the living body detecting method according to an embodiment of the present disclosure may further include prompting a user with a mapping relationship between the face action attribute and a state parameter of the virtual object.
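  • a minimal sketch of such a mapping is shown below; the attribute and parameter names are illustrative assumptions, and the mapping may be preset or shuffled randomly when detection starts before being prompted to the user.

```python
# Illustrative sketch: the mapping between face action attributes and virtual-object
# state parameters may be preset or randomly determined when detection starts.
# Attribute and parameter names are assumptions for illustration only.
import random

ACTION_ATTRIBUTES = ["mouth_open_degree", "eye_closure_degree",
                     "face_pitch_degree", "face_yaw_degree"]
STATE_PARAMETERS = ["size", "vertical_position", "horizontal_position", "color"]

def make_mapping(randomize=True, seed=None):
    """Return a dict: face action attribute -> controlled state parameter."""
    params = list(STATE_PARAMETERS)
    if randomize:
        random.Random(seed).shuffle(params)
    return dict(zip(ACTION_ATTRIBUTES, params))

mapping = make_mapping(seed=7)
# The mapping can then be prompted to the user,
# e.g. "open your mouth to change the size of the object".
print(mapping)
```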
  • specifically, the display state of the virtual objects displayed on the display screen may be changed according to the detected face motion, that is, the display of at least one virtual object in the first group of virtual objects on the display screen is updated according to the detected face motion.
  • the initial display position and/or initial display form of at least a portion of the first set of virtual objects is predetermined or randomly determined. Specifically, for example, the motion state, display position, size, shape, color, and the like of the virtual object can be changed.
  • in addition, the display of a new virtual object, that is, the second group of virtual objects, on the display screen may be controlled according to the display condition of at least a part of the virtual objects in the first group of virtual objects.
  • An initial display position and/or an initial display form of at least a portion of the objects of the second set of objects is predetermined or randomly determined.
  • in accordance with an embodiment of the present disclosure, the virtual objects can include a first group of objects.
  • the first group of objects is displayed on the display screen when the living body detecting method begins to be executed, and the display of at least one object in the first group of objects may be updated according to a first group of face action attributes.
  • the virtual objects may further include a second group of objects, wherein the second group of objects is not displayed on the display screen when the living body detecting method according to the embodiment of the present disclosure starts to be executed; whether to display at least one object in the second group of objects may be controlled by a second group of face action attributes different from the first group of face action attributes, or whether to display at least one object in the second group of objects may be controlled according to the display condition of the first group of objects.
  • the state parameters of at least one object in the first group of objects may be a display position, a size, a shape, a color, a motion state, and the like, which are changed according to the values of the first group of face action attributes.
  • the state parameters of each of the at least one object in the second group of objects may include at least a visible state, and may further include a display position, a size, a shape, a color, a motion state, and the like. Whether to display at least one object in the second group of objects, that is, whether at least one object in the second group of objects is in the visible state, is controlled according to the values of the second group of face action attributes or the display condition of at least one object in the first group of objects; the motion state, display position, size, shape, color, and the like of at least one object in the second group of objects may further be changed according to the values of the second group of face action attributes and/or the values of the first group of face action attributes.
  • the face action attribute includes at least a first action attribute.
  • in step S410, the value of the state parameter of the controlled object in the first group of virtual objects is updated according to the value of the first action attribute.
  • a face action attribute can be mapped to a certain state parameter of the virtual object.
  • the user's eye degree of closure or degree of mouth opening may be mapped to the size of the virtual object, and the size of the virtual object may be updated according to the value of the user's degree of eye closure or degree of mouth opening.
  • the user's face pitch degree may be mapped to a vertical display position of the virtual object on the display screen, and the vertical display position of the virtual object on the display screen may be updated according to the value of the user's face pitch degree.
  • for example, the ratio K1 of the degree of mouth opening in the current captured image to the degree of mouth opening in the previously captured first captured image may be calculated, and the ratio K1 is mapped to the size S of the virtual object.
  • for another example, the degree K2 to which the face position in the current captured image deviates from the initial center position can be calculated, and the degree K2 is mapped to the display position W of the virtual object.
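  • a minimal sketch of these two example mappings (mouth-opening ratio K1 to object size S, face deviation K2 to object position W) is shown below; the scaling limit, gain, and function names are illustrative assumptions.

```python
# Sketch of the two mappings above; the scaling limit and gain are assumptions.
def size_from_mouth(base_size, mouth_open_now, mouth_open_first, max_scale=3.0):
    """Map the mouth-opening ratio K1 to the virtual object's size S."""
    k1 = mouth_open_now / max(mouth_open_first, 1e-6)
    return base_size * min(k1, max_scale)

def position_from_face(initial_pos, face_xy, initial_face_xy, screen_wh, gain=1.0):
    """Map the face's deviation K2 from its initial center position to the
    virtual object's display position W, clamped to the screen."""
    dx = (face_xy[0] - initial_face_xy[0]) * gain
    dy = (face_xy[1] - initial_face_xy[1]) * gain
    x = min(max(initial_pos[0] + dx, 0.0), float(screen_wh[0]))
    y = min(max(initial_pos[1] + dy, 0.0), float(screen_wh[1]))
    return (x, y)

print(size_from_mouth(40.0, 0.6, 0.2))                        # K1 = 3.0 -> size 120.0
print(position_from_face((540, 960), (380, 250), (320, 240), (1080, 1920)))
```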
  • in step S420, the controlled object is displayed on the display screen according to the updated value of the state parameter of the controlled object.
  • in step S430, the values of the state parameters of the second group of virtual objects are updated according to the display state of the first group of virtual objects currently displayed on the display screen.
  • the face action attribute may further include a second action attribute.
  • alternatively, the values of the state parameters of the second group of virtual objects may be updated according to the value of the second action attribute.
  • in step S440, the second group of virtual objects is displayed on the display screen according to the updated values of the state parameters of the second group of virtual objects.
  • Step S430 may be performed simultaneously with step S410 or sequentially, and step S440 may be performed simultaneously with step S420 or sequentially.
  • in step S230, it is determined whether at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in turn with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects.
  • the coincidence of the controlled object with the target object may include: coincidence in position; coincidence in position with the same size; coincidence in position with the same shape; coincidence in position with the same color; and so on.
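  • a minimal sketch of such a coincidence test is shown below; the object fields, position tolerance, and optional matching flags are illustrative assumptions.

```python
# Sketch of the coincidence test: position coincidence, optionally also requiring
# identical size, shape, or color, as listed above. The object fields are assumptions.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    x: float
    y: float
    size: float
    shape: str
    color: str

def coincides(controlled, target, tol=10.0,
              match_size=False, match_shape=False, match_color=False):
    """Return True if the controlled object coincides with the target object."""
    if abs(controlled.x - target.x) > tol or abs(controlled.y - target.y) > tol:
        return False
    if match_size and controlled.size != target.size:
        return False
    if match_shape and controlled.shape != target.shape:
        return False
    if match_color and controlled.color != target.color:
        return False
    return True
```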
  • if so, it is determined in step S240 that the face in the captured image is a living face.
  • by using various face motion parameters as state control parameters of the virtual objects and controlling the display of the virtual objects on the display screen according to the face motion, the living body detection can be performed based on whether the displayed controlled objects coincide with the target objects.
  • the virtual object includes a first group of objects and a second group of objects
  • the virtual objects currently displayed on the display screen are the first group of objects, and the virtual objects that are not currently displayed on the display screen and whose display is controlled according to the display of at least one object in the first group of objects are the second group of objects.
  • the first set of objects includes at least two objects and the second set of objects includes at least one object.
  • an initial display position and/or an initial display form of at least a part of the first group of objects and the second group of objects is predetermined or randomly determined.
  • the first state parameter of each object in the first group of objects is a display position of the object
  • the first and second state parameters of each object in the second group of objects are respectively the display position and the visible state of the object.
  • the first group of objects includes a first subset of objects and a second subset of objects
  • the second set of objects includes a third subset of objects
  • the first subset of objects and the third subset of objects are controlled objects
  • the second subset of objects is the target object.
  • the number of controlled objects may be preset, and when a predetermined number of controlled objects are sequentially coincident with the target object, it is determined that the living human face is detected.
  • the first group of objects includes a first subset of objects and a second subset of objects
  • the second set of objects includes a third subset of objects
  • the first subset of objects is a controlled object
  • the second subgroup object and the third subgroup object are target objects.
  • the number of target objects may be set in advance, and when the controlled objects are sequentially coincident with a predetermined number of target objects, it is determined that the living human face is detected.
  • the first group of objects includes a first subset of objects and a second subset of objects
  • the second set of objects includes a third subset of objects and a fourth subset of objects
  • the first subset of objects and the third subset of objects are controlled objects
  • the second subgroup object and the fourth subgroup object are target objects.
  • the number of the first subset of objects and the second subset of objects, and the number of the third subset of objects and the fourth subset of objects may be preset.
  • Object pairs can be defined, each object pair including a controlled object and a target object.
  • the number of object pairs may be predefined, and when the controlled object in the predetermined number of object pairs coincides with the target object, it is determined that the living face is detected.
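  • a minimal sketch of how the sequential coincidence of a predetermined number of object pairs might be tracked frame by frame is shown below; the class name, fields, and predicate are illustrative assumptions.

```python
# Illustrative sketch: track whether a predetermined number of object pairs
# (controlled object, target object) have coincided in turn over successive frames.
class PairProgress:
    def __init__(self, object_pairs, required_pairs=None):
        self.pairs = list(object_pairs)              # ordered (controlled, target) pairs
        self.required = required_pairs or len(self.pairs)
        self.next_index = 0                          # pair that must coincide next

    def update(self, coincides):
        """Call once per captured frame with a coincidence predicate;
        returns True once enough pairs have coincided in order."""
        if self.next_index < len(self.pairs):
            controlled, target = self.pairs[self.next_index]
            if coincides(controlled, target):
                self.next_index += 1                 # this pair is done, move to the next
        return self.next_index >= self.required
```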
  • FIG. 5 illustrates an exemplary flow chart of a living body detection method 500 in accordance with an embodiment of the present disclosure.
  • a timer is initialized.
  • the timer may be initialized according to user input, or the timer may be automatically initialized when a face is detected in the captured image, or may be automatically initialized when a predetermined action of the face is detected in the captured image. Further, at least a portion of each of the first set of objects is displayed on the display screen after the timer is initialized.
  • in step S520, an image (first image) of a predetermined shooting range is acquired in real time as a captured image.
  • specifically, the image acquisition device 110 or another image capturing device independent of it may capture a grayscale or color image of a predetermined shooting range as the captured image, which may be a photo or a frame in a video.
  • Step S530 corresponds to step S210 in FIG. 2, and details are not described herein again.
  • in step S540, the display of the controlled objects in the currently displayed first group of virtual objects is controlled based on the detected face motion, and the second group of virtual objects is displayed based on the display state of the first group of virtual objects.
  • in step S550, it is determined whether at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in turn with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects within a predetermined timing time.
  • the predetermined timing time may be predetermined.
  • the step S550 may include determining whether the timer exceeds a predetermined timing time and whether the controlled object sequentially coincides with the target object.
  • a timeout flag may be generated when the timer exceeds the predetermined timing time, and whether the timer exceeds the predetermined timing time may be determined according to the timeout flag in step S550.
  • in a case where it is determined in step S550 that the timer exceeds the predetermined timing time and the controlled objects have not sequentially coincided with the target objects, it is determined in step S570 that the living human face is not detected.
  • in a case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the controlled objects have sequentially coincided with the target objects, it is determined in step S560 that the living human face is detected.
  • otherwise, an image (second image) of the predetermined shooting range is acquired as a captured image in real time, and steps S530-S550 are performed again.
  • the image acquired first is referred to as a first image
  • the image acquired thereafter is referred to as a second image. It should be understood that the first image and the second image are images within the same viewing range, only the time of acquisition is different.
  • Steps S520-S550 shown in Fig. 5 are repeatedly performed until it is determined in step S560 that the living face is detected, or until it is determined in step S570 that the living face is not detected.
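  • as an illustrative aid, the repeated loop of steps S520-S550 can be sketched as follows; the helper callables (capture_image, detect_action, update_objects, coincided) are assumptions standing in for the corresponding steps, not part of the disclosure.

```python
import time

def run_detection(capture_image, detect_action, update_objects, coincided,
                  timeout=20.0):
    """Sketch of the loop in FIG. 5: initialize a timer, then repeatedly capture,
    detect, and update until the controlled objects have coincided in turn with
    the target objects, or the predetermined timing time is exceeded."""
    start = time.monotonic()                 # initialize the timer
    while True:
        frame = capture_image()              # S520: acquire a captured image
        action = detect_action(frame)        # S530: detect the face motion
        update_objects(action)               # S540: control the virtual-object display
        if coincided():                      # S550: check sequential coincidence
            return True                      # S560: living face detected
        if time.monotonic() - start > timeout:
            return False                     # S570: living face not detected
```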
  • Figure 6 shows an example of a first set of objects and a second set of objects.
  • the number of controlled objects is set to 1 in advance
  • the number of target objects is set to 3 in advance.
  • the first group of objects includes a first object A and a second object B1, the first object A is a controlled object, and the second object B1 is a background object.
  • the background object is the target object.
  • a third object B2 and a fourth object B3 are also shown in FIG. 6; they are sequentially displayed as the second group of objects and are both background objects, and the background objects are the target objects. Specifically, when the first object A coincides with the second object B1, the third object B2 is displayed as the second group of objects; when the first object A coincides with the third object B2, the fourth object B3 is displayed as the second group of objects.
  • the face action attribute includes a first action attribute
  • the state parameter of the first object A includes a first state parameter of the first object A
  • the state parameter of the second object B1 includes a first state parameter of the second object B1
  • the state parameters of the third object B2 include a first state parameter and a second state parameter of the third object B2
  • the state parameters of the fourth object B3 include a first state parameter and a second state parameter of the fourth object B3.
  • the value of the second state parameter of the third object B2 in the second group of objects is set to a value representing visible, so as to display the third object B2 in the second group of objects.
  • at this time, the value of the first state parameter of the first object A may be updated according to the value of the first action attribute, and the first object A is displayed on the display screen according to the updated value of the first state parameter of the first object A.
  • alternatively, the face action attribute may further include a second action attribute different from the first action attribute, the first state parameter of the first object A may continue to be updated according to the value of the second action attribute, and the first object A is displayed on the display screen according to the updated value of the first state parameter of the first object A.
  • the value of the second state parameter of the fourth object B3 in the second group of objects is set to a value representing visible, so as to display the fourth object B3 in the second group of objects.
  • at this time, the value of the first state parameter of the first object A may be updated according to the value of the first or second action attribute, and the first object A is displayed on the display screen according to the updated value of the first state parameter of the first object A.
  • alternatively, the face action attribute may further include a third action attribute different from the first and second action attributes, the value of the first state parameter of the first object A may continue to be updated according to the value of the third action attribute, and the first object A is displayed on the display screen according to the updated value.
  • when the first object A sequentially coincides with the second object B1, the third object B2, and the fourth object B3 within the predetermined timing time, it is determined that the living body detection is successful.
  • specifically, it is determined in step S550 whether the timer exceeds the predetermined timing time, and whether the first object A sequentially coincides with the second object B1, the third object B2, and the fourth object B3.
  • in a case where it is determined in step S550 that the timer exceeds the predetermined timing time and the first object A has not coincided with the second object B1, or has coincided with neither the third object B2 nor the fourth object B3, or has not coincided with the fourth object B3, it is determined in step S570 that the living human face is not detected.
  • in a case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A sequentially coincides with the second object B1, the third object B2, and the fourth object B3, it is determined in step S560 that the living human face is detected.
  • in a case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A has not yet sequentially coincided with the second object B1, the third object B2, and the fourth object B3, the process returns to step S520.
  • before returning to step S520, the following steps may also be performed: determining whether the fourth object has been displayed; in a case where it is determined that the fourth object has not yet been displayed, determining whether the third object has been displayed; in a case where it is determined that the third object has not been displayed, determining whether the first object coincides with the second object, displaying the third object in a case where the first object coincides with the second object, and then returning to step S520; in a case where it is determined that the fourth object has not been displayed but the third object has been displayed, determining whether the first object coincides with the third object, displaying the fourth object in a case where it is determined that the first object coincides with the third object, and then returning to step S520.
  • alternatively, the total number of target objects may be set, and the living body detection is determined to be successful in a case where the first object A sequentially coincides with each target object, or in a case where the first object A sequentially coincides with a predetermined number of target objects, or in a case where the first object A sequentially coincides with at least a part of a predetermined number of target objects.
  • Figure 7 shows another example of a first set of objects and a second set of objects.
  • the number of controlled objects is set to 3 in advance
  • the number of target objects is set to 1 in advance.
  • the first group of objects includes a first object A1 and a second object B
  • the first object A1 is a controlled object
  • the second object B is a background object
  • the background object is a target object.
  • a third object A2 and a fourth object A3 are also shown in FIG. 7; they are sequentially displayed as the second group of objects and are both controlled objects. Specifically, when the first object A1 coincides with the second object B, the third object A2 is displayed as the second group of objects; when the third object A2 coincides with the second object B, the fourth object A3 is displayed as the second group of objects.
  • the face action attribute includes a first action attribute
  • the state parameter of the first object A1 includes a first state parameter of the first object A1
  • the state parameter of the second object B includes a first state parameter of the second object B, the state parameters of the third object A2 include a first state parameter and a second state parameter of the third object A2, and the state parameters of the fourth object A3 include a first state parameter and a second state parameter of the fourth object A3.
  • the value of the second state parameter of the third object A2 in the second group of objects is set to a value representing visible, so as to display the third object A2 in the second group of objects.
  • at this time, the value of the first state parameter of the third object A2 may be updated according to the value of the first action attribute, and the third object A2 is displayed on the display screen according to the updated value of the first state parameter of the third object A2, while the display position of the first object A1 remains unchanged.
  • alternatively, the face action attribute may further include a second action attribute different from the first action attribute, the first state parameter of the third object A2 may continue to be updated according to the value of the second action attribute, and the third object A2 is displayed on the display screen according to the updated value of the first state parameter of the third object A2.
  • the value of the second state parameter of the fourth object A3 in the second group of objects is set to a value representing visible, so as to display the fourth object A3 in the second group of objects.
  • at this time, the value of the first state parameter of the fourth object A3 may be updated according to the value of the first or second action attribute, and the fourth object A3 is displayed on the display screen according to the updated value of the first state parameter of the fourth object A3, while the display positions of the first object A1 and the third object A2 remain unchanged.
  • alternatively, the face action attribute may further include a third action attribute different from the first and second action attributes, the value of the first state parameter of the fourth object A3 may continue to be updated according to the value of the third action attribute, and the fourth object A3 is displayed on the display screen according to the updated value of the first state parameter of the fourth object A3.
  • when the first object A1, the third object A2, and the fourth object A3 sequentially coincide with the second object B within the predetermined timing time, it is determined that the living body detection is successful.
  • specifically, it is determined in step S550 whether the timer exceeds the predetermined timing time, and whether the first object A1, the third object A2, and the fourth object A3 sequentially coincide with the second object B.
  • in a case where it is determined in step S550 that the timer exceeds the predetermined timing time and the first object A1 does not coincide with the second object B, or the third object A2 does not coincide with the second object B, or the fourth object A3 does not coincide with the second object B, it is determined in step S570 that the living human face is not detected.
  • in a case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A1, the third object A2, and the fourth object A3 sequentially coincide with the second object B, it is determined in step S560 that the living human face is detected.
  • in a case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A1 does not coincide with the second object B, or the third object A2 does not coincide with the second object B, or the fourth object A3 does not coincide with the second object B, the process returns to step S520.
  • before returning to step S520, the following steps may also be performed: determining whether the fourth object has been displayed; in a case where it is determined that the fourth object has not yet been displayed, determining whether the third object has been displayed; in a case where it is determined that the third object has not been displayed, determining whether the first object coincides with the second object, displaying the third object in a case where the first object coincides with the second object, and then returning to step S520; in a case where it is determined that the fourth object has not been displayed but the third object has been displayed, determining whether the third object coincides with the second object, displaying the fourth object in a case where it is determined that the third object coincides with the second object, and then returning to step S520.
  • alternatively, the total number of controlled objects may be set, and the living body detection is determined to be successful in a case where each controlled object sequentially coincides with the target object, or in a case where a predetermined number of controlled objects sequentially coincide with the target object, or in a case where at least a part of a predetermined number of controlled objects sequentially coincide with the target object.
  • FIG. 8 shows an example of a first group of objects and a second group of objects.
  • the number of controlled objects is set to 3 in advance
  • the number of target objects is set to 3 in advance.
  • the first group of objects includes a first object A1 and a second object B1, the first object A1 is a controlled object, and the second object B1 is a background object.
  • the background object is the target object.
  • a third object A2 and a fourth object B2, and a fifth object A3 and a sixth object B3, are also shown in FIG. 8; the third object A2 and the fifth object A3 are both controlled objects, and the fourth object B2 and the sixth object B3 are both background objects.
  • specifically, when the first object A1 coincides with the second object B1, the third object A2 and the fourth object B2 are displayed as the second group of objects; when the third object A2 coincides with the fourth object B2, the fifth object A3 and the sixth object B3 are displayed as the second group of objects.
  • for example, the face action attribute includes a first action attribute. First, the value of the first state parameter of the first object A1 is updated according to the value of the first action attribute, and the first object A1 is displayed on the display screen according to the updated value of the first state parameter of the first object A1.
  • when the first object A1 coincides with the second object B1, the third object A2 and the fourth object B2 of the second group of objects are displayed.
  • at this time, the value of the first state parameter of the third object A2 may be updated according to the value of the first action attribute, and the third object A2 is displayed on the display screen according to the updated value of the first state parameter of the third object A2.
  • alternatively, the face action attribute may further include a second action attribute different from the first action attribute, the first state parameter of the third object A2 may continue to be updated according to the value of the second action attribute, and the third object A2 is displayed on the display screen according to the updated value of the first state parameter of the third object A2.
  • when the third object A2 coincides with the fourth object B2, the fifth object A3 of the second group of objects is displayed.
  • at this time, the value of the first state parameter of the fifth object A3 may be updated according to the value of the first or second action attribute, and the fifth object A3 is displayed on the display screen according to the updated value of the first state parameter of the fifth object A3.
  • alternatively, the face action attribute may further include a third action attribute different from the first and second action attributes, the value of the first state parameter of the fifth object A3 may continue to be updated according to the value of the third action attribute, and the fifth object A3 is displayed on the display screen according to the updated value of the first state parameter of the fifth object A3.
  • when the first object A1, the third object A2, and the fifth object A3 sequentially coincide with the second object B1, the fourth object B2, and the sixth object B3 within the predetermined timing time, it is determined that the living body detection is successful.
  • specifically, it is determined in step S550 whether the timer exceeds the predetermined timing time, and whether the first object A1, the third object A2, and the fifth object A3 sequentially coincide with the second object B1, the fourth object B2, and the sixth object B3.
  • in a case where it is determined in step S550 that the timer exceeds the predetermined timing time and the fifth object A3 does not coincide with the sixth object B3, or the third object A2 does not coincide with the fourth object B2, or the first object A1 does not coincide with the second object B1, it is determined in step S570 that the living face is not detected.
  • in a case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the first object A1, the third object A2, and the fifth object A3 sequentially coincide with the second object B1, the fourth object B2, and the sixth object B3, it is determined in step S560 that the living human face is detected.
  • in a case where it is determined in step S550 that the timer does not exceed the predetermined timing time and the fifth object A3 does not coincide with the sixth object B3, or the third object A2 does not coincide with the fourth object B2, or the first object A1 does not coincide with the second object B1, the process returns to step S520.
  • before returning to step S520, the following steps may also be performed: determining whether the fifth and sixth objects have been displayed; in a case where it is determined that the fifth and sixth objects have not yet been displayed, determining whether the third and fourth objects have been displayed; in a case where it is determined that the third and fourth objects have not been displayed, determining whether the first object coincides with the second object, displaying the third and fourth objects in a case where the first object coincides with the second object, and then returning to step S520; in a case where it is determined that the fifth and sixth objects have not been displayed but the third and fourth objects have been displayed, determining whether the third object coincides with the fourth object, displaying the fifth and sixth objects in a case where the third object coincides with the fourth object, and then returning to step S520.
  • alternatively, the number of object pairs included in the second group of objects may be set, wherein, for example, the object A2 and the object B2 may be regarded as one object pair, and each object Ai should sequentially coincide with its corresponding object Bi.
  • when each object Ai sequentially coincides with its corresponding object Bi within the predetermined timing time, it is determined that the living body detection is successful.
  • for example, in a case where both the horizontal position and the vertical position of the first object A and the second object B are different, the first action attribute may include a first sub-action attribute and a second sub-action attribute, and the first state parameter of the first object A may include a first sub-state parameter and a second sub-state parameter, where the value of the first sub-state parameter is the horizontal position coordinate of the first object A and the value of the second sub-state parameter is the vertical position coordinate of the first object A; the horizontal position coordinate of the first object A on the display screen may be updated according to the value of the first sub-action attribute, and the vertical position coordinate of the first object A on the display screen may be updated according to the value of the second sub-action attribute.
  • for example, the first action attribute may be defined as the position of the face in the captured image, and the display position of the first object A on the display screen may be updated according to the position coordinates of the face in the captured image.
  • specifically, the first sub-action attribute may be defined as the horizontal position of the face in the captured image and the second sub-action attribute may be defined as the vertical position of the face in the captured image; the horizontal position coordinate of the first object A on the display screen may be updated according to the horizontal position coordinate of the face in the captured image, and the vertical position coordinate of the first object A on the display screen may be updated according to the vertical position coordinate of the face in the captured image.
  • alternatively, the first sub-action attribute may be defined as the degree of face deflection and the second sub-action attribute may be defined as the degree of face pitch; the horizontal position coordinate of the first object A on the display screen may then be updated according to the value of the degree of face deflection, and the vertical position coordinate of the first object A on the display screen may be updated according to the value of the degree of face pitch.
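  • a minimal sketch of mapping the degree of face deflection (yaw) and the degree of face pitch to the horizontal and vertical position coordinates of the first object A is shown below; the angle ranges are illustrative assumptions.

```python
# Sketch under assumed ranges: map face yaw (first sub-action attribute) to the
# horizontal coordinate and face pitch (second sub-action attribute) to the
# vertical coordinate of object A on the display screen.
def face_pose_to_position(yaw_deg, pitch_deg, screen_w, screen_h,
                          yaw_range=30.0, pitch_range=20.0):
    """yaw/pitch in degrees; returns (x, y) pixel coordinates on the screen."""
    nx = max(-1.0, min(1.0, yaw_deg / yaw_range))      # normalize to [-1, 1]
    ny = max(-1.0, min(1.0, pitch_deg / pitch_range))
    x = (nx + 1.0) / 2.0 * screen_w
    y = (ny + 1.0) / 2.0 * screen_h
    return (x, y)

print(face_pose_to_position(15.0, -10.0, 1080, 1920))  # e.g. (810.0, 480.0)
```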
  • the virtual object includes a first group of objects and a second group of objects
  • the virtual objects currently displayed on the display screen are the first group of objects, and the virtual objects that are not currently displayed on the display screen and are displayed according to the face action are the second group of objects.
  • the first set of objects includes at least two objects and the second set of objects includes at least one object.
  • an initial display position and/or an initial display form of at least a part of the first group of objects and the second group of objects are predetermined or randomly determined.
  • the first state parameter of each object in the first group of objects is a display position of the object
  • the first and second state parameters of each object in the second group of objects are respectively the display position and the visible state of the object.
  • the first group of objects includes a first subset of objects and a second subset of objects
  • the second set of objects includes a third subset of objects
  • the first subset of objects and the third subset of objects are controlled objects
  • the second subset of objects is the target object.
  • the number of controlled objects may be preset, and when a predetermined number of controlled objects are sequentially coincident with the target object, it is determined that the living human face is detected.
  • the first group of objects includes a first subset of objects and a second subset of objects
  • the second set of objects includes a third subset of objects
  • the first subset of objects is a controlled object
  • the second subgroup object and the third subgroup object are target objects.
  • the number of target objects may be set in advance, and when the controlled objects are sequentially coincident with a predetermined number of target objects, it is determined that the living human face is detected.
  • the first group of objects includes a first subset of objects and a second subset of objects
  • the second set of objects includes a third subset of objects and a fourth subset of objects
  • the first subset of objects and the third subset of objects are controlled objects
  • the second subgroup object and the fourth subgroup object are target objects.
  • the number of the first subset of objects and the second subset of objects, and the number of the third subset of objects and the fourth subset of objects may be preset.
  • Object pairs can be defined, each object pair including a controlled object and a target object.
  • the number of object pairs may be predefined, and when the controlled object in the predetermined number of object pairs coincides with the target object, it is determined that the living face is detected.
  • FIG. 9 illustrates an exemplary flow chart of a living body detection method 900 in accordance with an embodiment of the present disclosure.
  • a timer is initialized.
  • the timer may be initialized according to user input, or the timer may be automatically initialized when a face is detected in the captured image, or may be automatically initialized when a predetermined action of the face is detected in the captured image. Further, at least a portion of each of the first set of objects is displayed on the display screen after the timer is initialized.
  • in step S920, an image (first image) of a predetermined shooting range is acquired in real time as a captured image.
  • specifically, the image acquisition device 110 or another image capturing device independent of it may capture a grayscale or color image of a predetermined shooting range as the captured image, which may be a photo or a frame in a video.
  • Step S930 corresponds to step S530 in FIG. 5, and details are not described herein again.
  • in step S940, the display of the controlled objects in the currently displayed first group of virtual objects is controlled based on the value of the first action attribute in the detected face action, and the second group of virtual objects is displayed based on the value of the second action attribute in the detected face action.
  • in step S950, it is determined whether at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects coincides in turn with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects within a predetermined timing time.
  • the predetermined timing time may be predetermined.
  • the step S950 may include determining whether the timer exceeds a predetermined timing time and whether at least a portion of the controlled object sequentially coincides with at least a portion of the target object.
  • a timeout flag may be generated when the timer exceeds the predetermined timing time, and it may be determined according to the timeout flag whether the timer exceeds the predetermined timing time in step S950.
  • In a case where it is determined in step S950 that the timer exceeds the predetermined timing time and at least a part of the controlled objects has not yet sequentially coincided with at least a part of the target objects, it is determined in step S970 that no living human face is detected. In a case where it is determined in step S950 that the timer does not exceed the predetermined timing time and at least a part of the controlled objects sequentially coincides with at least a part of the target objects, it is determined in step S960 that a living human face is detected. In a case where it is determined in step S950 that the timer has not exceeded the predetermined timing time and at least a part of the controlled objects has not yet coincided with at least a part of the target objects, the process returns to step S920.
  • the image (second image) of the predetermined shooting range is acquired as a captured image in real time, and steps S930-S950 are next performed.
  • the image acquired first is referred to as a first image
  • the image acquired thereafter is referred to as a second image. It should be understood that the first image and the second image are images within the same viewing range, only the time of acquisition is different.
  • Steps S920-S950 shown in Fig. 9 are repeatedly executed until it is determined in step S960 that the living human face is detected, or until it is determined in step S970 that the living human face is not detected.
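  • The timed loop of steps S910-S970 can be summarized with the rough sketch below; the helper callables (capture_frame, detect_face_action, update_display, all_coincident) and the 10-second timeout are assumptions for illustration, not details of method 900 itself.

```python
import time
from typing import Callable

def run_liveness_check(
    capture_frame: Callable[[], object],           # S920: grabs the next camera frame
    detect_face_action: Callable[[object], dict],  # S930: face action attributes from a frame
    update_display: Callable[[dict], None],        # S940: move controlled objects / show 2nd group
    all_coincident: Callable[[], bool],            # S950: have controlled objects reached targets?
    timeout_s: float = 10.0,                       # assumed predetermined timing time
) -> bool:
    start = time.monotonic()                       # S910: initialize the timer
    while True:
        frame = capture_frame()
        action = detect_face_action(frame)
        update_display(action)
        timed_out = (time.monotonic() - start) > timeout_s
        if all_coincident() and not timed_out:
            return True    # S960: living human face detected
        if timed_out:
            return False   # S970: no living human face detected
        # otherwise loop back to S920 and acquire the next image
```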
  • FIG. 10A shows an example of a first set of objects.
  • the number of controlled objects is set to 2 in advance
  • the number of target objects is set to 1 in advance.
  • in an initial state, the first group of objects includes a first object A1 and a second object B; the first object A1 is a controlled object, and the second object B is a background object.
  • the background object is the target object.
  • a second set of objects is not shown in FIG. 10A, the second set of objects includes a third object A2, and the third object A2 is a controlled object.
  • the display positions of the first object A1, the third object A2, and/or the target object B are randomly determined.
  • the display position coordinates of the first object A1 are updated according to the value of the first action attribute, and the visual state value of the third object A2 is updated according to the value of the second action attribute; for example, a visual state value of 0 indicates invisible, that is, the third object A2 is not displayed, and a visual state value of 1 indicates visible, that is, the third object A2 is displayed.
  • when the display position of the third object A2 coincides with the display position of the second object B, it is determined that a living human face is detected.
  • when the display positions of the first object A1 and the third object A2 both coincide with the display position of the target object B, it is determined that a living human face is detected.
  • the first object A1 is initially displayed and the third object A2 is not displayed; the display position of the first object A1 is changed according to the first action attribute, and the visual state of the third object A2 is changed according to the second action attribute.
  • the display position of the third object A2 is set to the display position of the first object A1 at the time the second action attribute value changes, and in the case where the display position of the third object A2 coincides with the display position of the target object B, it is determined that the living body detection is successful.
  • it is determined that the living body detection is successful only in the following scenario: the display position of the first object A1 is changed according to the first action attribute so that the first object A1 is moved to the target object B; a change of the second action attribute is then detected while the first object A1 is located at the target object B; and the third object A2 is accordingly displayed at the target object B.
  • for example, the first object A1 is a sight, the second object B is a bull's-eye, and the third object A2 is a bullet.
  • In step S950, it is judged whether the timer exceeds the predetermined timing time and whether the third object A2 coincides with the second object B.
  • If it is determined in step S950 that the timer exceeds the predetermined timing time and the third object A2 has not been displayed, or the third object A2 has been displayed but does not coincide with the second object B, it is determined in step S970 that no living human face is detected.
  • If it is determined in step S950 that the timer does not exceed the predetermined timing time and the third object A2 coincides with the second object B, it is determined in step S960 that a living human face is detected.
  • If it is determined in step S950 that the timer has not exceeded the predetermined timing time and the third object A2 has not yet been displayed, or does not yet coincide with the second object B, the process returns to step S920.
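  • A toy version of the sight/bull's-eye/bullet scenario of FIG. 10A is sketched below; the movement gain, the mouth-opening trigger, and the coincidence tolerance are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    x: float
    y: float
    visible: bool = True

def close_enough(a: Obj, b: Obj, tol: float = 10.0) -> bool:
    """Position coincidence within an assumed pixel tolerance."""
    return abs(a.x - b.x) <= tol and abs(a.y - b.y) <= tol

def step(sight: Obj, bullet: Obj, target: Obj,
         yaw: float, pitch: float, mouth_open: float) -> bool:
    """One update step; returns True when liveness would be declared."""
    # First action attribute (face deflection/pitch) moves the sight A1.
    sight.x += 5.0 * yaw
    sight.y += 5.0 * pitch
    # Second action attribute (here assumed to be mouth opening) fires the bullet A2,
    # but only when the sight is already over the bull's-eye B.
    if mouth_open > 0.5 and not bullet.visible and close_enough(sight, target):
        bullet.visible = True
        bullet.x, bullet.y = sight.x, sight.y
    # Liveness is declared when the displayed bullet coincides with the bull's-eye.
    return bullet.visible and close_enough(bullet, target)
```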
  • FIG. 10B shows another example of the first group of objects and the second group of objects.
  • the number of controlled objects is set to 2 in advance
  • the number of target objects is set to 2 in advance.
  • the first group of objects includes a first object A1 and a second object B1, the first object A1 is a controlled object, and the second object B1 is a background object.
  • the background object is the target object.
  • a third object A2 and a fourth object B2 are also shown in FIG. 10B, the third object A2 being a controlled object, and the fourth object B2 being a background object. Specifically, when the first object A1 and the second object B1 are coincident, the third object A2 and the fourth object B2 are displayed as the second group of objects.
  • the value of the state parameter of at least one of the first object A1, the second object B1, the third object A2, and the fourth object B2 may be randomly determined. For example, the display positions of the first object A1, the second object B1, the third object A2, and the fourth object B2 are randomly determined.
  • the face action attribute includes a first action attribute and a second action attribute. The display position coordinates of the first object A1 are updated according to the value of the first action attribute, and the visual state values of the third and fourth objects are updated according to the value of the second action attribute; for example, a visual state value of 0 indicates invisible, that is, the third and fourth objects are not displayed, and a visual state value of 1 indicates visible, that is, the third and fourth objects are displayed.
  • the display position coordinates of the third object may also be updated according to the value of the first action attribute.
  • the face action attribute further includes a third action attribute different from the first action attribute, and the display position coordinate of the third object is updated according to the value of the third action attribute.
  • the first object A1 and the second object B1 are initially displayed but the third object A2 and the fourth object B2 are not displayed; the display position of the first object A1 is changed according to the first action attribute, and the visual states of the third object A2 and the fourth object B2 are changed according to the second action attribute.
  • the initial display position of the third object A2 may be determined according to the display position of the first object A1 when the second action attribute value is changed, or the initial display position of the third object A2 may be randomly determined.
  • the living body detection is determined to be successful only in the following scenario: the display position of the first object A1 is changed according to the first action attribute so that the first object A1 is moved to the second object B1; a change of the second action attribute is then detected while the first object A1 is located at the second object B1; the third object A2 is accordingly displayed either at a random position or at a position determined according to the display position of the second object B1, and the fourth object B2 is displayed at a random position; and the display position of the third object A2 is then changed according to the first action attribute, or according to a third action attribute different from the first action attribute, until the third object A2 is moved to the fourth object B2.
  • the first action attribute may include a first sub-action attribute and a second sub-action attribute, and the first state parameter of the first object A1 may include a first sub-state parameter and a second sub-state parameter; the values of the first and second sub-state parameters of the first object A1 are the horizontal position coordinate and the vertical position coordinate of the first object A1, respectively, and the horizontal and vertical position coordinates of the first object A1 on the display screen may be updated separately according to the value of the first sub-action attribute and the value of the second sub-action attribute.
  • the third action attribute may also include a third sub-action attribute and a fourth sub-action attribute, and the first state parameter of the third object A2 may include a first sub-state parameter and a second sub-state parameter; the values of the first and second sub-state parameters of the third object A2 are the horizontal position coordinate and the vertical position coordinate of the third object A2, respectively, and the horizontal and vertical position coordinates of the third object A2 on the display screen may be updated according to the value of the third sub-action attribute and the value of the fourth sub-action attribute, respectively.
  • the first sub-action attribute and the second sub-action attribute may be defined as the degree of face deflection and the degree of face pitch, respectively, or the third sub-action attribute and the fourth sub-action attribute may be respectively defined as the degree of left-right eyeball rotation and the degree of up-down eyeball rotation; an illustrative mapping sketch is given below.
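  • One possible (assumed) linear mapping from the face deflection and face pitch sub-action attributes to horizontal and vertical display coordinates is sketched below; the angle ranges and screen size are placeholders, not values taken from the present disclosure.

```python
def pose_to_screen(yaw_deg: float, pitch_deg: float,
                   screen_w: int = 1080, screen_h: int = 1920,
                   max_yaw: float = 30.0, max_pitch: float = 20.0):
    """Map face deflection (yaw) and face pitch angles to a screen position.

    A linear mapping is assumed: turning the face fully left/right or up/down
    moves the controlled object to the corresponding screen edge."""
    # Clamp to the assumed working range so the object stays on screen.
    yaw = max(-max_yaw, min(max_yaw, yaw_deg))
    pitch = max(-max_pitch, min(max_pitch, pitch_deg))
    x = (yaw + max_yaw) / (2 * max_yaw) * screen_w
    y = (pitch + max_pitch) / (2 * max_pitch) * screen_h
    return x, y
```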
  • the living body detecting device may be an electronic device integrated with a face image capturing device, such as a smart phone, a tablet computer, a personal computer, a face recognition based identification device, or the like.
  • the living body detecting apparatus may further include a separate face image collecting device and a detecting processing device, the detecting processing device may receive the captured image from the face image collecting device, and perform living body according to the received captured image Detection.
  • the detection processing device may be a server, a smart phone, a tablet computer, a personal computer, a face recognition based identification device, or the like.
  • Since the details of the various operations performed by the living body detecting apparatus are substantially the same as those of the living body detecting method described above with respect to FIGS. 2-4, only the living body detecting apparatus will be briefly described below in order to avoid repetition, and a description of the same details will be omitted.
  • the living body detecting apparatus 1100 includes a face motion detecting device 1110, a virtual object control device 1120, and a living body determining device 1130.
  • the face motion detecting device 1110, the virtual object control device 1120, and the living body determining device 1130 can be realized by the processor 102 shown in FIG. 1.
  • the living body detecting apparatus 1200 includes an image capturing device 1240, a face motion detecting device 1110, a virtual object control device 1120, a living body determining device 1130, a display device 1250, and a storage device 1260.
  • the image capturing device 1240 can be implemented by the image capturing device 110 shown in FIG. 1
  • the face motion detecting device 1110 , the virtual object control device 1120 , and the living body determining device 1130 can be implemented by the processor 102 shown in FIG. 1
  • the display device 1250 may be implemented by the output device 108 shown in FIG. 1, and the storage device 1260 may be implemented by the storage device 104 shown in FIG. 1.
  • the image capturing device 1240 in the living body detecting device 1200, or another image capturing device that is independent of the living body detecting device 1100 or 1200 and can transmit images to it, can be used to acquire a grayscale or color image of the predetermined shooting range as the captured image, and the captured image may be a photo or a frame of a video.
  • the image capture device may be a camera of a smart phone, a camera of a tablet, a camera of a personal computer, or even a webcam.
  • the face motion detecting device 1110 is configured to detect a face motion from the captured image.
  • the face motion detecting device 1110 may include a key point positioning device 1310, a texture information extracting device 1320, and an action attribute determining device 1330.
  • the keypoint locating device 1310 is configured to locate face key points in the captured image. As an example, the key point locating device 1310 may first determine whether a face is included in the captured image, and locate the face key points in the case where a face is detected. The details of the operation of the key point locating device 1310 are the same as those described in step S310, and details are not described herein again.
  • the texture information extracting means 1320 is configured to extract image texture information from the captured image.
  • the texture information extracting device 1320 may extract fine information of a face, such as eyeball position information, mouth shape information, micro-expression information, and the like, according to pixel information in the captured image, such as brightness information of a pixel.
  • the action attribute determining means 1330 obtains a value of the face action attribute based on the located face key point and/or the image texture information.
  • the facial motion attribute obtained based on the located face key points may include, for example, but is not limited to, degree of eye closure, degree of mouth opening, degree of face pitch, degree of face deflection, distance of face from camera, and the like.
  • the facial motion attribute obtained based on the image texture information may include, but is not limited to, a degree of left and right eye deflection, an eyeball vertical deflection degree, and the like.
  • the details of the operation of the action attribute determining means 1330 are the same as those described in the step S330, and details are not described herein again.
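  • As one common way (not necessarily the formula used by the action attribute determining means 1330) of turning located key points into action attribute values, an aspect-ratio measure of the eye or mouth contour can be used, as in the following sketch; the landmark ordering and thresholds are assumptions.

```python
import numpy as np

def aspect_ratio(pts: np.ndarray) -> float:
    """Aspect ratio of a 6-point eye or mouth contour, with points ordered
    [left corner, top-1, top-2, right corner, bottom-2, bottom-1]:
    mean vertical opening divided by the horizontal extent."""
    vertical = (np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])) / 2.0
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return float(vertical / (horizontal + 1e-6))

def eye_closure_degree(eye_pts: np.ndarray, open_ratio: float = 0.3) -> float:
    """0.0 = fully open, 1.0 = fully closed, relative to an assumed fully-open ratio."""
    return float(np.clip(1.0 - aspect_ratio(eye_pts) / open_ratio, 0.0, 1.0))

def mouth_opening_degree(mouth_pts: np.ndarray, max_ratio: float = 0.8) -> float:
    """0.0 = closed, 1.0 = wide open, relative to an assumed maximum ratio."""
    return float(np.clip(aspect_ratio(mouth_pts) / max_ratio, 0.0, 1.0))
```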
  • the virtual object control device 1120 is configured to control, based on the display state of the first group of virtual objects currently displayed on the display device 1250 and the detected face motion, the display of the controlled object in the currently displayed first group of virtual objects, and to control the display device 1250 to display a second group of virtual objects.
  • the state of the virtual objects displayed on the display screen may be changed according to the detected face motion; that is, the display of at least one of the first group of virtual objects on the display screen is updated according to the detected face motion.
  • the initial display position and/or initial display form of at least a portion of the first set of virtual objects is predetermined or randomly determined. Specifically, for example, the motion state, display position, size, shape, color, and the like of the virtual object can be changed.
  • a new virtual object, that is, a second group of virtual objects, may be displayed on the display screen according to the detected face motion.
  • An initial display position and/or an initial display form of at least a portion of the at least one object of the second set of objects is predetermined or randomly determined.
  • the state parameters of the second group of virtual objects may include at least a visual state. The display of at least one of the first group of objects is controlled according to the values of a first group of face action attributes, and whether to display at least one of the second group of objects may be controlled based on the values of a second group of face action attributes or on the display condition of at least one of the first group of objects.
  • the virtual object control device 1120 may include a face action mapping device 1410 and a virtual object presenting device 1420.
  • the face action attribute includes a first action attribute.
  • the face motion mapping device 1410 updates the value of the state parameter of the controlled object in the first group of virtual objects according to the value of the first action attribute, and may also update the value of the state parameter of the second group of virtual objects according to the display state of the first group of virtual objects currently displayed on the display screen.
  • the face action attribute may include first and second action attributes.
  • the face motion mapping device 1410 may update the value of the state parameter of the controlled object in the first group of virtual objects according to the value of the first action attribute, and may also update the value of the state parameter of the second group of virtual objects according to the value of the second action attribute.
  • a face action attribute can be mapped to a certain state parameter of the virtual object.
  • the user's degree of eye closure or degree of mouth opening may be mapped to the size of the virtual object, and the size of the virtual object may be updated according to the value of the user's degree of eye closure or degree of mouth opening.
  • the user's face pitch degree may be mapped to a vertical display position of the virtual object on the display screen, and the vertical display position of the virtual object on the display screen may be updated according to the value of the user's face pitch degree.
  • the mapping relationship between the face action attribute and the state parameter of the virtual object may be preset.
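  • Purely as an illustration, such a preset mapping could take the form of a lookup table from action attribute names to update functions, as in the sketch below; the attribute names, value ranges, and screen resolution are hypothetical.

```python
from types import SimpleNamespace

SCREEN_W, SCREEN_H = 1080, 1920  # assumed display resolution

# Preset mapping from a face action attribute (value assumed normalized to [0, 1])
# to the virtual-object state parameter it drives; the names are illustrative only.
FACE_ACTION_TO_STATE = {
    "mouth_opening_degree": lambda obj, v: setattr(obj, "size", 20 + 80 * v),
    "face_pitch_degree":    lambda obj, v: setattr(obj, "y", v * SCREEN_H),
    "face_yaw_degree":      lambda obj, v: setattr(obj, "x", v * SCREEN_W),
}

def apply_face_action(obj, attribute_values: dict) -> None:
    """Update a controlled object's state parameters from detected face action values."""
    for name, value in attribute_values.items():
        update = FACE_ACTION_TO_STATE.get(name)
        if update is not None:
            update(obj, value)

# Example: under these assumptions, a yaw value of 0.5 places the object
# at the horizontal center of the screen.
ball = SimpleNamespace(x=0.0, y=0.0, size=20.0)
apply_face_action(ball, {"face_yaw_degree": 0.5})
```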
  • the virtual object presenting device 1420 displays the controlled object on the display screen according to the updated value of the state parameter of the controlled object.
  • the virtual object presentation device 1420 further displays the second group of virtual objects on the display screen according to the updated values of the state parameters of the second group of virtual objects.
  • the living body judging device 1130 is configured to determine that the face in the captured image is a living human face in a case where at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects sequentially coincide with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects.
  • the coincidence of the controlled object with the target object may include: coincidence of positions; coincidence of positions with identical sizes; coincidence of positions with identical shapes; and coincidence of positions with identical colors. A minimal coincidence-check sketch is given below.
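  • The following sketch shows one way such a coincidence check might be written, covering the variants listed above; the field names (x, y, size, shape, color) and the position tolerance are assumptions, not details of the present disclosure.

```python
def coincides(controlled, target,
              pos_tol: float = 5.0,
              check_size: bool = False,
              check_shape: bool = False,
              check_color: bool = False) -> bool:
    """Check whether a controlled object coincides with a target object.

    Position coincidence is always required; size, shape, and color checks can be
    enabled to match the stricter variants described above."""
    if abs(controlled.x - target.x) > pos_tol or abs(controlled.y - target.y) > pos_tol:
        return False
    if check_size and controlled.size != target.size:
        return False
    if check_shape and controlled.shape != target.shape:
        return False
    if check_color and controlled.color != target.color:
        return False
    return True
```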
  • the face action mapping device 1410 and the virtual object presentation device 1420 can perform various operations in the first to second embodiments described above, and details are not described herein again.
  • the living body detecting apparatuses 1100 and 1200 may further include a timer for timing a predetermined timing time.
  • the timer can also be implemented by the processor 102.
  • the timer may be initialized according to user input, or the timer may be automatically initialized when a face is detected in the captured image, or may be automatically initialized when a predetermined action of the face is detected in the captured image.
  • the living body determining apparatus 1130 is configured to determine whether at least a part of the controlled objects in the first group of virtual objects and the second group of virtual objects sequentially coincide, within the predetermined timing time, with at least a part of the target objects in the first group of virtual objects and the second group of virtual objects, and to determine that the face in the captured image is a living human face in a case where, within the predetermined timing time, at least a part of the controlled objects coincides with at least a part of the target objects.
  • the storage device 1260 is configured to store the captured image. In addition, the storage device 1260 is further configured to store a state parameter and a state parameter value of the virtual object. In addition, the storage device 1260 is further configured to store the virtual object presented by the virtual object presentation device 1420 and store a background image or the like to be displayed on the display device 1250.
  • the storage device 1260 can store computer program instructions that, when executed by the processor 102, can implement a living body detection method in accordance with an embodiment of the present disclosure, and/or can implement all or part of the functions of the key point locating device, the texture information extracting device, and the action attribute determining device in accordance with an embodiment of the present disclosure.
  • a computer program product comprising a computer readable storage medium on which computer program instructions are stored.
  • the computer program instructions, when run by a computer, may implement a living body detecting method according to an embodiment of the present disclosure, and/or may implement all or part of the functions of the key point locating device, the texture information extracting device, and the action attribute determining device in the living body detecting apparatus according to an embodiment of the present disclosure.
  • According to the living body detecting method and apparatus and the computer program product of the embodiments of the present disclosure, by controlling the virtual object display based on the face motion and performing the living body detection according to the virtual object display, attacks using photos, videos, 3D face models, masks, and the like can be effectively prevented without depending on special hardware devices, so that the cost of living body detection can be reduced. Further, by identifying a plurality of action attributes in the face action, a plurality of state parameters of the virtual object can be controlled, and the virtual object can be caused to change its display state in multiple aspects, for example, to perform a complex predetermined action or to achieve a display effect that differs greatly from the initial display effect. Therefore, the accuracy of the living body detection can be further improved, and the security of the application scenarios in which the living body detecting method and apparatus and the computer program product according to the embodiments of the present disclosure are applied can be improved.
  • the computer readable storage medium can be any combination of one or more computer readable storage media.
  • the computer readable storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet, a hard disk of a personal computer, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory. (EPROM), Portable Compact Disk Read Only Memory (CD-ROM), USB memory, or any combination of the above storage media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a living body detection method and device, and to a computer program product, in the technical field of face recognition. The living body detection method comprises: detecting a face motion from a captured image; on the basis of the display state of a first group of virtual objects currently displayed on a display screen and of the detected face motion, controlling the display of a controlled object in said currently displayed first group of virtual objects and controlling the display of a second group of virtual objects; and, if at least a part of the controlled objects in said first group of virtual objects and said second group of virtual objects sequentially coincides with at least a part of the target objects in said first group of virtual objects and said second group of virtual objects, determining that the face in said captured image is a living human face. By controlling the virtual object display on the basis of face motions and performing living body detection according to the virtual object display, attacks by means such as a photograph, a video, a three-dimensional (3D) face model, or a face mask can be effectively prevented.
PCT/CN2015/082828 2015-06-30 2015-06-30 Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur WO2017000217A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201580000358.7A CN105518715A (zh) 2015-06-30 2015-06-30 活体检测方法及设备、计算机程序产品
PCT/CN2015/082828 WO2017000217A1 (fr) 2015-06-30 2015-06-30 Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/082828 WO2017000217A1 (fr) 2015-06-30 2015-06-30 Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur

Publications (1)

Publication Number Publication Date
WO2017000217A1 true WO2017000217A1 (fr) 2017-01-05

Family

ID=55725029

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/082828 WO2017000217A1 (fr) 2015-06-30 2015-06-30 Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur

Country Status (2)

Country Link
CN (1) CN105518715A (fr)
WO (1) WO2017000217A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171211A (zh) * 2018-01-19 2018-06-15 百度在线网络技术(北京)有限公司 活体检测方法和装置
CN111353842A (zh) * 2018-12-24 2020-06-30 阿里巴巴集团控股有限公司 推送信息的处理方法和系统
CN116452703A (zh) * 2023-06-15 2023-07-18 深圳兔展智能科技有限公司 用户头像生成方法、装置、计算机设备及存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808115A (zh) * 2017-09-27 2018-03-16 联想(北京)有限公司 一种活体检测方法、装置及存储介质
CN111240482B (zh) * 2020-01-10 2023-06-30 北京字节跳动网络技术有限公司 一种特效展示方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216887A (zh) * 2008-01-04 2008-07-09 浙江大学 照片人脸与活体人脸的计算机自动鉴别方法
KR20100109723A (ko) * 2009-04-01 2010-10-11 삼성전자주식회사 촬상장치 및 그 제어방법
CN103400122A (zh) * 2013-08-20 2013-11-20 江苏慧视软件科技有限公司 一种活体人脸的快速识别方法
CN103593598A (zh) * 2013-11-25 2014-02-19 上海骏聿数码科技有限公司 基于活体检测和人脸识别的用户在线认证方法及系统
CN104166835A (zh) * 2013-05-17 2014-11-26 诺基亚公司 用于识别活体用户的方法和装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100514353C (zh) * 2007-11-26 2009-07-15 清华大学 一种基于人脸生理性运动的活体检测方法及系统
CN201845368U (zh) * 2010-09-21 2011-05-25 北京海鑫智圣技术有限公司 具有活体检测功能的人脸指纹门禁
US9025830B2 (en) * 2012-01-20 2015-05-05 Cyberlink Corp. Liveness detection system based on face behavior
CN103778360A (zh) * 2012-10-26 2014-05-07 华为技术有限公司 一种基于动作分析的人脸解锁的方法和装置
US8856541B1 (en) * 2013-01-10 2014-10-07 Google Inc. Liveness detection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216887A (zh) * 2008-01-04 2008-07-09 浙江大学 照片人脸与活体人脸的计算机自动鉴别方法
KR20100109723A (ko) * 2009-04-01 2010-10-11 삼성전자주식회사 촬상장치 및 그 제어방법
CN104166835A (zh) * 2013-05-17 2014-11-26 诺基亚公司 用于识别活体用户的方法和装置
CN103400122A (zh) * 2013-08-20 2013-11-20 江苏慧视软件科技有限公司 一种活体人脸的快速识别方法
CN103593598A (zh) * 2013-11-25 2014-02-19 上海骏聿数码科技有限公司 基于活体检测和人脸识别的用户在线认证方法及系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171211A (zh) * 2018-01-19 2018-06-15 百度在线网络技术(北京)有限公司 活体检测方法和装置
CN111353842A (zh) * 2018-12-24 2020-06-30 阿里巴巴集团控股有限公司 推送信息的处理方法和系统
CN116452703A (zh) * 2023-06-15 2023-07-18 深圳兔展智能科技有限公司 用户头像生成方法、装置、计算机设备及存储介质
CN116452703B (zh) * 2023-06-15 2023-10-27 深圳兔展智能科技有限公司 用户头像生成方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN105518715A (zh) 2016-04-20

Similar Documents

Publication Publication Date Title
WO2017000213A1 (fr) Procédé et dispositif de détection de corps vivant et produit-programme informatique
TWI751161B (zh) 終端設備、智慧型手機、基於臉部識別的認證方法和系統
US10546183B2 (en) Liveness detection
US10339402B2 (en) Method and apparatus for liveness detection
WO2017000218A1 (fr) Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur
US10990803B2 (en) Key point positioning method, terminal, and computer storage medium
EP3332403B1 (fr) Détection de caractère vivant
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
CN105184246B (zh) 活体检测方法和活体检测系统
CN105612533B (zh) 活体检测方法、活体检测系统以及计算机程序产品
JP6809226B2 (ja) 生体検知装置、生体検知方法、および、生体検知プログラム
JP2018160237A (ja) 顔認証方法及び装置
CN108875468B (zh) 活体检测方法、活体检测系统以及存储介质
WO2017000217A1 (fr) Procédé et dispositif de détection de corps vivant et produit programme d'ordinateur
US10846514B2 (en) Processing images from an electronic mirror
WO2016172923A1 (fr) Procédé de détection de vidéo, système de détection de vidéo, et produit programme d'ordinateur
WO2018103416A1 (fr) Procédé et dispositif de détection d'image faciale
WO2019090901A1 (fr) Procédé et appareil de sélection d'affichage d'images, terminal intelligent et support de stockage
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
TWI466070B (zh) 眼睛搜尋方法及使用該方法的眼睛狀態檢測裝置與眼睛搜尋裝置
JP2008123360A (ja) 人体特定領域抽出・判定装置、人体特定領域抽出・判定方法、人体特定領域抽出・判定プログラム
US11507646B1 (en) User authentication using video analysis
WO2020133405A1 (fr) Procédé et dispositif de commande d'un robot à télécommande au sol
CN113239887A (zh) 活体检测方法及装置、计算机可读存储介质和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15896748

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 08.05.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15896748

Country of ref document: EP

Kind code of ref document: A1