CN109977764A - Living-body recognition method, apparatus, terminal and storage medium based on planar detection - Google Patents

Living-body recognition method, apparatus, terminal and storage medium based on planar detection

Info

Publication number
CN109977764A
CN109977764A (Application CN201910111148.5A)
Authority
CN
China
Prior art keywords
pose
feature point
detection object
human face
planar image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910111148.5A
Other languages
Chinese (zh)
Inventor
王路生
陆进
陈斌
宋晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910111148.5A priority Critical patent/CN109977764A/en
Publication of CN109977764A publication Critical patent/CN109977764A/en
Priority to PCT/CN2019/118553 priority patent/WO2020164284A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention belongs to the technical field of living-body recognition, and more particularly relates to a living-body recognition method, apparatus, terminal and storage medium based on planar detection. The living-body recognition method includes: obtaining a planar image of a detection object; extracting human face feature points from the planar image; determining an overall head pose corresponding to the planar image based on the extracted face feature points; dividing the extracted face feature points into a plurality of local feature groups, and determining a local head pose corresponding to each local feature group; calculating the pose difference between the local head pose and the overall head pose; and judging whether the detection object is a living body based on the pose difference. The invention performs living-body recognition from a planar image alone, which helps improve recognition efficiency and reduce equipment cost.

Description

Living-body recognition method, apparatus, terminal and storage medium based on planar detection
Technical field
The invention belongs to the technical field of living-body recognition, and more particularly relates to a living-body recognition method, apparatus, terminal and storage medium based on planar detection.
Background art
Currently, living-body recognition is mainly used in identity-authentication scenarios to confirm the genuine physiological characteristics of a detection object. For example, in face-recognition applications, combined actions such as blinking, opening the mouth, shaking the head and nodding are verified to determine whether the current detection object is a real living body. This makes it possible to resist common living-body attack means such as photographs, face swapping, masks, occlusion and screen replay, to screen out fraud, and to protect the interests of users.
Existing face liveness detection methods mainly suffer from the following problems. First, the computation is time-consuming: for example, three-dimensional depth information is required, or optical-flow methods are used to compute the non-rigid motion of the face, so the calculation process is complex. Second, additional biometric recognition equipment is required for cooperative recognition, which makes the equipment costly; for example, an extra infrared human-body detection device is needed to measure the temperature of the detection object, or a sound-acquisition device must be combined for voice recognition. It can be seen that living-body recognition in the prior art either involves a complex calculation process or requires additional recognition equipment at a relatively high cost.
Summary of the invention
In view of this, the present invention provides a living-body recognition method, apparatus, terminal and storage medium based on planar detection, so as to solve the problem that living-body recognition in the prior art involves a complex calculation process or requires additional recognition equipment at a relatively high cost.
A first aspect of the embodiments of the present invention provides a living-body recognition method based on planar detection, which may include:
obtaining a planar image of a detection object;
extracting human face feature points from the planar image;
determining an overall head pose corresponding to the planar image based on the extracted face feature points;
dividing the extracted face feature points into a plurality of local feature groups, and determining a local head pose corresponding to each local feature group;
calculating the pose difference between the local head pose and the overall head pose;
judging whether the detection object is a living body based on the pose difference.
A second aspect of the embodiments of the present invention provides a living-body recognition apparatus based on planar detection, which may include:
a planar image acquiring unit, configured to obtain a face planar image of a detection object;
a feature point extraction unit, configured to extract human face feature points from the face planar image;
an overall pose determination unit, configured to determine an overall head pose corresponding to the planar image based on the extracted face feature points;
a local pose determination unit, configured to divide the extracted face feature points into a plurality of local feature groups and determine a local head pose corresponding to each local feature group;
a pose difference calculation unit, configured to calculate the pose difference between the local head pose and the overall head pose;
a living-body judging unit, configured to judge whether the detection object is a living body based on the pose difference.
A third aspect of the embodiments of the present invention provides a recognition terminal, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the steps of the living-body recognition method based on planar detection according to the first aspect of the present invention or any possible implementation of the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the living-body recognition method based on planar detection according to the first aspect of the present invention or any possible implementation of the first aspect.
Compared with the prior art, the present invention has the following beneficial effects:
The present invention obtains a planar image of a detection object, uses the human face feature points on the planar image to determine the overall head pose and the local head poses of the detection object in the planar image, and judges whether the detection object is a living body according to the pose difference between the overall head pose and the local head poses. The invention therefore solves the problem that existing living-body recognition either involves a complex calculation process or requires costly additional recognition equipment: on the one hand, the recognition process of the invention need not involve three-dimensional depth information, which simplifies the calculation and helps improve recognition efficiency; on the other hand, the recognition process needs no additional biometric recognition equipment, which reduces the equipment cost of living-body recognition.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a flowchart of an embodiment of the living-body recognition method based on planar detection in an embodiment of the present invention;
Fig. 2 is a flowchart of an embodiment of step S103 in the embodiment shown in Fig. 1;
Fig. 3 is a flowchart of another embodiment of the living-body recognition method based on planar detection in an embodiment of the present invention;
Fig. 4 is a structural diagram of an embodiment of the living-body recognition apparatus based on planar detection in an embodiment of the present invention;
Fig. 5 is a schematic block diagram of a recognition terminal in an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, features and advantages of the present invention more obvious and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the embodiments described below are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flowchart of an embodiment of the living-body recognition method based on planar detection in an embodiment of the present invention, the method may include the following steps.
In step S101, a planar image of a detection object is obtained.
In the embodiment of the present invention, a planar image of the detection object is obtained first. A planar image here is a two-dimensional image, and may specifically be captured by a terminal equipped with an image sensor; for example, the planar image of the detection object may be acquired by a mobile phone equipped with a camera.
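By way of illustration, the following is a minimal Python sketch of step S101, assuming an OpenCV-accessible camera at device index 0; the embodiment itself only requires a terminal equipped with an image sensor, so the library and device index are assumptions.

```python
import cv2

def capture_planar_image(camera_index=0):
    """Grab a single BGR frame from the camera as the two-dimensional planar image."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None
```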
In step S102, the human face feature points on the planar image are extracted.
In the embodiment of the present invention, a face feature point is a pixel, or a set of pixels, on the planar image that embodies a facial feature, and each face feature point reflects one feature of the face. Specifically, on the planar image, each face feature point may be a single pixel or a set of adjacent pixels, for example a pixel block formed by several adjacent pixels.
In the embodiment of the present invention, the extracted face feature points may be a plurality of predefined face feature points; for example, they may include the pixels or pixel sets on the planar image corresponding to facial positions such as the nose tip, the chin, the left corner of the left eye, the right corner of the right eye, and the left and right corners of the mouth.
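As an illustrative sketch of step S102, the face feature points could be extracted with dlib's publicly available 68-point landmark predictor; the choice of library, the model file name and the 68-point layout are assumptions rather than requirements of this embodiment.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Assumes dlib's public 68-point model file has been downloaded locally.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_face_landmarks(planar_image_bgr):
    """Return (x, y) pixel coordinates of the face feature points of the first detected face."""
    gray = cv2.cvtColor(planar_image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)          # upsample once to help with small faces
    if not faces:
        return []
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```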
In step S103, the overall head pose corresponding to the planar image is determined based on the extracted face feature points.
In the embodiment of the present invention, for a given planar image, the head pose of the person in the image may be described by three angles: pitch (rotation about the horizontal axis, e.g. lowering or raising the head), yaw (rotation about the vertical axis, e.g. turning the head to the left or right), and roll (in-plane rotation, e.g. tilting the head towards the left or right shoulder). The overall head pose, i.e. the orientation of the face in three-dimensional space, can be expressed by these angles.
In the embodiment of the present invention, different orientations of the face in three-dimensional space are reflected on the planar image mainly as different position distributions of the face feature points. Therefore, the orientation of the face in three-dimensional space can be determined from the position distribution of the extracted face feature points; in other words, the overall head pose corresponding to the planar image can be obtained from the position distribution of the face feature points on the planar image.
In one implementation, a neural network for overall head pose estimation may be built and trained by deep learning on planar images whose head poses are known; the planar image whose head pose is to be determined is then input into the trained neural network, so that the overall head pose of the planar image is determined.
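As a minimal sketch of this neural-network implementation, a small regression network could map a cropped face image to the (pitch, yaw, roll) angles; the architecture, input size and loss below are illustrative assumptions and are not specified by this embodiment.

```python
import torch
import torch.nn as nn

class HeadPoseNet(nn.Module):
    """Tiny CNN that regresses (pitch, yaw, roll) from a face crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)   # pitch, yaw, roll

    def forward(self, x):              # x: (N, 3, H, W) face crops
        return self.head(self.features(x).flatten(1))

# Training would minimise, for example, nn.MSELoss() between the predicted angles
# and the known head-pose labels of the sample planar images.
```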
In addition, in another implementation, the overall head pose may also be estimated by an appearance-based method using the appearance features of the face on the planar image.
Optionally, as shown in Fig. 2, an embodiment of the above step S103 may include:
Step S1031: obtaining the position distribution of the extracted face feature points on the planar image, to obtain a first position distribution.
Step S1032: adjusting the pose of a preset three-dimensional face model, and obtaining, during the adjustment, the position distribution of the projection of the face feature points on the three-dimensional face model onto a two-dimensional plane, to obtain a second position distribution.
In the embodiment of the present invention, the face feature points on the three-dimensional face model correspond one-to-one with the extracted face feature points on the planar image.
Step S1033: obtaining the spatial pose of the three-dimensional face model when the second position distribution is consistent with the first position distribution, to obtain a target pose.
Step S1034: determining the target pose as the overall head pose corresponding to the planar image.
In the embodiment of the present invention, a standard three-dimensional face model may be preset and rotated, starting from a reference orientation (for example facing straight ahead), to adjust the direction the model faces, while the position changes of the face feature points on the model are monitored. When the position distribution of the projection of the model's face feature points onto the two-dimensional plane is consistent, or nearly consistent, with the position distribution of the face feature points extracted from the planar image, the orientation-angle rotation information of the three-dimensional face model is obtained; this rotation information is the head pose of the three-dimensional face model at that moment, i.e. the overall head pose corresponding to the planar image.
Specifically, the projection of the face feature points on the three-dimensional face model onto the two-dimensional plane may be calculated by means of camera calibration and the Direct Linear Transform (DLT), i.e. the conversion from the (three-dimensional) world coordinate system to the (two-dimensional) image coordinate system.
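The following Python sketch illustrates steps S1031 to S1034 using the six feature points named above and a generic 3D reference model; cv2.solvePnP performs the role of rotating the three-dimensional model until its projection matches the first position distribution (the DLT-style world-to-image conversion mentioned above). The 3D coordinates and camera parameters are assumptions, not values taken from this embodiment.

```python
import numpy as np
import cv2

# Generic 3D reference positions of the six feature points (model coordinates).
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),      # nose tip
    (0.0, -330.0,  -65.0),      # chin
    (-225.0, 170.0, -135.0),    # left eye, left corner
    (225.0,  170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),   # mouth, left corner
    (150.0, -150.0, -125.0),    # mouth, right corner
], dtype=np.float64)

def estimate_overall_head_pose(image_points, image_size):
    """image_points: 6x2 array of the matching 2D feature points; returns a rotation vector."""
    h, w = image_size
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)   # rough pinhole approximation
    dist_coeffs = np.zeros((4, 1))                            # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec if ok else None
```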
In step S104, the extracted face feature points are divided into a plurality of local feature groups, and the local head pose corresponding to each local feature group is determined.
In the embodiment of the present invention, the extracted face feature points are grouped to obtain a plurality of local feature groups; for each local feature group, the corresponding local head pose can be obtained by adjusting the three-dimensional face model in the same way as described above.
Optionally, the above step S104 may include:
dividing the extracted face feature points into a plurality of local feature groups, with a specified number of face feature points per group;
calculating the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose;
determining the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose as the local head pose corresponding to that local feature group.
In the embodiment of the present invention, the specified number is an integer not less than 3 and smaller than the number of extracted face feature points; for example, the specified number may be 3, i.e. every three face feature points form one local feature group.
For example, the three face feature points of the nose tip, the left corner of the left eye and the right corner of the right eye are grouped into a first local feature group; when the three-dimensional face model is in the above target pose, the orientation of the plane defined by these three feature points on the model is determined as the local head pose corresponding to the first local feature group.
Similarly, the three face feature points of the chin, the left corner of the left eye and the right corner of the right eye are grouped into a second local feature group; when the three-dimensional face model is in the above target pose, the orientation of the plane defined by these three feature points on the model is determined as the local head pose corresponding to the second local feature group.
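As a sketch of step S104, once the rotation vector of the target pose is known, each group of three model points defines a local plane whose unit normal can stand for the local head pose; the group indices below refer to the MODEL_POINTS order of the previous sketch and are illustrative only.

```python
import numpy as np
import cv2

LOCAL_GROUPS = [
    (0, 2, 3),   # nose tip, left-eye left corner, right-eye right corner
    (1, 2, 3),   # chin, left-eye left corner, right-eye right corner
    (0, 4, 5),   # nose tip, mouth left corner, mouth right corner
]

def local_plane_normals(model_points, rvec, groups=LOCAL_GROUPS):
    """Return one unit normal per local feature group with the model rotated into the target pose."""
    rotation, _ = cv2.Rodrigues(rvec)              # 3x3 rotation matrix of the target pose
    rotated = np.asarray(model_points, dtype=np.float64) @ rotation.T
    normals = []
    for i, j, k in groups:
        n = np.cross(rotated[j] - rotated[i], rotated[k] - rotated[i])
        normals.append(n / np.linalg.norm(n))
    return normals
```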
In step S105, the pose difference between the local head pose and the overall head pose is calculated.
In the embodiment of the present invention, the pose difference between the local head pose corresponding to each local feature group and the overall head pose of the planar image is calculated, and whether the detection object is a living body can then be judged from the calculated pose differences.
Further, the above step S105 may include:
obtaining the pose vector when the three-dimensional face model is in the target pose;
obtaining the plane normal vector of each local feature group when the three-dimensional face model is in the target pose;
calculating the angle between each plane normal vector and the pose vector, wherein the size of the angle indicates the size of the pose difference.
In the embodiment of the present invention, the overall head pose and the local head poses are expressed as vectors: the overall head pose is represented by the pose vector of the three-dimensional face model in the target pose, and each local head pose is represented by the normal vector of the plane of the corresponding local feature group when the model is in the target pose. The angle between each plane normal vector and the pose vector of the model in the target pose is then calculated; this angle expresses the pose difference between that local head pose and the overall head pose: the larger the angle, the larger the pose difference, and the smaller the angle, the smaller the pose difference.
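A minimal sketch of step S105 follows; the overall pose vector is taken here as the rotated model's outward facing direction, which is an assumption since this embodiment only requires the pose vector of the model in the target pose, and the absolute value of the dot product is used because the sign of a plane normal is arbitrary.

```python
import numpy as np
import cv2

def pose_difference_angles(normals, rvec, face_forward=(0.0, 0.0, 1.0)):
    """Return, in degrees, the angle between each local plane normal and the overall pose vector."""
    rotation, _ = cv2.Rodrigues(rvec)
    pose_vector = rotation @ np.asarray(face_forward, dtype=np.float64)
    pose_vector /= np.linalg.norm(pose_vector)
    angles = []
    for n in normals:
        cos_a = np.clip(abs(np.dot(n, pose_vector)), 0.0, 1.0)
        angles.append(float(np.degrees(np.arccos(cos_a))))
    return angles
```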
In step S106, whether the detection object is a living body is judged based on the pose difference.
In the embodiment of the present invention, the pose difference reflects the deviation between a local plane of the head and the overall pose. In practice, for a live subject, the pose difference normally does not exceed a threshold, so whether the detection object is a living body can be determined by comparing the pose difference with the threshold; for example, when the pose difference between the local head pose of a local feature group and the overall head pose is greater than a set threshold, it can be determined that the detection object is not a living body.
Further, the above step S106 may include:
counting a first quantity of local feature groups whose angle with the pose vector is greater than a first preset threshold;
if the first quantity is greater than a first designated value, determining that the detection object is not a living body;
if the first quantity is not greater than the first designated value, determining that the detection object is a living body.
In the embodiment of the present invention, to improve recognition accuracy, the first quantity of local feature groups whose angle with the pose vector is greater than the first preset threshold is counted; when the first quantity is greater than the first designated value, the detection object is determined not to be a living body, and when the first quantity is not greater than the first designated value, the detection object is determined to be a living body.
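The counting rule above can be sketched as follows; the first preset threshold and first designated value are illustrative numbers, since this embodiment does not fix their values.

```python
def is_live(angles, first_preset_threshold=15.0, first_designated_value=1):
    """True if the detection object is judged to be a living body."""
    first_quantity = sum(1 for a in angles if a > first_preset_threshold)
    return first_quantity <= first_designated_value
```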
Illustratively, when a living-body attack is mounted with a tablet computer or a display (the detection object being the person shown on the tablet or display), the head orientation of the face (the overall head pose) relative to the display plane and to the pointing direction of the camera (the device acquiring the planar image) remains essentially unchanged, because it is fixed by the attacking head-portrait video or image. Compared with a natural face, however, the angles at which the local planes (the local head poses) point relative to the camera will be incorrect, so an angle difference (pose difference) appears between the two kinds of vectors.
As another example, suppose the attack uses a hand-held photograph. As before, the overall head pose is unchanged, but paper distortion and changes of the hand-held tilt angle produce deformations that can be detected through the normal vectors of the local planes. The embodiment of the present invention therefore calculates the angles between the plane normal vectors of multiple groups (the local head poses) and the overall head pose vector (the overall head pose), and performs the living-body judgement based on the size of these angles; since, for a genuine living object, the angle between each group's plane normal vector and the overall head pose vector stays within the preset threshold, the judgement can be made from the size of the angles.
In summary, the present invention obtains a planar image of a detection object, uses the human face feature points on the planar image to determine the overall head pose and the local head poses of the detection object in the planar image, and judges whether the detection object is a living body according to the pose difference between the overall head pose and the local head poses. The invention thus solves the problem that existing living-body recognition either involves a complex calculation process or requires costly additional recognition equipment: on the one hand, the recognition process need not involve three-dimensional depth information, which simplifies the calculation and helps improve recognition efficiency; on the other hand, the recognition process needs no additional biometric recognition equipment, which reduces the equipment cost of living-body recognition.
Referring to Fig. 3, which is a flowchart of another embodiment of the living-body recognition method based on planar detection in an embodiment of the present invention, the method may include:
Step S301: obtaining a planar image of a detection object.
Step S302: extracting the human face feature points on the planar image.
Step S303: determining an overall head pose corresponding to the planar image based on the extracted face feature points.
Step S304: dividing the extracted face feature points into a plurality of local feature groups, and determining the local head pose corresponding to each local feature group.
Step S305: calculating the pose difference between the local head pose and the overall head pose.
In the embodiment of the present invention, for details of the above steps S301 to S305, reference can be made to steps S101 to S105 in the embodiment shown in Fig. 1, which are not repeated here.
Step S306: outputting an action instruction that instructs the detection object to perform a specified head action.
Step S307: monitoring the change rate of the pose difference while the detection object performs the head action.
Step S308: judging whether the detection object is a living body based on the change rate of the pose difference.
In the embodiment of the present invention, to further improve the recognition accuracy, in an actual implementation the detection object may also be required to perform one or more specified pose changes, such as turning the head left and right or up and down; since an attacking image cannot follow such instructions, this improves the effectiveness of the algorithm.
While the head of the detection object performs the pose-changing action, the rate of change of the angle between each plane normal vector and the head pose vector may also be evaluated in real time. This rate of change reflects the change rate of the pose difference between the overall head pose and the local head poses while the detection object performs the head action, and the living-body judgement may also be made according to this angle change rate.
In a specific implementation, a planar image captured after the detection object performs the head action may be obtained, and a pose difference is derived from that planar image in the manner described above; comparing this pose difference with the pose difference corresponding to the planar image captured before the head action yields the change rate of the pose difference.
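A minimal sketch of steps S306 to S308 follows, assuming one planar image before and one after the instructed head action; the elapsed time, the second preset threshold and the second designated value are illustrative assumptions.

```python
def is_live_after_action(angles_before, angles_after, elapsed_seconds,
                         second_preset_threshold=10.0, second_designated_value=1):
    """Judge liveness from the per-group change rate of the pose difference."""
    change_rates = [abs(after - before) / elapsed_seconds
                    for before, after in zip(angles_before, angles_after)]
    second_quantity = sum(1 for rate in change_rates if rate > second_preset_threshold)
    return second_quantity <= second_designated_value
```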
Further, the above step S308 may include:
counting a second quantity of local feature groups whose pose-difference change rate is greater than a second preset threshold;
if the second quantity is greater than a second designated value, determining that the detection object is not a living body;
if the second quantity is not greater than the second designated value, determining that the detection object is a living body.
In the embodiment of the present invention, the second quantity of local feature groups whose pose-difference change rate is greater than the second preset threshold is counted; when the second quantity is greater than the second designated value, the detection object is determined not to be a living body, and when the second quantity is not greater than the second designated value, the detection object is determined to be a living body.
In summary, the present invention obtains a planar image of a detection object, uses the human face feature points on the planar image to determine the overall head pose and the local head poses of the detection object in the planar image, and judges whether the detection object is a living body according to the pose difference between the overall head pose and the local head poses. The invention thus solves the problem that existing living-body recognition either involves a complex calculation process or requires costly additional recognition equipment: on the one hand, the recognition process need not involve three-dimensional depth information, which simplifies the calculation and helps improve recognition efficiency; on the other hand, the recognition process needs no additional biometric recognition equipment, which reduces the equipment cost of living-body recognition.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Corresponding to the living-body recognition method based on planar detection described in the foregoing embodiments, Fig. 4 shows a structural diagram of an embodiment of a living-body recognition apparatus based on planar detection provided by an embodiment of the present invention.
In this embodiment, the living-body recognition apparatus 4 based on planar detection may include: a planar image acquiring unit 41, a feature point extraction unit 42, an overall pose determination unit 43, a local pose determination unit 44, a pose difference calculation unit 45 and a living-body judging unit 46.
The planar image acquiring unit 41 is configured to obtain a face planar image of a detection object;
the feature point extraction unit 42 is configured to extract human face feature points from the face planar image;
the overall pose determination unit 43 is configured to determine an overall head pose corresponding to the planar image based on the extracted face feature points;
the local pose determination unit 44 is configured to divide the extracted face feature points into a plurality of local feature groups and determine a local head pose corresponding to each local feature group;
the pose difference calculation unit 45 is configured to calculate the pose difference between the local head pose and the overall head pose;
the living-body judging unit 46 is configured to judge whether the detection object is a living body based on the pose difference.
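By way of illustration, the units of Fig. 4 could be composed in software as below, reusing the function sketches from the method embodiment above; the helper pick_six_landmarks and the landmark indices it uses (dlib's 68-point layout) are assumptions, and this arrangement is not the only structure the embodiment permits.

```python
# Indices of nose tip, chin, eye corners and mouth corners in dlib's 68-point layout (assumed).
SIX_LANDMARK_INDICES = (30, 8, 36, 45, 48, 54)

def pick_six_landmarks(landmarks):
    """Select the six feature points used by the pose-estimation sketches."""
    return [landmarks[i] for i in SIX_LANDMARK_INDICES]

class PlanarLivenessRecognizer:
    """Wires the units of Fig. 4 together using the earlier sketches."""
    def recognize(self, planar_image_bgr):
        landmarks = extract_face_landmarks(planar_image_bgr)        # feature point extraction unit 42
        if not landmarks:
            return False
        h, w = planar_image_bgr.shape[:2]
        rvec = estimate_overall_head_pose(pick_six_landmarks(landmarks), (h, w))  # overall pose unit 43
        if rvec is None:
            return False
        normals = local_plane_normals(MODEL_POINTS, rvec)            # local pose determination unit 44
        angles = pose_difference_angles(normals, rvec)               # pose difference calculation unit 45
        return is_live(angles)                                       # living-body judging unit 46
```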
Optionally, the living-body recognition apparatus 4 based on planar detection may further include:
a first acquiring unit, configured to obtain the position distribution of the extracted face feature points on the planar image, to obtain a first position distribution;
a second acquiring unit, configured to adjust the pose of a preset three-dimensional face model and obtain, during the adjustment, the position distribution of the projection of the face feature points on the three-dimensional face model onto a two-dimensional plane, to obtain a second position distribution, wherein the face feature points on the three-dimensional face model correspond one-to-one with the extracted face feature points on the planar image;
a third acquiring unit, configured to obtain the spatial pose of the three-dimensional face model when the second position distribution is consistent with the first position distribution, to obtain a target pose.
In this case the overall pose determination unit 43 is specifically configured to determine the target pose as the overall head pose corresponding to the planar image.
Optionally, the living-body recognition apparatus 4 based on planar detection may further include:
a feature group division unit, configured to divide the extracted face feature points into a plurality of local feature groups with a specified number of face feature points per group, wherein the specified number is smaller than the number of extracted face feature points;
a spatial pose calculation unit, configured to calculate the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose.
In this case the local pose determination unit 44 is specifically configured to determine the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose as the local head pose corresponding to that local feature group.
Optionally, the living-body recognition apparatus 4 based on planar detection may further include:
a first vector acquiring unit, configured to obtain the pose vector when the three-dimensional face model is in the target pose;
a second vector acquiring unit, configured to obtain the plane normal vector of each local feature group when the three-dimensional face model is in the target pose.
In this case the pose difference calculation unit 45 is specifically configured to calculate the angle between each plane normal vector and the pose vector, wherein the size of the angle indicates the size of the pose difference.
Optionally, the living-body recognition apparatus 4 based on planar detection may further include:
a first quantity counting unit, configured to count a first quantity of local feature groups whose angle with the pose vector is greater than a first preset threshold.
In this case the living-body judging unit 46 is specifically configured to determine that the detection object is not a living body if the first quantity is greater than a first designated value, and to determine that the detection object is a living body if the first quantity is not greater than the first designated value.
Optionally, the living-body recognition apparatus 4 based on planar detection may further include:
an action indicating unit, configured to output an action instruction that instructs the detection object to perform a specified head action;
a difference monitoring unit, configured to monitor the change rate of the pose difference while the detection object performs the head action.
In this case the living-body judging unit 46 is further configured to judge whether the detection object is a living body based on the change rate of the pose difference.
Optionally, the living-body recognition apparatus 4 based on planar detection may further include:
a second quantity counting unit, configured to count a second quantity of local feature groups whose pose-difference change rate is greater than a second preset threshold.
In this case the living-body judging unit 46 is further specifically configured to determine that the detection object is not a living body if the second quantity is greater than a second designated value, and to determine that the detection object is a living body if the second quantity is not greater than the second designated value.
In summary, the present invention obtains a planar image of a detection object, uses the human face feature points on the planar image to determine the overall head pose and the local head poses of the detection object in the planar image, and judges whether the detection object is a living body according to the pose difference between the overall head pose and the local head poses. The invention thus solves the problem that existing living-body recognition either involves a complex calculation process or requires costly additional recognition equipment: on the one hand, the recognition process need not involve three-dimensional depth information, which simplifies the calculation and helps improve recognition efficiency; on the other hand, the recognition process needs no additional biometric recognition equipment, which reduces the equipment cost of living-body recognition.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the above embodiments, each embodiment has its own emphasis; for parts not described or recorded in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
Fig. 5 shows a schematic block diagram of a recognition terminal provided by an embodiment of the present invention; for ease of description, only the parts related to the embodiment of the present invention are shown.
In this embodiment, the recognition terminal 5 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The recognition terminal 5 may include a processor 50, a memory 51, and computer-readable instructions 52 stored in the memory 51 and executable on the processor 50, for example computer-readable instructions for executing the above living-body recognition method based on planar detection. When executing the computer-readable instructions 52, the processor 50 implements the steps in the above embodiments of the living-body recognition method based on planar detection, for example steps S101 to S106 shown in Fig. 1; alternatively, when executing the computer-readable instructions 52, the processor 50 implements the functions of the units in the above apparatus embodiments, for example the functions of units 41 to 46 shown in Fig. 4.
Illustratively, the computer-readable instructions 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to complete the present invention. The one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 52 in the recognition terminal 5.
The processor 50 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor.
The memory 51 may be an internal storage unit of the recognition terminal 5, such as a hard disk or memory of the recognition terminal 5. The memory 51 may also be an external storage device of the recognition terminal 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the recognition terminal 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the recognition terminal 5. The memory 51 is used to store the computer-readable instructions and other instructions and data required by the recognition terminal 5, and may also be used to temporarily store data that has been output or is to be output.
The functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several computer-readable instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing computer-readable instructions, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A living-body recognition method based on planar detection, characterized by comprising:
obtaining a planar image of a detection object;
extracting human face feature points from the planar image;
determining an overall head pose corresponding to the planar image based on the extracted face feature points;
dividing the extracted face feature points into a plurality of local feature groups, and determining a local head pose corresponding to each local feature group;
calculating the pose difference between the local head pose and the overall head pose; and
judging whether the detection object is a living body based on the pose difference.
2. The living-body recognition method based on planar detection according to claim 1, characterized in that determining the overall head pose corresponding to the planar image based on the extracted face feature points comprises:
obtaining the position distribution of the extracted face feature points on the planar image, to obtain a first position distribution;
adjusting the pose of a preset three-dimensional face model, and obtaining, during the adjustment, the position distribution of the projection of the face feature points on the three-dimensional face model onto a two-dimensional plane, to obtain a second position distribution, wherein the face feature points on the three-dimensional face model correspond one-to-one with the extracted face feature points on the planar image;
obtaining the spatial pose of the three-dimensional face model when the second position distribution is consistent with the first position distribution, to obtain a target pose; and
determining the target pose as the overall head pose corresponding to the planar image.
3. The living-body recognition method based on planar detection according to claim 2, characterized in that dividing the extracted face feature points into a plurality of local feature groups and determining the local head pose corresponding to each local feature group comprises:
dividing the extracted face feature points into a plurality of local feature groups with a specified number of face feature points per group, wherein the specified number is smaller than the number of extracted face feature points;
calculating the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose; and
determining the local spatial pose corresponding to each local feature group when the three-dimensional face model is in the target pose as the local head pose corresponding to that local feature group.
4. The living-body recognition method based on planar detection according to claim 3, characterized in that calculating the pose difference between the local head pose and the overall head pose comprises:
obtaining the pose vector when the three-dimensional face model is in the target pose;
obtaining the plane normal vector of each local feature group when the three-dimensional face model is in the target pose; and
calculating the angle between each plane normal vector and the pose vector, wherein the size of the angle indicates the size of the pose difference.
5. The living-body recognition method based on planar detection according to claim 4, characterized in that judging whether the detection object is a living body based on the pose difference comprises:
counting a first quantity of local feature groups whose angle with the pose vector is greater than a first preset threshold;
if the first quantity is greater than a first designated value, determining that the detection object is not a living body; and
if the first quantity is not greater than the first designated value, determining that the detection object is a living body.
6. The living-body recognition method based on planar detection according to any one of claims 1 to 4, characterized by further comprising, after calculating the pose difference between the local head pose and the overall head pose:
outputting an action instruction that instructs the detection object to perform a specified head action; and
monitoring the change rate of the pose difference while the detection object performs the head action;
and correspondingly, judging whether the detection object is a living body based on the pose difference is specifically:
judging whether the detection object is a living body based on the change rate of the pose difference.
7. The living-body recognition method based on planar detection according to claim 6, characterized in that judging whether the detection object is a living body based on the change rate of the pose difference comprises:
counting a second quantity of local feature groups whose pose-difference change rate is greater than a second preset threshold;
if the second quantity is greater than a second designated value, determining that the detection object is not a living body; and
if the second quantity is not greater than the second designated value, determining that the detection object is a living body.
8. A living-body recognition apparatus based on planar detection, characterized by comprising:
a planar image acquiring unit, configured to obtain a face planar image of a detection object;
a feature point extraction unit, configured to extract human face feature points from the face planar image;
an overall pose determination unit, configured to determine an overall head pose corresponding to the planar image based on the extracted face feature points;
a local pose determination unit, configured to divide the extracted face feature points into a plurality of local feature groups and determine a local head pose corresponding to each local feature group;
a pose difference calculation unit, configured to calculate the pose difference between the local head pose and the overall head pose; and
a living-body judging unit, configured to judge whether the detection object is a living body based on the pose difference.
9. A recognition terminal, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer-readable instructions, implements the steps of the living-body recognition method based on planar detection according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-readable instructions, characterized in that the computer-readable instructions, when executed by a processor, implement the steps of the living-body recognition method based on planar detection according to any one of claims 1 to 7.
CN201910111148.5A 2019-02-12 2019-02-12 Living-body recognition method, apparatus, terminal and storage medium based on planar detection Pending CN109977764A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910111148.5A CN109977764A (en) 2019-02-12 2019-02-12 Living-body recognition method, apparatus, terminal and storage medium based on planar detection
PCT/CN2019/118553 WO2020164284A1 (en) 2019-02-12 2019-11-14 Method and apparatus for recognising living body based on planar detection, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910111148.5A CN109977764A (en) 2019-02-12 2019-02-12 Living-body recognition method, apparatus, terminal and storage medium based on planar detection

Publications (1)

Publication Number Publication Date
CN109977764A true CN109977764A (en) 2019-07-05

Family

ID=67076912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910111148.5A Pending CN109977764A (en) 2019-02-12 2019-02-12 Vivo identification method, device, terminal and storage medium based on plane monitoring-network

Country Status (2)

Country Link
CN (1) CN109977764A (en)
WO (1) WO2020164284A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020164284A1 (en) * 2019-02-12 2020-08-20 平安科技(深圳)有限公司 Method and apparatus for recognising living body based on planar detection, terminal, and storage medium
CN111639582A (en) * 2020-05-26 2020-09-08 清华大学 Living body detection method and apparatus

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560742A (en) * 2020-12-23 2021-03-26 杭州趣链科技有限公司 Human face in-vivo detection method, device and equipment based on multi-scale local binary pattern
CN113724418B (en) * 2021-08-26 2023-07-04 广州小鹏自动驾驶科技有限公司 Data processing method, device and readable storage medium
CN115019400B (en) * 2022-07-19 2023-03-03 北京拙河科技有限公司 Illegal behavior detection method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392220A (en) * 2014-11-27 2015-03-04 苏州福丰科技有限公司 Three-dimensional face recognition airport security inspection method based on cloud server
CN105426827B (en) * 2015-11-09 2019-03-08 北京市商汤科技开发有限公司 Living body verification method, device and system
CN105550637B (en) * 2015-12-04 2019-03-08 小米科技有限责任公司 Profile independent positioning method and device
CN108062544A (en) * 2018-01-19 2018-05-22 百度在线网络技术(北京)有限公司 For the method and apparatus of face In vivo detection
CN109977764A (en) * 2019-02-12 2019-07-05 平安科技(深圳)有限公司 Vivo identification method, device, terminal and storage medium based on plane monitoring-network

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020164284A1 (en) * 2019-02-12 2020-08-20 平安科技(深圳)有限公司 Method and apparatus for recognising living body based on planar detection, terminal, and storage medium
CN111639582A (en) * 2020-05-26 2020-09-08 清华大学 Living body detection method and apparatus
CN111639582B (en) * 2020-05-26 2023-10-10 清华大学 Living body detection method and equipment

Also Published As

Publication number Publication date
WO2020164284A1 (en) 2020-08-20

Similar Documents

Publication Publication Date Title
CN109977764A (en) Living-body recognition method, apparatus, terminal and storage medium based on planar detection
CN107609383B (en) 3D face identity authentication method and device
CN107748869B (en) 3D face identity authentication method and device
Papazov et al. Real-time 3D head pose and facial landmark estimation from depth images using triangular surface patch features
Romdhani et al. Face recognition using 3-D models: Pose and illumination
CN104978549B (en) Three-dimensional face images feature extracting method and system
Alnajar et al. Calibration-free gaze estimation using human gaze patterns
CN105447441B (en) Face authentication method and device
CN105740780B (en) Method and device for detecting living human face
CN108549886A (en) A kind of human face in-vivo detection method and device
CN107590430A (en) Biopsy method, device, equipment and storage medium
US10121273B2 (en) Real-time reconstruction of the human body and automated avatar synthesis
CN105989331B (en) Face feature extraction element, facial feature extraction method, image processing equipment and image processing method
CN106897675A (en) The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features
CN106796449A (en) Eye-controlling focus method and device
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
CN109117755A (en) A kind of human face in-vivo detection method, system and equipment
CN110472582B (en) 3D face recognition method and device based on eye recognition and terminal
CN109086724A (en) A kind of method for detecting human face and storage medium of acceleration
CN113298158B (en) Data detection method, device, equipment and storage medium
CN108062544A (en) For the method and apparatus of face In vivo detection
CN109858433B (en) Method and device for identifying two-dimensional face picture based on three-dimensional face model
CN112633217A (en) Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
CN110188630A (en) A kind of face identification method and camera
CN106156739A (en) A kind of certificate photo ear detection analyzed based on face mask and extracting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination