CN110222573A - Face identification method, device, computer equipment and storage medium - Google Patents

Face identification method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN110222573A
CN110222573A
Authority
CN
China
Prior art keywords
image
face
detected
frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910375203.1A
Other languages
Chinese (zh)
Other versions
CN110222573B (en)
Inventor
庞烨
王健宗
王义文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910375203.1A priority Critical patent/CN110222573B/en
Publication of CN110222573A publication Critical patent/CN110222573A/en
Application granted granted Critical
Publication of CN110222573B publication Critical patent/CN110222573B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to the technical field of biometric recognition and is used for face recognition. Specifically disclosed are a face recognition method, an apparatus, a computer device and a storage medium. The method comprises: obtaining an image to be detected; performing color space conversion on the image to be detected to obtain a target image corresponding to the image to be detected in a preset color space; extracting local binary pattern feature values of the target image, and performing histogram statistics according to the local binary pattern feature values to obtain a local binary pattern histogram; inputting the obtained local binary pattern histogram into a pre-trained classification model for classification and detection to obtain a detection result; and performing face image recognition according to the detection result. The method can improve the efficiency and accuracy of face recognition and prevent spoofing attacks on face recognition.

Description

Face identification method, device, computer equipment and storage medium
Technical field
This application relates to the technical field of face recognition, and more particularly to a face recognition method, apparatus, computer device and storage medium.
Background art
At present, face recognition technology based on deep learning has made great breakthroughs. More and more offline scenarios, such as schools, companies, shopping malls, airports and railway stations, use face recognition for security protection, while mobile phones and Internet terminals also use face recognition for user information protection, payment and the like. However, current face recognition still has serious potential safety hazards; for example, an attacker can attack a face recognition system with a color photograph of the user or an electronic screen and thereby obtain the user's information. Therefore, how to improve the security of face recognition has become a technical problem to be solved urgently.
Summary of the invention
The present application provides a face recognition method, apparatus, computer device and storage medium, so as to prevent spoofing attacks in face recognition and improve the security of user information.
In a first aspect, the present application provides a face recognition method, the method comprising:
collecting a face video shot by a user according to a preset shooting rule, and determining a plurality of frames of images to be detected from the face video;
performing face detection processing on each frame of the image to be detected to obtain a face image corresponding to each frame of the image to be detected;
performing three-dimensional reconstruction on each obtained frame of the face image based on a position map regression network to obtain a depth image corresponding to each face image;
generating a depth matrix according to the depth image corresponding to each face image, and inputting the depth matrix into a pre-trained liveness detection model for detection to obtain a detection result; and
performing face image recognition on the image to be detected according to the detection result.
In a second aspect, the present application further provides a face recognition apparatus, the apparatus comprising:
a collection and determination unit, configured to collect a face video shot by a user according to a preset shooting rule and determine a plurality of frames of images to be detected from the face video;
a detection processing unit, configured to perform face detection processing on each frame of the image to be detected to obtain a face image corresponding to each frame of the image to be detected;
an image reconstruction unit, configured to perform three-dimensional reconstruction on each obtained frame of the face image based on a position map regression network to obtain a depth image corresponding to each face image;
a generation and input unit, configured to generate a depth matrix according to the depth image corresponding to each face image, and input the depth matrix into a pre-trained liveness detection model for detection to obtain a detection result; and
an image recognition unit, configured to perform face image recognition on the image to be detected according to the detection result.
In a third aspect, the present application further provides a computer device, the computer device comprising a memory and a processor; the memory is configured to store a computer program; the processor is configured to execute the computer program and, when executing the computer program, implement the face recognition method described above.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the face recognition method described above.
The present application discloses a face recognition method, apparatus, device and storage medium. A plurality of frames of images to be detected are determined from a collected face video of a user; the face image corresponding to each frame of the image to be detected is obtained; three-dimensional reconstruction is performed on each obtained frame of the face image based on a position map regression network to obtain the depth image corresponding to each face image; a depth matrix is generated according to the depth image corresponding to each face image and is input into a pre-trained liveness detection model for detection to obtain a detection result; and face image recognition is performed on the image to be detected according to the detection result. The method can improve the efficiency and accuracy of face recognition and prevent spoofing attacks on face recognition.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the accompanying drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are some embodiments of the present application, and other drawings can be obtained from these drawings by a person of ordinary skill in the art without creative effort.
Fig. 1 is a schematic flowchart of a face recognition method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of sub-steps of the face recognition method in Fig. 1;
Fig. 3 is a schematic flowchart of sub-steps of the face recognition method in Fig. 1;
Fig. 4 is a schematic flowchart of sub-steps of the face recognition method in Fig. 1;
Fig. 5a is a schematic diagram of the effect of depth image reconstruction provided by an embodiment of the present application;
Fig. 5b is a schematic diagram of the effect of depth image reconstruction provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of the network structure of the DenseNet network provided by an embodiment of the present application;
Fig. 7 is a schematic block diagram of a face recognition apparatus provided by an embodiment of the present application;
Fig. 8 is a schematic structural block diagram of a computer device provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
The flowcharts shown in the drawings are only illustrative; they need not include all contents and operations/steps, nor must the operations/steps be executed in the order described. For example, some operations/steps may be decomposed, combined or partially merged, so the actual order of execution may change according to the actual situation.
The embodiments of the present application provide a face recognition method, a face recognition apparatus, a computer device and a storage medium. The face recognition method can be applied to a terminal or a server to accurately identify the face verification information of a user, prevent spoofing attacks in face recognition, and thereby improve the security of user information.
For example, the face recognition method can be used for unlock recognition, payment recognition or information verification on a mobile terminal, can also be applied to unlock recognition in access control, and can of course also be applied to other similar fields.
The server may be an independent server or a server cluster. The terminal may be an electronic device such as a mobile phone, a tablet computer, a laptop computer, a desktop computer, a personal digital assistant or a wearable device.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. In the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a face recognition method provided by an embodiment of the present application. As shown in Fig. 1, the face recognition method specifically includes steps S101 to S105.
S101: collecting a face video shot by a user according to a preset shooting rule, and determining a plurality of frames of images to be detected from the face video.
The preset shooting rule is a rule used to instruct the user how to shoot the video. For example, the preset shooting rule may be: prompting the user to keep a frontal face posture for a certain period of time so as to capture a frontal face image of the user; then turning the head slowly to the left to capture the face images of the user during the turn and the right-profile face image; and then turning the head slowly to the right to capture the face images of the user during the turn and the left-profile face image.
In one embodiment, in order to quickly determine the images to be detected and improve the efficiency of face recognition, step S101 specifically includes the following contents, as shown in Fig. 2:
S101a: outputting video collection prompt information through the terminal, the video collection prompt information including the preset shooting rule; S101b: collecting the face video shot by the user according to the preset shooting rule, and determining, from the face video, images spaced by a preset number of frames as the images to be detected.
Specifically, the preset number of frames may be 2 frames, 4 frames or another number of frames; of course, each frame of image in the face video may also be determined as an image to be detected, or images may be collected from the face video at a specific sampling frequency and used as the images to be detected. By selecting images at an appropriate preset frame interval, the speed and accuracy of face recognition can be improved.
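Purely as an illustration of such interval sampling (the function name sample_frames, the step value and the use of OpenCV are assumptions for this sketch and are not mandated by the embodiment), one frame out of every three could be kept as follows:

```python
import cv2

def sample_frames(video_path, step=3):
    """Keep every `step`-th frame of the recorded face video as a
    candidate image to be detected (an interval of 2 skipped frames here)."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```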
In one embodiment, as shown in Fig. 3, the step of determining a plurality of frames of images to be detected from the face video specifically includes the following contents:
S1011: selecting, from the face video, the video frame image corresponding to the moment when the user's frontal face is shot as a reference image; S1012: extracting feature matching points between the other frame images in the face video and the reference image; S1013: calculating the similarity between the other frame images in the face video and the reference image according to the feature matching points; S1014: judging whether the calculated similarity is greater than a preset threshold; S1015: if the similarity is greater than the preset threshold, determining that the frame of video image corresponding to the similarity is an image to be detected.
Specifically, the SIFT algorithm may be used to extract the feature matching points between the other frame images in the face video and the reference image. For example, the SIFT features of the reference image and of the 20th frame of video image in the face video are extracted, and the SIFT features of the two video images are matched to count the number of feature matching points. The similarity can then be calculated from the counted number of feature matching points: for example, if the number of matched feature points is 800 and the total number of feature points is 1,000, the similarity is 80%. Of course, the SIFT features of the two video images may also be counted in blocks, for example by using a face-frame technique to detect the face frames of the two video images and counting only the feature matching points within the face frames. If the preset threshold is set to 50%, the 20th frame of video image in the face video can then be determined as an image to be detected.
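A minimal sketch of this similarity computation is given below; it assumes OpenCV's SIFT implementation, and the helper name sift_similarity and the ratio test are illustrative choices rather than part of the embodiment:

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def sift_similarity(reference_bgr, candidate_bgr, ratio=0.75):
    """Ratio of matched SIFT feature points to reference keypoints, used as
    the similarity between a candidate frame and the frontal-face reference."""
    ref_gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cand_gray = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2GRAY)
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_cand, des_cand = sift.detectAndCompute(cand_gray, None)
    if des_ref is None or des_cand is None or len(kp_ref) == 0:
        return 0.0
    matches = matcher.knnMatch(des_ref, des_cand, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good) / len(kp_ref)

# A frame is kept as an image to be detected when, for example,
# sift_similarity(reference, frame) > 0.5 (the preset threshold).
```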
S102: performing face detection processing on each frame of the image to be detected to obtain the face image corresponding to each frame of the image to be detected.
Specifically, face detection image processing techniques are used to perform face detection processing on each frame of the image to be detected so as to obtain the face image corresponding to each frame of the image to be detected.
In one embodiment, as shown in Fig. 4, step S102 specifically includes the following contents:
S102a: performing face detection on each frame of the image to be detected by means of the Dlib tool to determine the face region in each frame of the image to be detected; S102b: cropping the image to be detected according to the face region to cut out the face image corresponding to each frame of the image to be detected.
Specifically, the Dlib tool is called, and the face in the image to be detected is detected by the Dlib tool so as to detect face feature points; the face region is then determined according to the face feature points, so that the image to be detected can be cropped according to the face region to cut out the face image corresponding to each frame of the image to be detected.
It should be noted that other tools may also be used for face detection; for example, the face detection functions in the OpenCV tool may also be used. Of course, other approaches, such as a face image recognition method, may also be used for face detection.
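For illustration only, this step could be sketched with the Dlib frontal face detector as follows; the helper name crop_face and the choice of the first detected face are assumptions:

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

def crop_face(frame_bgr):
    """Detect the face region in one frame with Dlib and crop it out;
    returns None when no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 1)          # upsample once for small faces
    if not rects:
        return None
    r = rects[0]
    top, bottom = max(r.top(), 0), min(r.bottom(), frame_bgr.shape[0])
    left, right = max(r.left(), 0), min(r.right(), frame_bgr.shape[1])
    return frame_bgr[top:bottom, left:right]
```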
S103: performing three-dimensional reconstruction on each obtained frame of the face image based on a position map regression network to obtain the depth image corresponding to each face image.
Specifically, a position map regression network (Position map Regression Network, PRNet) is used to perform three-dimensional reconstruction on the obtained face images so as to obtain the corresponding depth images, each depth image being obtained from the reconstructed three-dimensional image.
The three-dimensional (3D) reconstruction method of PRNet is based on a UV position map, which allows the 3D face structure and alignment information to be recorded in 2D, so that a 3D face image can be expressed by a 2D image. On this basis, PRNet can perform three-dimensional reconstruction on each obtained frame of the face image to obtain the depth image corresponding to each face image.
In the 3D reconstruction method of PRNet, the projection of the 3D model onto the 2D image is regarded as a weak-perspective projection: when projecting onto the x-y plane, the ground-truth 3D face point cloud is exactly matched with the face in the 2D image, and the position of the 3D face is then defined in a left-handed Cartesian coordinate system. Pos(u, v) = (x, y, z), where (u, v) represents a key point on the face plane in the UV coordinate system and (x, y, z) represents the corresponding key point of the 3D face structure, in which (x, y) represents the position on the 2D face and z represents the depth information. The position map therefore simply replaces the r, g, b values of an image with the values of x, y, z, which makes it easy to interpret.
In this way, PRNet can be used to perform three-dimensional reconstruction on each frame of the two-dimensional face image to obtain the depth image corresponding to each face image. As shown in Fig. 5a and Fig. 5b, Fig. 5a shows the input face image and the 3D point cloud of the corresponding face alignment information; b1 in Fig. 5b is the input 2D image, b2 and b3 are the extracted UV texture map and the corresponding UV position map respectively, and b4, b5 and b6 are the x, y and z channel data of the UV position map respectively, where the z-channel data constitutes the depth image corresponding to each face image.
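As a sketch only, extracting a depth image from a PRNet-style UV position map could look like the following; the prnet_predict callable and the min-max normalisation are assumptions, since the exact PRNet interface is not specified in this embodiment:

```python
import numpy as np

def depth_from_position_map(face_crop, prnet_predict):
    """prnet_predict is assumed to return the UV position map of shape
    (H, W, 3), whose three channels are the x, y and z coordinates;
    the z channel carries the depth information used as the depth image."""
    pos_map = prnet_predict(face_crop)               # (H, W, 3)
    z = pos_map[..., 2].astype(np.float32)           # depth (z) channel
    z = (z - z.min()) / (z.max() - z.min() + 1e-8)   # normalise to [0, 1]
    return z
```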
In addition, the error distributions of the images corresponding to different depth images are different: for a frontal face, the errors on the two sides are small while the error in the middle is larger; if the face turns to the left, the error of the left half of the face is smaller than that of the right half, and similarly for a face turned to the right. Therefore, in this embodiment, a depth matrix composed of multiple depth images from the face video is used for detection, which can effectively prevent an attacker from attacking with a color photograph of the user or an electronic screen.
S104: generating a depth matrix according to the depth image corresponding to each face image, and inputting the depth matrix into a pre-trained liveness detection model for detection to obtain a detection result.
The pre-trained liveness detection model is a two-class classification model based on a DenseNet network; the DenseNet network uses the weights of a model pre-trained on ImageNet and performs the two-class classification with a Softmax function. The accuracy of detection can be improved by this improved network.
Generating the depth matrix according to the depth image corresponding to each face image comprises: saving the depth image corresponding to each face image as a column vector; and forming the depth matrix from the column vectors corresponding to the multiple depth images.
Specifically, each depth image is flattened into a column vector, and the column vectors formed from the multiple depth images are concatenated to form one matrix, namely the depth matrix. The constructed depth matrix is then input into the DenseNet network. Since the DenseNet is a two-class classification network, its output results include pass and fail, that is, the detection result is a two-class classification result.
For example, if the output result of the DenseNet network is 0, it indicates that the image to be detected corresponds to a live image; if the output result of the DenseNet network is 1, it indicates that the image to be detected corresponds to a non-live image. Inputting the depth images to the liveness detection model in the form of a matrix reduces the computational load on the device running the model, so that the detection speed and accuracy can be further improved.
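The construction of the depth matrix could be sketched as follows (NumPy is used purely for illustration, and it is assumed that all depth images share the same size):

```python
import numpy as np

def build_depth_matrix(depth_images):
    """Flatten each per-frame depth image into a column vector and
    concatenate the columns into the depth matrix that is fed to the
    liveness detection model."""
    columns = [d.reshape(-1, 1) for d in depth_images]  # one column per frame
    return np.hstack(columns)                           # shape: (H * W, num_frames)
```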
In this embodiment, an improved DenseNet is specifically used as the base classifier, and its network structure is shown in Fig. 6. The network is an improvement based on ResNet. DenseNet is a densely connected convolutional neural network: in a DenseNet network there is a connection between any two layers, the input of each layer is the set of the outputs of all preceding layers, and the feature maps learned by a layer are passed on to all subsequent layers as input. The structure inside each block is almost the same as in ResNet, a bottleneck layer is used to reduce the number of parameters, and the whole network is formed by multiple blocks. Although the dense connections appear to bring a huge number of parameters, the computational efficiency is in fact much better than that of other networks; the main advantages are the reduced computation per layer and the reuse of features. DenseNet lets the input of each layer directly influence all subsequent layers and then merges the feature maps along the channel dimension; since each layer contains the information of all preceding layers, only a small number of feature maps are needed. This dense connection is equivalent to each layer having a direct path to the input and to the loss, so the vanishing-gradient problem can be effectively reduced. It should also be emphasized that the dense connections exist only inside a block; there are no dense connections between blocks.
In addition, in the liveness detection model, the weights pre-trained on ImageNet are used, the 1000 categories of the last layer are converted into 2 categories for transfer learning, and the softmax function is replaced with a sigmoid function to perform the two-class classification, so as to achieve the purpose of liveness detection.
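As a non-authoritative sketch of such a transfer-learning setup, assuming a PyTorch/torchvision DenseNet (the framework, the DenseNet-121 variant, the 224x224 input shaping and the 0.5 decision threshold are all assumptions, since the embodiment does not fix them):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained weights and replace the 1000-class
# classifier with a single logit for the live / non-live decision.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 1)
model.eval()

def predict_liveness(batch):
    """batch: tensor of shape (N, 3, 224, 224) built from the depth matrix.
    Returns 0 for a live image and 1 for a non-live image, matching the
    output convention described above."""
    with torch.no_grad():
        prob = torch.sigmoid(model(batch))
    return (prob >= 0.5).long().squeeze(1)
```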
S105: performing face image recognition on the image to be detected according to the detection result.
Specifically, performing face image recognition on the image to be detected according to the detection result comprises: if the detection result is a live image, performing image recognition on the face image in the image to be detected; and if the detection result is a non-live image, outputting verification failure information.
Performing image recognition on the face image in the image to be detected comprises: selecting the face image in any one frame of the image to be detected, and comparing the selected face image with pre-collected face features for identification.
Specifically, the above detection result includes two possible results, namely that the face image is a live image or a non-live image, and different recognition is performed according to the different detection results. For example, if the detection result is a non-live image, the face image is not recognized and verification failure information is output; for another example, if the detection result is a live image, image recognition is performed on the face image, and verification success information is output after the image recognition passes.
Of course, if the detection result is a live image, the image recognition of the face image may be performed by comparison and identification according to the face feature points in the face image, so as to improve the speed of face recognition and thereby ensure real-time performance. If the detection result is a non-live image, an in-depth recognition analysis may also be performed on the face image in the image to be detected according to pre-collected face image information to obtain an analysis result. If the analysis result is that the user is a registered user, a preset verification policy may be selected and prompt information may be output to prompt the user to verify according to the preset verification policy. If the analysis result is that the user is not a registered user, the face image is recognized, verification failure information is output, and the device is controlled to enter a locked state.
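Purely as an illustration of comparing a selected face with pre-collected face features, the sketch below uses a cosine-similarity check; the feature extractor producing the vectors, the similarity measure and the threshold are assumptions rather than part of this embodiment:

```python
import numpy as np

def verify_face(probe_feature, enrolled_feature, threshold=0.6):
    """Compare the feature vector of the selected face image with the
    pre-collected feature of the enrolled user; verification passes when
    the cosine similarity exceeds the (assumed) threshold."""
    a = probe_feature / np.linalg.norm(probe_feature)
    b = enrolled_feature / np.linalg.norm(enrolled_feature)
    return float(np.dot(a, b)) >= threshold
```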
In the face recognition method provided by the above embodiment, a plurality of frames of images to be detected are determined from the collected face video of the user; the face image corresponding to each frame of the image to be detected is obtained; three-dimensional reconstruction is performed on each obtained frame of the face image based on a position map regression network to obtain the depth image corresponding to each face image; a depth matrix is generated according to the depth image corresponding to each face image and is input into a pre-trained liveness detection model for detection to obtain a detection result; and face image recognition is performed on the image to be detected according to the detection result. This method can improve the efficiency and accuracy of face recognition, prevents spoofing attacks on face recognition, and can run both online and offline, thus ensuring real-time performance.
Referring to Fig. 7, Fig. 7 is a schematic block diagram of a face recognition apparatus further provided by an embodiment of the present application. The face recognition apparatus is used to execute the aforementioned face recognition method, and may be configured in a server or a terminal.
As shown in Fig. 7, the face recognition apparatus 300 includes: a collection and determination unit 301, a detection processing unit 302, an image reconstruction unit 303, a generation and input unit 304 and an image recognition unit 305.
The collection and determination unit 301 is configured to collect the face video shot by the user according to the preset shooting rule and determine a plurality of frames of images to be detected from the face video.
In one embodiment, the collection and determination unit 301 is specifically configured to: output video collection prompt information through the terminal, the video collection prompt information including the preset shooting rule; and collect the face video shot by the user according to the preset shooting rule, and determine, from the face video, images spaced by the preset number of frames as the images to be detected.
In another embodiment, the collection and determination unit 301 includes: an image selection subunit 3011, a feature extraction subunit 3012, a similarity calculation subunit 3013, a similarity judgment subunit 3014 and an image determination subunit 3015.
The image selection subunit 3011 is configured to select, from the face video, the video frame image corresponding to the moment when the user's frontal face is shot as the reference image; the feature extraction subunit 3012 is configured to extract the feature matching points between the other frame images in the face video and the reference image; the similarity calculation subunit 3013 is configured to calculate the similarity between the other frame images in the face video and the reference image according to the feature matching points; the similarity judgment subunit 3014 is configured to judge whether the calculated similarity is greater than the preset threshold; and the image determination subunit 3015 is configured to determine, if the similarity is greater than the preset threshold, that the frame of video image corresponding to the similarity is an image to be detected.
The detection processing unit 302 is configured to perform face detection processing on each frame of the image to be detected to obtain the face image corresponding to each frame of the image to be detected.
In one embodiment, the detection processing unit 302 includes a detection determination subunit 3021 and a cropping subunit 3022.
The detection determination subunit 3021 is configured to perform face detection on each frame of the image to be detected by means of the Dlib tool to determine the face region in each frame of the image to be detected; the cropping subunit 3022 is configured to crop the image to be detected according to the face region so as to cut out the face image corresponding to each frame of the image to be detected.
The image reconstruction unit 303 is configured to perform three-dimensional reconstruction on each obtained frame of the face image based on the position map regression network to obtain the depth image corresponding to each face image.
The generation and input unit 304 is configured to generate the depth matrix according to the depth image corresponding to each face image, and input the depth matrix into the pre-trained liveness detection model for detection to obtain the detection result.
The image recognition unit 305 is configured to perform face image recognition on the image to be detected according to the detection result.
It should be noted that, as will be apparent to those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus and of each unit described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The above apparatus may be implemented in the form of a computer program, and the computer program may run on a computer device as shown in Fig. 8.
Referring to Fig. 8, Fig. 8 is a schematic structural block diagram of a computer device provided by an embodiment of the present application. The computer device may be a server or a terminal.
As shown in Fig. 8, the computer device includes a processor, a memory and a network interface connected through a system bus, where the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions which, when executed, cause the processor to execute any one of the face recognition methods.
The processor is used to provide computing and control capability and supports the operation of the entire computer device.
The internal memory provides an environment for the running of the computer program in the non-volatile storage medium; when the computer program is executed by the processor, the processor is caused to execute any one of the face recognition methods.
The network interface is used for network communication, for example sending assigned tasks. Those skilled in the art will understand that the structure shown in Fig. 8 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
It should be understood that the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In one embodiment, the processor is configured to run the computer program stored in the memory so as to implement the following steps:
collecting a face video shot by a user according to a preset shooting rule, and determining a plurality of frames of images to be detected from the face video; performing face detection processing on each frame of the image to be detected to obtain the face image corresponding to each frame of the image to be detected; performing three-dimensional reconstruction on each obtained frame of the face image based on a position map regression network to obtain the depth image corresponding to each face image; generating a depth matrix according to the depth image corresponding to each face image, and inputting the depth matrix into a pre-trained liveness detection model for detection to obtain a detection result; and performing face image recognition on the image to be detected according to the detection result.
In one embodiment, when implementing the collecting of the face video shot by the user according to the preset shooting rule and the determining of a plurality of frames of images to be detected from the face video, the processor is configured to implement:
outputting video collection prompt information through the terminal, the video collection prompt information including the preset shooting rule; and collecting the face video shot by the user according to the preset shooting rule, and determining, from the face video, images spaced by a preset number of frames as the images to be detected.
In one embodiment, when implementing the determining of a plurality of frames of images to be detected from the face video, the processor is configured to implement:
selecting, from the face video, the video frame image corresponding to the moment when the user's frontal face is shot as a reference image; extracting feature matching points between the other frame images in the face video and the reference image; calculating the similarity between the other frame images in the face video and the reference image according to the feature matching points; and if the similarity is greater than a preset threshold, determining that the frame of video image corresponding to the similarity is an image to be detected.
In one embodiment, when implementing the performing of face detection processing on each frame of the image to be detected to obtain the face image corresponding to each frame of the image to be detected, the processor is configured to implement:
performing face detection on each frame of the image to be detected by means of the Dlib tool to determine the face region in each frame of the image to be detected; and cropping the image to be detected according to the face region to cut out the face image corresponding to each frame of the image to be detected.
In one embodiment, the pre-trained liveness detection model is a two-class classification model based on a DenseNet network; the DenseNet network uses the weights of a model pre-trained on ImageNet and performs the two-class classification with a Softmax function.
In one embodiment, when implementing the performing of face image recognition on the image to be detected according to the detection result, the processor is configured to implement:
if the detection result is a live image, performing image recognition on the face image in the image to be detected; and if the detection result is a non-live image, outputting verification failure information.
In one embodiment, when implementing the performing of image recognition on the face image in the image to be detected, the processor is configured to implement:
selecting the face image in any one frame of the image to be detected, and comparing the selected face image with pre-collected face features for identification.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. The computer program includes program instructions, and the processor executes the program instructions to implement any face recognition method provided by the embodiments of the present application.
The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiments, such as a hard disk or memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the computer device.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the technical field can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and these modifications or replacements shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A face recognition method, comprising:
collecting a face video shot by a user according to a preset shooting rule, and determining a plurality of frames of images to be detected from the face video;
performing face detection processing on each frame of the image to be detected to obtain a face image corresponding to each frame of the image to be detected;
performing three-dimensional reconstruction on each obtained frame of the face image based on a position map regression network to obtain a depth image corresponding to each face image;
generating a depth matrix according to the depth image corresponding to each face image, and inputting the depth matrix into a pre-trained liveness detection model for detection to obtain a detection result; and
performing face image recognition on the image to be detected according to the detection result.
2. The face recognition method according to claim 1, wherein the determining a plurality of frames of images to be detected from the face video comprises:
selecting, from the face video, the video frame image corresponding to the moment when the user's frontal face is shot as a reference image;
extracting feature matching points between the other frame images in the face video and the reference image;
calculating the similarity between the other frame images in the face video and the reference image according to the feature matching points; and
if the similarity is greater than a preset threshold, determining that the frame of video image corresponding to the similarity is an image to be detected.
3. The face recognition method according to claim 1, wherein the collecting a face video shot by a user according to a preset shooting rule and determining a plurality of frames of images to be detected from the face video comprises:
outputting video collection prompt information through a terminal, the video collection prompt information including the preset shooting rule; and
collecting the face video shot by the user according to the preset shooting rule, and determining, from the face video, images spaced by a preset number of frames as the images to be detected.
4. The face recognition method according to claim 1, wherein the performing face detection processing on each frame of the image to be detected to obtain a face image corresponding to each frame of the image to be detected comprises:
performing face detection on each frame of the image to be detected by means of the Dlib tool to determine the face region in each frame of the image to be detected; and
cropping the image to be detected according to the face region to cut out the face image corresponding to each frame of the image to be detected.
5. The face recognition method according to claim 1, wherein the pre-trained liveness detection model is a two-class classification model based on a DenseNet network; the DenseNet network uses the weights of a model pre-trained on ImageNet and performs the two-class classification with a Softmax function.
6. The face recognition method according to claim 1, wherein the performing face image recognition on the image to be detected according to the detection result comprises:
if the detection result is a live image, performing image recognition on the face image in the image to be detected; and
if the detection result is a non-live image, outputting verification failure information.
7. The face recognition method according to claim 6, wherein the performing image recognition on the face image in the image to be detected comprises:
selecting the face image in any one frame of the image to be detected, and comparing the selected face image with pre-collected face features for identification.
8. A face recognition apparatus, comprising:
a collection and determination unit, configured to collect a face video shot by a user according to a preset shooting rule and determine a plurality of frames of images to be detected from the face video;
a detection processing unit, configured to perform face detection processing on each frame of the image to be detected to obtain a face image corresponding to each frame of the image to be detected;
an image reconstruction unit, configured to perform three-dimensional reconstruction on each obtained frame of the face image based on a position map regression network to obtain a depth image corresponding to each face image;
a generation and input unit, configured to generate a depth matrix according to the depth image corresponding to each face image, and input the depth matrix into a pre-trained liveness detection model for detection to obtain a detection result; and
an image recognition unit, configured to perform face image recognition on the image to be detected according to the detection result.
9. A computer device, comprising a memory and a processor;
the memory is configured to store a computer program; and
the processor is configured to execute the computer program and, when executing the computer program, implement the face recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the face recognition method according to any one of claims 1 to 7.
CN201910375203.1A 2019-05-07 2019-05-07 Face recognition method, device, computer equipment and storage medium Active CN110222573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910375203.1A CN110222573B (en) 2019-05-07 2019-05-07 Face recognition method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910375203.1A CN110222573B (en) 2019-05-07 2019-05-07 Face recognition method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110222573A true CN110222573A (en) 2019-09-10
CN110222573B CN110222573B (en) 2024-05-28

Family

ID=67820561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910375203.1A Active CN110222573B (en) 2019-05-07 2019-05-07 Face recognition method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110222573B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110757477A (en) * 2019-10-31 2020-02-07 昆山市工研院智能制造技术有限公司 Height and orientation self-adaptive adjusting method of accompanying robot and accompanying robot
CN110826486A (en) * 2019-11-05 2020-02-21 拉卡拉支付股份有限公司 Face recognition auxiliary detection method and device
CN110866454A (en) * 2019-10-23 2020-03-06 智慧眼科技股份有限公司 Human face living body detection method and system and computer readable storage medium
CN111626166A (en) * 2020-05-19 2020-09-04 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111639553A (en) * 2020-05-14 2020-09-08 青岛联合创智科技有限公司 Preparation method of customized mask device based on visual three-dimensional reconstruction
CN111666884A (en) * 2020-06-08 2020-09-15 睿云联(厦门)网络通讯技术有限公司 Living body detection method, living body detection device, computer-readable medium, and electronic apparatus
CN111723679A (en) * 2020-05-27 2020-09-29 上海五零盛同信息科技有限公司 Face and voiceprint authentication system and method based on deep migration learning
CN112613457A (en) * 2020-12-29 2021-04-06 招联消费金融有限公司 Image acquisition mode detection method and device, computer equipment and storage medium
CN112785687A (en) * 2021-01-25 2021-05-11 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
CN112818915A (en) * 2021-02-25 2021-05-18 华南理工大学 Depth counterfeit video detection method and system based on 3DMM soft biological characteristics
CN115082991A (en) * 2022-06-27 2022-09-20 平安银行股份有限公司 Face living body detection method and device and electronic equipment
CN115082995A (en) * 2022-06-27 2022-09-20 平安银行股份有限公司 Face living body detection method and device and electronic equipment
WO2024131291A1 (en) * 2022-12-22 2024-06-27 腾讯科技(深圳)有限公司 Face liveness detection method and apparatus, device, and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005025404A2 (en) * 2003-09-08 2005-03-24 Vanderbilt University Apparatus and methods of cortical surface registration and deformation tracking for patient-to-image alignment in relation to image-guided surgery
CN101996416A (en) * 2009-08-24 2011-03-30 三星电子株式会社 3D face capturing method and equipment
CN105474263A (en) * 2013-07-08 2016-04-06 高通股份有限公司 Systems and methods for producing a three-dimensional face model
CN105718863A (en) * 2016-01-15 2016-06-29 北京海鑫科金高科技股份有限公司 Living-person face detection method, device and system
CN108062544A (en) * 2018-01-19 2018-05-22 百度在线网络技术(北京)有限公司 For the method and apparatus of face In vivo detection
CN108319901A (en) * 2018-01-17 2018-07-24 百度在线网络技术(北京)有限公司 Biopsy method, device, computer equipment and the readable medium of face
CN108764024A (en) * 2018-04-09 2018-11-06 平安科技(深圳)有限公司 Generating means, method and the computer readable storage medium of human face recognition model
CN109086691A (en) * 2018-07-16 2018-12-25 阿里巴巴集团控股有限公司 A kind of three-dimensional face biopsy method, face's certification recognition methods and device
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109117755A (en) * 2018-07-25 2019-01-01 北京飞搜科技有限公司 A kind of human face in-vivo detection method, system and equipment
US20190026606A1 (en) * 2017-07-20 2019-01-24 Beijing Baidu Netcom Science And Technology Co., Ltd. To-be-detected information generating method and apparatus, living body detecting method and apparatus, device and storage medium
CN109508702A (en) * 2018-12-29 2019-03-22 安徽云森物联网科技有限公司 A kind of three-dimensional face biopsy method based on single image acquisition equipment

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005025404A2 (en) * 2003-09-08 2005-03-24 Vanderbilt University Apparatus and methods of cortical surface registration and deformation tracking for patient-to-image alignment in relation to image-guided surgery
CN101996416A (en) * 2009-08-24 2011-03-30 三星电子株式会社 3D face capturing method and equipment
CN105474263A (en) * 2013-07-08 2016-04-06 高通股份有限公司 Systems and methods for producing a three-dimensional face model
CN105718863A (en) * 2016-01-15 2016-06-29 北京海鑫科金高科技股份有限公司 Living-person face detection method, device and system
US20190026606A1 (en) * 2017-07-20 2019-01-24 Beijing Baidu Netcom Science And Technology Co., Ltd. To-be-detected information generating method and apparatus, living body detecting method and apparatus, device and storage medium
CN108319901A (en) * 2018-01-17 2018-07-24 百度在线网络技术(北京)有限公司 Biopsy method, device, computer equipment and the readable medium of face
CN108062544A (en) * 2018-01-19 2018-05-22 百度在线网络技术(北京)有限公司 For the method and apparatus of face In vivo detection
CN108764024A (en) * 2018-04-09 2018-11-06 平安科技(深圳)有限公司 Generating means, method and the computer readable storage medium of human face recognition model
CN109086691A (en) * 2018-07-16 2018-12-25 阿里巴巴集团控股有限公司 A kind of three-dimensional face biopsy method, face's certification recognition methods and device
CN109117755A (en) * 2018-07-25 2019-01-01 北京飞搜科技有限公司 A kind of human face in-vivo detection method, system and equipment
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109508702A (en) * 2018-12-29 2019-03-22 安徽云森物联网科技有限公司 A kind of three-dimensional face biopsy method based on single image acquisition equipment

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866454A (en) * 2019-10-23 2020-03-06 智慧眼科技股份有限公司 Human face living body detection method and system and computer readable storage medium
CN110757477A (en) * 2019-10-31 2020-02-07 昆山市工研院智能制造技术有限公司 Height and orientation self-adaptive adjusting method of accompanying robot and accompanying robot
CN110826486A (en) * 2019-11-05 2020-02-21 拉卡拉支付股份有限公司 Face recognition auxiliary detection method and device
CN111639553A (en) * 2020-05-14 2020-09-08 青岛联合创智科技有限公司 Preparation method of customized mask device based on visual three-dimensional reconstruction
CN111626166B (en) * 2020-05-19 2023-06-09 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN111626166A (en) * 2020-05-19 2020-09-04 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111723679A (en) * 2020-05-27 2020-09-29 上海五零盛同信息科技有限公司 Face and voiceprint authentication system and method based on deep migration learning
CN111723679B (en) * 2020-05-27 2024-09-27 上海五零盛同信息科技有限公司 Face and voiceprint authentication system and method based on deep migration learning
CN111666884A (en) * 2020-06-08 2020-09-15 睿云联(厦门)网络通讯技术有限公司 Living body detection method, living body detection device, computer-readable medium, and electronic apparatus
CN111666884B (en) * 2020-06-08 2023-08-25 睿云联(厦门)网络通讯技术有限公司 Living body detection method, living body detection device, computer readable medium and electronic equipment
CN112613457A (en) * 2020-12-29 2021-04-06 招联消费金融有限公司 Image acquisition mode detection method and device, computer equipment and storage medium
CN112613457B (en) * 2020-12-29 2024-04-09 招联消费金融股份有限公司 Image acquisition mode detection method, device, computer equipment and storage medium
CN112785687A (en) * 2021-01-25 2021-05-11 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
CN112818915A (en) * 2021-02-25 2021-05-18 华南理工大学 Depth counterfeit video detection method and system based on 3DMM soft biological characteristics
CN115082991A (en) * 2022-06-27 2022-09-20 平安银行股份有限公司 Face living body detection method and device and electronic equipment
CN115082995A (en) * 2022-06-27 2022-09-20 平安银行股份有限公司 Face living body detection method and device and electronic equipment
WO2024131291A1 (en) * 2022-12-22 2024-06-27 腾讯科技(深圳)有限公司 Face liveness detection method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN110222573B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN110222573A (en) Face identification method, device, computer equipment and storage medium
CN108351961B (en) Biological recognition system and computer implemented method based on image
CN106897658B (en) Method and device for identifying human face living body
Shen et al. Exemplar-based human action pose correction and tagging
CN108229325A (en) Method for detecting human face and system, electronic equipment, program and medium
CN112381782B (en) Human face image quality evaluation method and device, computer equipment and storage medium
CN107590430A (en) Biopsy method, device, equipment and storage medium
CN110147721A (en) A kind of three-dimensional face identification method, model training method and device
CN114758362B (en) Clothing changing pedestrian re-identification method based on semantic perception attention and visual shielding
CN107545241A (en) Neural network model is trained and biopsy method, device and storage medium
CN110490238A (en) A kind of image processing method, device and storage medium
CN108229324A (en) Gesture method for tracing and device, electronic equipment, computer storage media
CN111598051B (en) Face verification method, device, equipment and readable storage medium
CN108319901A (en) Biopsy method, device, computer equipment and the readable medium of face
TW201030630A (en) Hand gesture recognition system and method
WO2021151313A1 (en) Method and apparatus for document forgery detection, electronic device, and storage medium
CN109670517A (en) Object detection method, device, electronic equipment and target detection model
CN112132812B (en) Certificate verification method and device, electronic equipment and medium
CN107886110A (en) Method for detecting human face, device and electronic equipment
CN114973349A (en) Face image processing method and training method of face image processing model
US11361467B2 (en) Pose selection and animation of characters using video data and training techniques
CN113011326A (en) Image processing method, image processing device, storage medium and computer equipment
CN114917590B (en) Virtual reality game system
CN116978130A (en) Image processing method, image processing device, computer device, storage medium, and program product
CN115294162A (en) Target identification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant