CN110059645A - Face recognition method and system, electronic device, and storage medium - Google Patents

Face recognition method and system, electronic device, and storage medium (Download PDF)

Info

Publication number
CN110059645A
CN110059645A (application CN201910329195.7A)
Authority
CN
China
Prior art keywords
face
face image
feature point
image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910329195.7A
Other languages
Chinese (zh)
Inventor
Chen Xin (陈鑫)
Zhao Ming (赵明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Intelligent Intelligence Information Technology Co Ltd
Original Assignee
Hangzhou Intelligent Intelligence Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Intelligent Intelligence Information Technology Co Ltd
Priority to CN201910329195.7A
Publication of CN110059645A
Legal status: Pending (current)

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a face recognition method, a face recognition system, an electronic device, and a computer-readable storage medium. The method comprises: obtaining a training set, and detecting the feature point positions of the face region in each face image of the training set using a feature point regression model, wherein each face image is specifically a face image labeled with a recognition result; expanding each face image according to the feature point positions using a preset box-expansion strategy; and training a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model. The face recognition method provided by this application thus improves the accuracy of face recognition.

Description

Face recognition method and system, electronic device, and storage medium
Technical field
This application relates to the technical field of face recognition, and more specifically to a face recognition method, a face recognition system, an electronic device, and a computer-readable storage medium.
Background art
Retail stores (such as clothing stores and other shops) need to analyze customer-flow attributes, mainly by counting the gender and age distribution of the people entering the store, in order to formulate corresponding sales strategies. A face image is captured by a camera, and the gender and age information of the face image is then recognized.
In the prior art, a machine learning model is trained on a training set composed of face images, and the trained learning model is used to recognize an input face image. In practical application scenarios, however, the camera is usually placed far from the people being captured and is affected by ambient lighting, complex backgrounds, and so on, so the captured faces are small and of low resolution. A learning model trained on such data therefore recognizes face images with low accuracy.
Therefore, how to improve the accuracy of face recognition is a technical problem that those skilled in the art need to address.
Summary of the invention
The purpose of this application is to provide a face recognition method, a face recognition system, an electronic device, and a computer-readable storage medium that improve the accuracy of face recognition.
To achieve the above object, this application provides a face recognition method, comprising:
obtaining a training set, and detecting the feature point positions of the face region in each face image of the training set using a feature point regression model; wherein each face image is specifically a face image labeled with a recognition result;
expanding each face image according to the feature point positions using a preset box-expansion strategy;
training a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model.
Wherein recognizing the target face image with the trained deep learning model comprises:
when the target face image is received, detecting the feature point positions of the face region in the target face image using the feature point regression model;
expanding the target face image according to the feature point positions using the box-expansion strategy;
inputting the expanded image into the trained deep learning model to obtain the face recognition result corresponding to the original image.
Wherein, after expanding each face image according to the feature point positions using the preset box-expansion strategy, the method further comprises:
adjusting the size of each face image to a target size.
Wherein the face image is specifically a face image labeled with gender and/or age.
Wherein the deep learning model includes a MobileFaceNet learning model.
Wherein the feature point regression model includes the dlib 68-point feature point regression model.
Wherein expanding each face image according to the feature point positions using the preset box-expansion strategy comprises:
determining the interocular distance according to the feature point positions in each face image;
moving the left boundary of each face image to the left by a first distance, and moving the right boundary of each face image to the right by the first distance; wherein the first distance is the product of a first ratio and the interocular distance;
moving the upper boundary of each face image up by a second distance, and moving the lower boundary of each face image down by the second distance; wherein the second distance is the product of a second ratio and the interocular distance.
To achieve the above object, this application provides a face recognition system, comprising:
an acquisition module, configured to obtain a training set and detect the feature point positions of the face region in each face image of the training set using a feature point regression model; wherein each face image is specifically a face image labeled with a recognition result;
a box-expansion module, configured to expand each face image according to the feature point positions using a preset box-expansion strategy;
a training module, configured to train a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model.
To achieve the above object, this application provides an electronic device, comprising:
a memory for storing a computer program;
a processor configured to implement the steps of the above face recognition method when executing the computer program.
To achieve the above object, this application provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the above face recognition method.
It can be seen from the above solution that the face recognition method provided by this application comprises: obtaining a training set, and detecting the feature point positions of the face region in each face image of the training set using a feature point regression model, wherein each face image is specifically a face image labeled with a recognition result; expanding each face image according to the feature point positions using a preset box-expansion strategy; and training a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model.
In the face recognition method provided by this application, each face image in the training set is expanded with a preset box-expansion strategy. Because the expanded face image contains additional information such as hairstyle and ear studs, more features can be extracted from it, and a learning model trained on such images recognizes face images with higher accuracy. It can therefore be seen that the face recognition method provided by this application improves the accuracy of face recognition. This application also discloses a face recognition system, an electronic device, and a computer-readable storage medium that achieve the same technical effect.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit this application.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort. The drawings are provided for further understanding of the disclosure and constitute a part of the specification; together with the following detailed description they serve to explain the disclosure, but they do not limit the disclosure. In the drawings:
Fig. 1 is a flowchart of a face recognition method according to an exemplary embodiment;
Fig. 2 is a detailed flowchart of step S102 in Fig. 1;
Fig. 3 is a flowchart of another face recognition method according to an exemplary embodiment;
Fig. 4 is a schematic structural diagram of a convolution block with the bottleneck structure;
Fig. 5 is a structural diagram of a face recognition system according to an exemplary embodiment;
Fig. 6 is a structural diagram of an electronic device according to an exemplary embodiment.
Detailed description of the embodiments
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
The embodiment of this application discloses a face recognition method that improves the accuracy of face recognition.
Referring to Fig. 1, a flowchart of a face recognition method according to an exemplary embodiment, as shown in Fig. 1, the method comprises:
S101: obtaining a training set, and detecting the feature point positions of the face region in each face image of the training set using a feature point regression model; wherein each face image is specifically a face image labeled with a recognition result;
In this step, a training set for training the deep learning model is obtained. The face images in the training set are face images that have been cropped from the data set with the mtcnn face detection model and labeled with recognition results; the recognition results here are, for example, gender and age recognition results, and are not specifically limited.
After the training set is obtained, the feature point positions of the face region in each face image are detected with the feature point regression model, so that the subsequent step can perform the box-expansion operation according to these feature point positions. The feature point regression model here may preferably be the dlib 68-point feature point regression model.
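Purely as an illustration (the patent does not prescribe any particular API or code), a minimal sketch of this detection step with dlib might look as follows; the model file name and the helper function name are assumptions made for this example.

```python
# Minimal sketch of step S101 using dlib's 68-point landmark predictor.
# The model file is the one distributed with dlib; adjust the path as needed.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(image):
    """Return a list of (x, y) feature points for the first face found, or None."""
    faces = detector(image, 1)           # upsample once to help with small faces
    if not faces:
        return None
    shape = predictor(image, faces[0])   # 68 landmarks for the detected face region
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```

The returned (x, y) pairs are the feature point positions that the box-expansion step below operates on.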
S102: expanding each face image according to the feature point positions using a preset box-expansion strategy;
In this step, a preset box-expansion strategy is applied to each face image based on the feature point positions obtained in the previous step. Specifically, the width of the face image can be expanded to capture features such as the ears, the height of the face image can be expanded to capture features such as the hairstyle, and the image can of course also be expanded in the width and height directions at the same time. According to the expanded face location, the face data of the corresponding region is cropped from the original image and used as a face image of the subsequent training set.
As a preferred implementation, before the box-expansion processing of the face image, the method may further include a step of preprocessing the face image to obtain a standard face image. The specific preprocessing operations are not limited here. Preprocessing the target face image may include adjusting the size of the target face image; for example, face images can be normalized to the same size of 112 × 112. Preprocessing the target face image may also include identifying the eye positions in the target face image and rectifying the target face image according to the eye positions, so that the face in the target face image is an upright, frontal face. In a specific implementation, the face in the target face image is rotated according to the eye positions until the two eyes are level, i.e. the face is rectified to an upright pose.
As a preferred implementation, after each face image is expanded according to the feature point positions using the preset box-expansion strategy, the method further includes adjusting the size of each face image to a target size. Because the face region after expansion is mostly not square, forcing it into a square as in the image preprocessing operation above would introduce some deformation; the face image can therefore be normalized to a fixed size of 256 × 192, i.e. an aspect ratio of about 4:3.
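The following is a minimal sketch, under assumed helper names and dlib's 68-point eye indices, of the preprocessing described above: rotating the face until the eyes are level and resizing to the fixed 256 × 192 target size. It is an illustration only, not code from the patent.

```python
# Illustrative preprocessing: make the eyes level, then resize to 256 x 192 (height x width).
import math
import cv2
import numpy as np

def align_and_resize(image, landmarks, target_size=(192, 256)):  # (width, height) for cv2
    pts = np.array(landmarks, dtype=np.float32)
    left_eye = pts[36:42].mean(axis=0)        # dlib's left-eye outline points
    right_eye = pts[42:48].mean(axis=0)       # dlib's right-eye outline points
    dy, dx = right_eye[1] - left_eye[1], right_eye[0] - left_eye[0]
    angle = math.degrees(math.atan2(dy, dx))  # rotation needed to make the eyes level
    center = tuple(((left_eye + right_eye) / 2).tolist())
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    rotated = cv2.warpAffine(image, M, (w, h))
    return cv2.resize(rotated, target_size)
```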
S103: training a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model.
This step does not limit the specific deep learning model; for example, it may include vgg, resnet101, MobileFaceNet, and so on.
Recognizing a target face image with the trained deep learning model specifically comprises: when the target face image is received, detecting the feature point positions of the face region in the target face image using the feature point regression model; expanding the target face image according to the feature point positions using the box-expansion strategy; and inputting the expanded image into the trained deep learning model to obtain the face recognition result corresponding to the original image.
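As an illustration of the inference flow just described, the steps might be wired together as follows; detect_landmarks and expand_box are the assumed helpers sketched elsewhere in this description (expand_box appears after the box-expansion embodiment below), and none of these names come from the patent.

```python
# Illustrative inference pipeline: landmarks -> box expansion -> resize -> trained model.
import cv2

def recognize(image, model, input_size=(192, 256)):   # (width, height) for cv2
    landmarks = detect_landmarks(image)                # landmark detection as in S101
    if landmarks is None:
        return None
    crop = expand_box(image, landmarks)                # box expansion around the face region
    crop = cv2.resize(crop, input_size)                # normalize to the fixed input size
    return model(crop)                                 # trained deep learning model output
```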
In the face recognition method provided by this embodiment of the application, each face image in the training set is expanded with a preset box-expansion strategy. Because the expanded face image contains additional information such as hairstyle and ear studs, more features can be extracted from it, and a learning model trained on such images recognizes face images with higher accuracy. The face recognition method provided by this embodiment therefore improves the accuracy of face recognition.
A specific box-expansion method is described below. As shown in Fig. 2, step S102 of the previous embodiment may include:
S21: determining the interocular distance according to the feature point positions in each face image;
S22: moving the left boundary of each face image to the left by a first distance, and moving the right boundary of each face image to the right by the first distance; wherein the first distance is the product of a first ratio and the interocular distance;
S23: moving the upper boundary of each face image up by a second distance, and moving the lower boundary of each face image down by the second distance; wherein the second distance is the product of a second ratio and the interocular distance.
In this embodiment, the interocular distance is used as the basis for the box-expansion operation. The interocular distance is first determined according to the feature point positions, and the four boundaries of the face image are then each expanded outward by a certain amount in proportion to the interocular distance: the left and right boundaries are each expanded by a first distance, where the first distance is the product of a first ratio and the interocular distance, and the upper and lower boundaries are each expanded by a second distance, where the second distance is the product of a second ratio and the interocular distance. The first ratio and the second ratio are not specifically limited here; for example, the first ratio may be 1/4 and the second ratio may be 1/2. The box-expansion operation is then: the width is expanded to the left and to the right by 1/4 of the interocular distance on each side, and the height is expanded upward and downward by 1/2 of the interocular distance on each side.
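A minimal sketch of steps S21 to S23, using the example ratios 1/4 and 1/2 given above and dlib's 68-point eye indices, might look as follows; the function name and the clipping to the original image are assumptions made for the example.

```python
# Illustrative box expansion: widen by 1/4 of the interocular distance on each side,
# heighten by 1/2 of the interocular distance on each side, then crop from the original image.
import numpy as np

def expand_box(image, landmarks, ratio_w=0.25, ratio_h=0.5):
    pts = np.array(landmarks, dtype=np.float32)
    left_eye = pts[36:42].mean(axis=0)
    right_eye = pts[42:48].mean(axis=0)
    eye_dist = float(np.linalg.norm(right_eye - left_eye))  # interocular distance

    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    h, w = image.shape[:2]

    # Move each boundary outward, clipped to the original image.
    x0 = max(int(x_min - ratio_w * eye_dist), 0)
    x1 = min(int(x_max + ratio_w * eye_dist), w)
    y0 = max(int(y_min - ratio_h * eye_dist), 0)
    y1 = min(int(y_max + ratio_h * eye_dist), h)
    return image[y0:y1, x0:x1]
```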
The embodiment of this application further discloses a face recognition method. Compared with the first embodiment, this embodiment further describes and optimizes the technical solution. Specifically:
Referring to Fig. 3, a flowchart of another face recognition method according to an exemplary embodiment, as shown in Fig. 3, the method comprises:
S201: obtaining a training set, and detecting the feature point positions of the face region in each face image of the training set using a feature point regression model; wherein each face image is specifically a face image labeled with gender and age;
In this embodiment, the face images in the training set are labeled with gender and age. The gender labels are male and female, so the gender classification output by the subsequent target learning model is a two-class classification. The age labels may cover 0 to 99 years, with ages above 99 treated as 99, so the age classification output by the subsequent target learning model is a 100-class classification. The age labels may of course also be set according to age bands; for example, the age label may be an N-dimensional array, in which case the age classification output by the subsequent target learning model is an N-class classification, with the 1st entry of the label corresponding to ages 0 to 9, the 2nd entry corresponding to ages 10 to 19, and so on. This is not specifically limited here.
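For illustration only, the two labeling schemes described above could be encoded as follows; the function names are assumptions.

```python
# Illustrative encodings of the age labels described above.
def age_label_per_year(age: int) -> int:
    """100-class label: one class per year, ages above 99 are treated as 99."""
    return min(max(age, 0), 99)

def age_label_band(age: int, band: int = 10) -> int:
    """Banded label: class 0 covers ages 0-9, class 1 covers 10-19, and so on."""
    return min(max(age, 0), 99) // band
```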
S202: expanding each face image according to the feature point positions using a preset box-expansion strategy;
S203: training a MobileFaceNet learning model with the expanded face images to obtain a trained target learning model, so that a target face image can be recognized with the target learning model.
In a specific implementation, the MobileFaceNet learning model is trained with the training set obtained in the previous step to obtain the trained target learning model. MobileFaceNet is a relatively recent deep learning model that is mainly applied to face recognition and is well suited to face-related applications. During training, the age branch and the gender branch are trained at the same time to obtain the target learning model, so that a single learning model outputs the gender and age classification results simultaneously.
Preferably, a cross-entropy loss function is used during training. Thanks to MobileFaceNet's lightweight, efficient network design and its loss function design, the accuracy and efficiency requirements can be met at the same time.
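A minimal sketch of one joint training step with a cross-entropy loss on each branch is shown below, assuming the model returns one logit tensor per branch; it is an illustration, not the patent's reference implementation.

```python
# Illustrative joint gender/age training step with per-branch cross-entropy losses.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def train_step(model, optimizer, images, gender_labels, age_labels):
    optimizer.zero_grad()
    gender_logits, age_logits = model(images)          # two output branches
    loss = criterion(gender_logits, gender_labels) + criterion(age_logits, age_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```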
In a specific implementation, the MobileFaceNet learning model includes a feature extraction layer for extracting features from the input face image; a first batchnorm layer corresponding to the gender branch and a second batchnorm layer corresponding to the age branch, both connected to the feature extraction layer; a first fully connected layer corresponding to the gender branch and connected to the first batchnorm layer; and a second fully connected layer corresponding to the age branch and connected to the second batchnorm layer.
The specific network structure of the feature extraction layer of the MobileFaceNet learning model is shown in Table 1:
Table 1
In Table 1, Input denotes the size and shape of the input features, Operator denotes the operation of each step, t is the expansion parameter used in the bottleneck block, c is the number of convolution kernels, i.e. the number of channels of the output feature map, n is the number of times the operation in the row is repeated, and s is the stride of the convolution or pooling operation.
conv denotes a convolution operation, and conv3 × 3 denotes a convolution with a 3 × 3 kernel; depthwise denotes a depthwise convolution; bottleneck denotes a convolution block with the bottleneck structure, whose structure is shown in Fig. 4. GDConv (global depthwise convolution) is a global separable convolution: if the input feature map has size h × w, the kernel of the global separable convolution is also h × w, and its number of channels equals the feature dimension.
The above feature extraction layer extracts features from the input face image with the global separable convolution. Replacing global pooling with the global separable convolution retains as much face feature information as possible, improves the accuracy of feature extraction, and thereby improves the accuracy of face recognition.
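For illustration, a global depthwise convolution (GDConv) layer of the kind described above can be sketched as follows; the 7 × 7 input feature size is an assumption chosen for the example.

```python
# Illustrative GDConv: a depthwise convolution whose kernel matches the spatial size of the
# input feature map, so it replaces global pooling with a learned, per-position weighting.
import torch.nn as nn

class GDConv(nn.Module):
    def __init__(self, channels: int, feature_size=(7, 7)):
        super().__init__()
        # groups=channels makes the convolution depthwise; the kernel covers the whole map.
        self.conv = nn.Conv2d(channels, channels, kernel_size=feature_size,
                              groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.bn(self.conv(x))   # output spatial size is 1 x 1
```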
The gender branch of the MobileFaceNet learning model is: a 3 × 3 × 32 convolutional layer with stride 1 connected to the above feature extraction layer, a bn (batchnorm) layer connected to this convolutional layer, and a fully connected layer connected to the bn layer; the output of this fully connected layer is a two-class classification, i.e. the output of the gender branch is a two-class classification.
The age branch of the MobileFaceNet learning model is: a 3 × 3 × 32 convolutional layer with stride 1 connected to the above feature extraction layer, a bn (batchnorm) layer connected to this convolutional layer, and a fully connected layer connected to the bn layer; the specific number of output classes is not limited here.
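A sketch of the two branches described above (a 3 × 3 × 32 convolution with stride 1, a batchnorm layer, and a fully connected layer per branch) is given below; the input channel count, feature map size, and 100 age classes are assumptions made so the example is self-contained.

```python
# Illustrative gender/age classification head with the structure described above.
import torch
import torch.nn as nn

class GenderAgeHead(nn.Module):
    def __init__(self, in_channels: int = 128, feat_size: int = 7,
                 num_age_classes: int = 100):
        super().__init__()
        def branch(num_classes):
            return nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(32),
                nn.Flatten(),
                nn.Linear(32 * feat_size * feat_size, num_classes),
            )
        self.gender = branch(2)                 # two-class gender output
        self.age = branch(num_age_classes)      # e.g. 100-class age output

    def forward(self, features):
        return self.gender(features), self.age(features)
```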
It can be seen that, in this embodiment, the gender and the age of a face image are recognized with the trained MobileFaceNet learning model. The MobileFaceNet learning model uses a lightweight, efficient network design and loss function design, so it can address the efficiency and the accuracy of face recognition at the same time. In addition, the training set used for training contains face images labeled with both gender and age, so the trained MobileFaceNet learning model can output the gender and age recognition results at the same time, which further improves the efficiency of face recognition.
A face recognition system provided by an embodiment of this application is introduced below. The face recognition system described below and the face recognition method described above may be cross-referenced.
Referring to Fig. 5, a structural diagram of a face recognition system according to an exemplary embodiment, as shown in Fig. 5, the system comprises:
an acquisition module 501, configured to obtain a training set and detect the feature point positions of the face region in each face image of the training set using a feature point regression model; wherein each face image is specifically a face image labeled with a recognition result;
a box-expansion module 502, configured to expand each face image according to the feature point positions using a preset box-expansion strategy;
a training module 503, configured to train a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model.
In the face recognition system provided by this embodiment of the application, each face image in the training set is expanded with a preset box-expansion strategy. Because the expanded face image contains additional information such as hairstyle and ear studs, more features can be extracted from it, and a learning model trained on such images recognizes face images with higher accuracy. The face recognition system provided by this embodiment therefore improves the accuracy of face recognition.
On the basis of the above embodiment, as a preferred implementation, the training module 503 comprises:
a training unit, configured to train the deep learning model with the expanded face images;
a detection unit, configured to detect, when the target face image is received, the feature point positions of the face region in the target face image using the feature point regression model;
a box-expansion unit, configured to expand the target face image according to the feature point positions using the box-expansion strategy;
an input unit, configured to input the expanded image into the trained deep learning model to obtain the face recognition result corresponding to the original image.
On the basis of the above embodiment, as a preferred implementation, the system further comprises:
an adjustment module, configured to adjust the size of each face image to a target size.
On the basis of the above embodiment, as a preferred implementation, the face image is specifically a face image labeled with gender and/or age.
On the basis of the above embodiment, as a preferred implementation, the deep learning model includes a MobileFaceNet learning model.
On the basis of the above embodiment, as a preferred implementation, the feature point regression model includes the dlib 68-point feature point regression model.
On the basis of the above embodiment, as a preferred implementation, the box-expansion module 502 comprises:
a determination unit, configured to determine the interocular distance according to the feature point positions in each face image;
a first moving unit, configured to move the left boundary of each face image to the left by a first distance and move the right boundary of each face image to the right by the first distance; wherein the first distance is the product of a first ratio and the interocular distance;
a second moving unit, configured to move the upper boundary of each face image up by a second distance and move the lower boundary of each face image down by the second distance; wherein the second distance is the product of a second ratio and the interocular distance.
With regard to the system in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.
This application also provides an electronic device. Referring to Fig. 6, a structural diagram of an electronic device 600 provided by an embodiment of this application, as shown in Fig. 6, the electronic device may include a processor 11 and a memory 12. The electronic device 600 may further include one or more of a multimedia component 13, an input/output (I/O) interface 14, and a communication component 15.
The processor 11 is configured to control the overall operation of the electronic device 600 to complete all or part of the steps of the above face recognition method. The memory 12 is configured to store various types of data to support operation on the electronic device 600; such data may include, for example, instructions of any application or method operating on the electronic device 600, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on. The memory 12 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc. The multimedia component 13 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals; for example, the audio component may include a microphone for receiving external audio signals, and the received audio signals may be further stored in the memory 12 or sent through the communication component 15. The audio component also includes at least one loudspeaker for outputting audio signals. The I/O interface 14 provides an interface between the processor 11 and other interface modules, which may be a keyboard, a mouse, buttons, and so on; these buttons may be virtual buttons or physical buttons. The communication component 15 is used for wired or wireless communication between the electronic device 600 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near-field communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so the corresponding communication component 15 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic elements for performing the above face recognition method.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided, where the program instructions, when executed by a processor, implement the steps of the above face recognition method. For example, the computer-readable storage medium may be the above memory 12 including program instructions, and the program instructions may be executed by the processor 11 of the electronic device 600 to complete the above face recognition method.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. As for the system disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively simple, and the relevant parts may refer to the description of the method. It should be pointed out that those of ordinary skill in the art can also make several improvements and modifications to this application without departing from the principles of this application, and these improvements and modifications also fall within the protection scope of the claims of this application.
It should also be noted that, in this specification, relational terms such as first and second are only used to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.

Claims (10)

1. A face recognition method, characterized by comprising:
obtaining a training set, and detecting the feature point positions of the face region in each face image of the training set using a feature point regression model; wherein each face image is specifically a face image labeled with a recognition result;
expanding each face image according to the feature point positions using a preset box-expansion strategy;
training a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model.
2. The face recognition method according to claim 1, characterized in that recognizing the target face image with the trained deep learning model comprises:
when the target face image is received, detecting the feature point positions of the face region in the target face image using the feature point regression model;
expanding the target face image according to the feature point positions using the box-expansion strategy;
inputting the expanded image into the trained deep learning model to obtain the face recognition result corresponding to the original image.
3. The face recognition method according to claim 1, characterized in that, after expanding each face image according to the feature point positions using the preset box-expansion strategy, the method further comprises:
adjusting the size of each face image to a target size.
4. The face recognition method according to claim 1, characterized in that the face image is specifically a face image labeled with gender and/or age.
5. The face recognition method according to claim 1, characterized in that the deep learning model includes a MobileFaceNet learning model.
6. The face recognition method according to claim 1, characterized in that the feature point regression model includes the dlib 68-point feature point regression model.
7. The face recognition method according to any one of claims 1 to 6, characterized in that expanding each face image according to the feature point positions using the preset box-expansion strategy comprises:
determining the interocular distance according to the feature point positions in each face image;
moving the left boundary of each face image to the left by a first distance, and moving the right boundary of each face image to the right by the first distance; wherein the first distance is the product of a first ratio and the interocular distance;
moving the upper boundary of each face image up by a second distance, and moving the lower boundary of each face image down by the second distance; wherein the second distance is the product of a second ratio and the interocular distance.
8. A face recognition system, characterized by comprising:
an acquisition module, configured to obtain a training set and detect the feature point positions of the face region in each face image of the training set using a feature point regression model; wherein each face image is specifically a face image labeled with a recognition result;
a box-expansion module, configured to expand each face image according to the feature point positions using a preset box-expansion strategy;
a training module, configured to train a deep learning model with the expanded face images, so that a target face image can be recognized with the trained deep learning model.
9. An electronic device, characterized by comprising:
a memory for storing a computer program;
a processor configured to implement the steps of the face recognition method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the face recognition method according to any one of claims 1 to 7.
CN201910329195.7A 2019-04-23 2019-04-23 Face recognition method and system, electronic device and storage medium Pending CN110059645A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910329195.7A CN110059645A (en) 2019-04-23 2019-04-23 Face recognition method and system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910329195.7A CN110059645A (en) 2019-04-23 2019-04-23 Face recognition method and system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN110059645A true CN110059645A (en) 2019-07-26

Family

ID=67320176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910329195.7A Pending CN110059645A (en) 2019-04-23 2019-04-23 A kind of face identification method, system and electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110059645A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503669A (en) * 2016-11-02 2017-03-15 重庆中科云丛科技有限公司 Training and recognition method and system based on a multi-task deep learning network
CN107609459A (en) * 2016-12-15 2018-01-19 平安科技(深圳)有限公司 Face recognition method and device based on deep learning
CN108288280A (en) * 2017-12-28 2018-07-17 杭州宇泛智能科技有限公司 Dynamic face recognition method and device based on video stream

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291749A (en) * 2020-01-20 2020-06-16 深圳市优必选科技股份有限公司 Gesture recognition method and device and robot
CN111291749B (en) * 2020-01-20 2024-04-23 深圳市优必选科技股份有限公司 Gesture recognition method and device and robot
CN112784680A (en) * 2020-12-23 2021-05-11 中国人民大学 Method and system for locking onto close contacts in crowded places
CN112784680B (en) * 2020-12-23 2024-02-02 中国人民大学 Method and system for locking onto close contacts in crowded places

Similar Documents

Publication Publication Date Title
CN110033332A (en) Face recognition method and system, electronic device and storage medium
CN108875523B (en) Human body joint point detection method, device, system and storage medium
CN104350509B (en) Quick attitude detector
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN110084174A (en) Face recognition method and system, electronic device and storage medium
CN109145766A (en) Model training method, device, recognition methods, electronic equipment and storage medium
EP3523754A1 (en) Face liveness detection method and apparatus, and electronic device
CN110309706A (en) Face critical point detection method, apparatus, computer equipment and storage medium
US9076221B2 (en) Removing an object from an image
WO2019024717A1 (en) Anti-counterfeiting processing method and related product
US10326928B2 (en) Image processing apparatus for determining whether section of target area matches section of person area and control method thereof
EP3647992A1 (en) Face image processing method and apparatus, storage medium, and electronic device
CN110046941A (en) Face recognition method and system, electronic device and storage medium
CN109815843A (en) Object detection method and Related product
CN106295533A (en) Optimization method and device for a selfie image, and camera terminal
CN110135449A (en) Manufactured part recognition using computer vision and machine learning
WO2021169257A1 (en) Face recognition
CN104077597B (en) Image classification method and device
EP4030749A1 (en) Image photographing method and apparatus
EP4287068A1 (en) Model training method, scene recognition method, and related device
CN109840885B (en) Image fusion method and related product
CN106096043A (en) Photographing method and mobile terminal
CN107220614A (en) Image-recognizing method, device and computer-readable recording medium
CN110059645A (en) Face recognition method and system, electronic device and storage medium
CN105979358A (en) Volume adjusting method and apparatus and smart terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination