CN108268822A - Face identification method, device and robot - Google Patents

Face identification method, device and robot

Info

Publication number
CN108268822A
CN108268822A (application CN201611263999.4A)
Authority
CN
China
Prior art keywords
face
image
recognition
image to be recognized
fully connected layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611263999.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (non-publication requested)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Institute of artificial intelligence
Original Assignee
Kuang Chi Innovative Technology Ltd
Shenzhen Guangqi Hezhong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kuang Chi Innovative Technology Ltd and Shenzhen Guangqi Hezhong Technology Co Ltd
Priority to CN201611263999.4A
Publication of CN108268822A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses a face recognition method, a face recognition device and a robot. The method includes: extracting image features from a face image to be recognized; based on the image features, detecting whether the face image to be recognized contains a face region, and obtaining a first result, where the first result indicates whether the face image to be recognized contains a face region; based on the same image features, and while detecting whether the face image to be recognized contains a face region, locating the facial-organ regions of the face image to be recognized, and obtaining a second result, where the second result marks the facial-organ regions in the face image to be recognized; and combining the first result and the second result to obtain the recognition result of the face image to be recognized. The invention solves the technical problem that face recognition programs based on convolutional neural networks run slowly.

Description

Face identification method, device and robot
Technical field
The present invention relates to the field of face recognition, and in particular to a face recognition method, a face recognition device and a robot.
Background art
In the prior art, when deep convolutional neural networks are used, face detection and facial-feature localization are generally performed separately. Two convolutional neural networks of different depths are typically used: the first network detects the face region, the detected face region is then cropped, and a second convolutional neural network performs facial-feature localization on the crop. During face detection, the convolutional neural network learns the features of the face region from the image, and a classifier then distinguishes face regions from non-face regions, achieving the purpose of face detection. Likewise, facial-feature localization is performed on the cropped face region: a convolutional neural network learns the features and positions of the facial organs within that region to locate them. In this process the image of the face region passes through convolutional feature extraction twice, so the region's features are learned and computed repeatedly, which significantly increases computation latency.
For the above problem that face recognition programs based on convolutional neural networks run slowly, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a face recognition method, a face recognition device and a robot, so as to at least solve the technical problem that face recognition programs based on convolutional neural networks run slowly.
According to one aspect of the embodiments of the present invention, a face recognition method is provided, including: extracting image features from a face image to be recognized; based on the image features of the face image to be recognized, detecting whether the face image to be recognized contains a face region, and obtaining a first result, where the first result indicates whether the face image to be recognized contains a face region; based on the image features of the face image to be recognized, and while detecting whether the face image to be recognized contains a face region, locating the facial-organ regions of the face image to be recognized, and obtaining a second result, where the second result marks the facial-organ regions in the face image to be recognized; and combining the first result and the second result to obtain the recognition result of the face image to be recognized.
Further, before extracting the image features from the face image to be recognized, the method further includes: building a convolutional neural network, where the convolutional neural network includes a first fully connected layer and a second fully connected layer. Detecting whether the face image to be recognized contains a face region includes: using the first fully connected layer to detect whether the face image to be recognized contains a face region, and obtaining the first result. Locating the facial-organ regions of the face image to be recognized includes: using the second fully connected layer to locate the facial-organ regions of the face image to be recognized, and obtaining the second result.
Further, face features are recorded in the first fully connected layer, and using the first fully connected layer to detect whether the face image to be recognized contains a face region includes: detecting whether the image features of the face image to be recognized contain the face features; if they do, determining that the face image to be recognized contains the face region; and if they do not, determining that the face image to be recognized does not contain the face region.
Further, a classifier is recorded in the second fully connected layer, and facial-organ features are recorded in the classifier. Using the second fully connected layer to locate the facial-organ regions of the face image to be recognized includes: using the classifier to assign the image features of the face image to be recognized to the categories corresponding to the facial-organ features, where each category corresponds to one organ region among the facial-organ regions.
Further, building the convolutional neural network includes: setting up the first fully connected layer and the second fully connected layer; obtaining face features and facial-organ features; training the first fully connected layer with the face features to obtain a first loss value, and training the second fully connected layer with the facial-organ features to obtain a second loss value; adding the first loss value and the second loss value to obtain the loss value of the convolutional neural network; and training the parameters of the convolutional neural network with this loss value to obtain the trained convolutional neural network, where the parameters include the coefficients in the cost functions of the first fully connected layer and the second fully connected layer.
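Written out as an equation (the notation below is ours, not the patent's), the loss superposition described above simply sums the two branch losses:

    L_total = L_detection + L_organ

where L_detection is the loss value obtained from training the first fully connected layer with the face features, L_organ is the loss value obtained from training the second fully connected layer with the facial-organ features, and L_total is the loss value used to train the parameters of the whole convolutional neural network.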
According to another aspect of the embodiments of the present invention, a face recognition device is provided, including: an extraction unit, configured to extract image features from a face image to be recognized; a detection unit, configured to detect, based on the image features of the face image to be recognized, whether the face image to be recognized contains a face region and obtain a first result, where the first result indicates whether the face image to be recognized contains a face region; a localization unit, configured to locate, based on the same image features and while the detection is performed, the facial-organ regions of the face image to be recognized and obtain a second result, where the second result marks the facial-organ regions in the face image to be recognized; and an integration unit, configured to combine the first result and the second result to obtain the recognition result of the face image to be recognized.
Further, the device further includes: a creation module, configured to build a convolutional neural network, where the convolutional neural network includes a first fully connected layer and a second fully connected layer. The detection unit includes: a detection module, configured to use the first fully connected layer to detect whether the face image to be recognized contains a face region and obtain the first result. The localization unit includes: a localization module, configured to use the second fully connected layer to locate the facial-organ regions of the face image to be recognized and obtain the second result.
Further, face features are recorded in the first fully connected layer built by the creation module, and the detection module includes: a detection sub-module, configured to detect whether the image features of the face image to be recognized contain the face features; a first determination module, configured to determine that the face image to be recognized contains the face region when the image features contain the face features; and a second determination module, configured to determine that the face image to be recognized does not contain the face region when the image features do not contain the face features.
Further, a classifier is recorded in the second fully connected layer built by the creation module, facial-organ features are recorded in the classifier, and the localization module includes: an assignment module, configured to use the classifier to assign the image features of the face image to be recognized to the categories corresponding to the facial-organ features, where each category corresponds to one organ region among the facial-organ regions.
Further, the creation module includes: a setup module, configured to set up the first fully connected layer and the second fully connected layer; an acquisition module, configured to obtain face features and facial-organ features; a first training module, configured to train the first fully connected layer with the face features to obtain a first loss value and to train the second fully connected layer with the facial-organ features to obtain a second loss value; a superposition module, configured to add the first loss value and the second loss value to obtain the loss value of the convolutional neural network; and a second training module, configured to train the parameters of the convolutional neural network with the loss value to obtain the trained convolutional neural network, where the parameters include the coefficients in the cost functions of the first fully connected layer and the second fully connected layer.
According to a further aspect of the embodiments of the present invention, a face recognition robot is provided, including: an image acquisition device, configured to acquire a face image to be recognized; and the above face recognition device.
In the embodiments of the present invention, image features are extracted from the face image to be recognized only once, and both face detection and facial-organ localization are performed on the extracted features: detecting whether the face image to be recognized contains a face region yields the first result, and locating the facial-organ regions of the face image yields the second result; the two results are then combined to obtain the recognition result of the face image to be recognized. Because a single feature extraction serves both face detection and facial-organ localization, repeated feature extraction on the same image is avoided, which speeds up face detection and facial-organ localization and solves the technical problem that face recognition programs based on convolutional neural networks run slowly.
Description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their descriptions explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a face recognition device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a face recognition robot according to an embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the specification, the claims and the accompanying drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.
According to an embodiment of the present invention, an embodiment of a face recognition method is provided. It should be noted that the steps illustrated in the flowchart of the accompanying drawing may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one given here.
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: extracting image features from a face image to be recognized;
Step S104: based on the image features of the face image to be recognized, detecting whether the face image to be recognized contains a face region, and obtaining a first result, where the first result indicates whether the face image to be recognized contains a face region;
Step S106: based on the image features of the face image to be recognized, and while detecting whether the face image to be recognized contains a face region, locating the facial-organ regions of the face image to be recognized, and obtaining a second result, where the second result marks the facial-organ regions in the face image to be recognized;
Step S108: combining the first result and the second result to obtain the recognition result of the face image to be recognized.
In the above embodiment, image features are extracted only once from the face image on which recognition needs to be performed, and face detection and facial-organ localization are both carried out on the extracted image features: detecting whether the face image to be recognized contains a face region yields the first result, and, while that detection is performed, locating the facial-organ regions of the face image yields the second result; the two results are then combined to obtain the recognition result of the face image to be recognized. Because a single image feature extraction serves both tasks, repeated feature extraction on the face image to be recognized is avoided, the speed of face detection and facial-organ localization is increased, and the technical problem that face recognition programs based on convolutional neural networks run slowly is solved.
Optionally, the face image to be recognized may be an image acquired in advance; the image features are extracted from the acquired image, and based on those features the method detects whether the image contains a face region and locates the facial-organ regions of the face image to be recognized.
Optionally, when face detection and facial-organ localization need to be performed in real time, the face image to be recognized may be an image of the current detection environment acquired in real time by a camera device. The camera device may capture multiple images at fixed time intervals; the captured images are ordered by capture time and taken as face images to be recognized one after another in that order. For each image, the method detects whether it contains a face region and locates its facial-organ regions; if detection and localization fail, the next image in the order is processed; if detection and localization succeed, the remaining images in the order are no longer processed. Here, failure means that no face region is detected in the face image to be recognized, or the facial-organ regions cannot be located, or both; success means that a face region is detected and the facial-organ regions are successfully located.
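The frame-by-frame processing just described can be sketched as a short loop. This is only an illustrative sketch in Python: the camera object, its capture() method, the recognize_face function and the capture interval are assumptions introduced for the example and are not defined in the patent.

    import time

    FRAME_INTERVAL = 0.5  # assumed fixed time interval between captured frames, in seconds

    def process_stream(camera, recognize_face):
        # camera.capture() is assumed to return one image of the environment to be detected;
        # recognize_face(image) is assumed to return (has_face, organ_regions), i.e. the
        # first result and the second result described above.
        while True:
            image = camera.capture()  # next face image to be recognized, in capture order
            has_face, organ_regions = recognize_face(image)
            if has_face and organ_regions is not None:
                # detection and localization both succeeded: stop processing later frames
                return image, organ_regions
            # detection or localization failed: move on to the next frame in the order
            time.sleep(FRAME_INTERVAL)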
In an optional embodiment, before extracting the image features from the face image to be recognized, the method further includes: building a convolutional neural network, where the convolutional neural network includes a first fully connected layer and a second fully connected layer. Detecting whether the face image to be recognized contains a face region includes: using the first fully connected layer to detect whether it contains a face region, and obtaining the first result. Locating the facial-organ regions of the face image to be recognized includes: using the second fully connected layer to locate the facial-organ regions, and obtaining the second result.
Specifically, before the image features are extracted from the face image to be recognized, the convolutional neural network needs to be built first; the network then extracts the image features from the face image to be recognized and performs face detection and facial-organ localization. The convolutional neural network that is built includes a first fully connected layer and a second fully connected layer: the first fully connected layer is used to detect whether the face image to be recognized contains a face region, and, while that detection is carried out by the first fully connected layer, the second fully connected layer is used to locate the facial-organ regions of the face image to be recognized. In addition, because the operations of the convolutional neural network can be run as a computer program, the face detection and facial-organ localization of the face image to be recognized can be completed in a computer environment, and since the first fully connected layer used for face detection and the second fully connected layer used for facial-organ localization run at the same time, the speed at which the convolutional neural network performs face detection and facial-organ localization is increased.
As an optional embodiment, face features are recorded in the first fully connected layer, and using the first fully connected layer to detect whether the face image to be recognized contains a face region includes: detecting whether the image features of the face image to be recognized contain the face features; if they do, determining that the face image to be recognized contains the face region; and if they do not, determining that the face image to be recognized does not contain the face region.
Specifically, information about the face features is recorded in the first fully connected layer of the convolutional neural network. Using the first fully connected layer to detect whether the face image to be recognized contains a face region can be implemented by detecting whether the image features of the face image to be recognized contain the face features: if they do, the face image to be recognized is determined to contain a face region; if they do not, it is determined not to contain a face region. By checking the image features of the face image to be recognized against the face-feature information recorded in the first fully connected layer, whether the face image to be recognized contains a face region can be judged accurately.
Optionally, the face features recorded in the first fully connected layer may be compared with the image features of the face image to be recognized: if the image features of the face image to be recognized match the face features recorded in the first fully connected layer, the face image to be recognized is determined to contain a face region; if they are completely different from the recorded face features, the face image to be recognized is determined not to contain a face region.
In an optional embodiment, a classifier is recorded in the second fully connected layer, and facial-organ features are recorded in the classifier. Using the second fully connected layer to locate the facial-organ regions of the face image to be recognized includes: using the classifier to assign the image features of the face image to be recognized to the categories corresponding to the facial-organ features, where each category corresponds to one organ region among the facial-organ regions.
Specifically, a classifier is recorded in the second fully connected layer of the convolutional neural network, and facial-organ features are recorded in the classifier, where the facial-organ features recorded in the classifier are the organ features corresponding to each organ region among the facial-organ regions. For example, the facial organs can be divided, according to the eyebrows, eyes, ears, nose and mouth, into five organ regions: the eyebrow region, the eye region, the ear region, the nose region and the mouth region. The classifier records the features in the eyebrow region as eyebrow-organ features, the features in the eye region as eye-organ features, the features in the nose region as nose-organ features, and the features in the mouth region as mouth-organ features. According to the organ features corresponding to each organ region, the classifier classifies the image features of the face image to be recognized, assigning each image feature to the category of the corresponding organ features, where each category corresponds to one organ region among the facial-organ regions; the convolutional neural network can then mark the facial-organ regions of the face image to be recognized according to the image features in each category, completing the localization of its facial-organ regions.
Optionally, the classifier may also include a category for features that do not belong to the facial organs; if an image feature of the face image to be recognized does not belong to the facial-organ features, it is assigned to this category, and the image features in this category are not used for localization.
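As an illustration of the category assignment described above, the following Python sketch assigns each extracted image feature to one of the five organ categories or to a "not a facial organ" category by taking the highest classifier score. The category names, the score-array layout and the argmax rule are assumptions made for the example; the patent does not prescribe them.

    import numpy as np

    # Five organ categories plus one category for features that do not belong to the facial organs.
    CATEGORIES = ["eyebrow", "eye", "ear", "nose", "mouth", "not_facial_organ"]

    def assign_to_organ_categories(feature_scores):
        # feature_scores is assumed to be an (N, 6) array of classifier scores,
        # one row per extracted image feature.
        labels = np.argmax(feature_scores, axis=1)
        assigned = {name: [] for name in CATEGORIES[:-1]}
        for index, label in enumerate(labels):
            name = CATEGORIES[label]
            if name != "not_facial_organ":
                assigned[name].append(index)  # feature indices belonging to this organ region
        return assigned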
In an optional embodiment, building the convolutional neural network includes: setting up the first fully connected layer and the second fully connected layer; obtaining face features and facial-organ features; training the first fully connected layer with the face features to obtain a first loss value, and training the second fully connected layer with the facial-organ features to obtain a second loss value; adding the first loss value and the second loss value to obtain the loss value of the convolutional neural network; and training the parameters of the convolutional neural network with this loss value to obtain the trained convolutional neural network, where the parameters include the coefficients in the cost functions of the first fully connected layer and the second fully connected layer.
Specifically, the process of building the convolutional neural network may include: setting up the first fully connected layer and the second fully connected layer in the network; obtaining face features and facial-organ features; training the first fully connected layer with the face features and taking the training result as the first loss value; training the second fully connected layer with the facial-organ features at the same time and taking that training result as the second loss value; adding the first loss value and the second loss value obtained from training, the result of the addition being the loss value of the convolutional neural network; and training the parameters of the convolutional neural network with this loss value to obtain the trained convolutional neural network, where the trained parameters may be the coefficients in the cost functions of the first fully connected layer and the second fully connected layer.
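The training procedure described above can be sketched in PyTorch-style Python as follows. The model is assumed to return the outputs of the two fully connected branches; the choice of cross-entropy for the detection branch and a Euclidean (mean-squared-error) loss for the organ branch mirrors the cost functions named later in the detailed embodiment, while the tensor shapes and the helper name train_step are assumptions for the example.

    import torch
    import torch.nn as nn

    def train_step(model, optimizer, images, face_labels, organ_targets):
        # model(images) is assumed to return (detect_logits, organ_coords):
        #   detect_logits: (B, 2) face / non-face scores from the first fully connected branch
        #   organ_coords:  (B, 10) the five (x, y) organ centre points from the second branch
        detect_logits, organ_coords = model(images)

        # first loss value: face detection branch
        loss_detect = nn.functional.cross_entropy(detect_logits, face_labels)
        # second loss value: Euclidean distance to the ground-truth 10-dimensional landmark vector
        loss_organ = nn.functional.mse_loss(organ_coords, organ_targets)

        # superimpose the two loss values to get the loss value of the whole network,
        # then train the parameters of both branches and the shared layers with it
        loss = loss_detect + loss_organ
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()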
In an optional embodiment, the recognition result of the face image to be recognized includes: the first result indicating that the face image to be recognized contains a face region and the second result marking the facial-organ regions in the face image to be recognized; or the first result indicating that the face image to be recognized does not contain a face region and the second result not marking any facial-organ regions in the face image to be recognized.
It should be noted that a cost function, also known as a performance index, is a mathematical expression that specifies the objective to be achieved by the optimization; it is usually a scalar function of the state variables, the control variables or the operating variables.
Optionally, the trained convolutional neural network can be used directly for face detection and facial-organ localization; by deploying the network on a computer, the face detection and facial-organ localization of the face image to be recognized can be completed.
As an optional embodiment, the convolutional neural network may extract the image features of the face image to be recognized and then perform face detection and facial-organ localization on the extracted image features. A concrete scheme is as follows:
A convolutional neural network is built that contains multiple convolutional layers, multiple pooling layers and fully connected layers. The entire image to be recognized is input into the first convolutional layer, and the image features of the image are extracted.
A pooling layer and a rectified linear unit (ReLU) are connected after each convolutional layer. The pooling layer performs aggregate statistics on the image features produced by the preceding convolutional layer, mapping multiple values to a single value and thereby reducing the amount of data. The rectified linear unit is mainly used to make the mapped data of the convolutional layer as sparse as possible, which brings the processing closer to the human visual response to an image and yields a better result.
After the convolutional layers and other parts that extract the image features, and following the fully connected layer that processes them, two fully connected branches are set up: the first branch is the first fully connected layer and the second branch is the second fully connected layer. The first branch passes through two fully connected layers and then a Softmax classifier, which the network uses to detect whether the image is a face region. The second branch likewise passes through two fully connected layers and then outputs a 10-dimensional vector; a Euclidean loss layer (EuclideanLoss) is then used to compute the distance between this 10-dimensional vector and the 10-dimensional vector formed by the coordinates of the centre points of the five facial organs.
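In PyTorch terms, the branch structure described above can be sketched as a shared stack of convolution, pooling and ReLU layers followed by two fully connected branches, one ending in a two-way classifier (face or non-face, decided by a softmax over its logits) and one ending in a 10-dimensional landmark vector. The channel counts, layer depths and the 96 x 96 input size are assumptions chosen for the example; the patent does not fix them.

    import torch
    import torch.nn as nn

    class TwoBranchFaceNet(nn.Module):
        # Shared convolutional backbone with a detection branch and a facial-organ landmark branch.
        def __init__(self):
            super().__init__()
            # Each convolutional layer is followed by a pooling layer and a rectified linear unit.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            )
            feature_dim = 128 * 12 * 12  # assumes a 96 x 96 input image
            # First branch: two fully connected layers, then two logits for the softmax classifier.
            self.detect_branch = nn.Sequential(
                nn.Linear(feature_dim, 256), nn.ReLU(),
                nn.Linear(256, 2),
            )
            # Second branch: two fully connected layers, then the 10-dimensional vector holding
            # the (x, y) centre points of the five facial organs.
            self.landmark_branch = nn.Sequential(
                nn.Linear(feature_dim, 256), nn.ReLU(),
                nn.Linear(256, 10),
            )

        def forward(self, x):
            features = self.backbone(x).flatten(1)  # single feature extraction shared by both branches
            return self.detect_branch(features), self.landmark_branch(features)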
The distance between the two vectors obtained in the second branch is used as the cost function of the second branch when training the convolutional neural network, giving the second loss value of the second branch. The first branch uses a softmax-with-loss layer (SoftmaxWithLoss) as the cost function of face detection, giving the first loss value of face detection. The first loss value and the second loss value are then added to obtain the loss value of the whole convolutional neural network, and this loss value is used to train the network parameters, finally producing a trained convolutional neural network dedicated to the face images to be recognized.
In the whole convolutional neural network, the data produced by the last convolutional layer that extracts the image features is used by the fully connected layers of both branches, and during training the errors of the two branches are back-propagated through it at the same time. To prevent the two branches from interfering with each other during error back-propagation, the output of this last convolutional layer can be duplicated into two copies, denoted the first copy data copy1 and the second copy data copy2, where the first copy data is used by the first branch and the second copy data by the second branch. During error back-propagation, the back-propagated value of the first copy data is computed by the first branch and the back-propagated value of the second copy data by the second branch; once both values have been obtained, they are added to give the back-propagated value of the last convolutional layer, so that the training error of the whole convolutional neural network continues to propagate back through that last convolutional layer.
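The gradient handling just described can be made explicit with a small PyTorch sketch. In a modern autograd framework, feeding the same feature tensor to two branches already adds the two back-propagated gradients at that tensor, which is the behaviour the copy1 / copy2 arrangement is intended to produce; the snippet below creates the two copies explicitly and uses placeholder branch losses, so the variable names and the toy losses are assumptions for the example only.

    import torch

    # Stand-in for the output of the last shared convolutional layer.
    last_conv_out = torch.randn(4, 128, 12, 12, requires_grad=True)

    # Two copies of the same data: copy1 is used only by the detection branch,
    # copy2 only by the landmark branch.
    copy1 = last_conv_out.clone()
    copy2 = last_conv_out.clone()

    loss_detect = copy1.mean()           # placeholder for the detection-branch loss
    loss_landmark = (copy2 ** 2).mean()  # placeholder for the landmark-branch loss

    # Back-propagating the summed loss computes each branch's gradient through its own copy;
    # the two gradients are added at last_conv_out, and the summed gradient continues back
    # through the shared convolutional layers, as described above.
    (loss_detect + loss_landmark).backward()
    print(last_conv_out.grad.shape)      # torch.Size([4, 128, 12, 12])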
After the convolutional neural network has been trained, a picture goes through a single forward computation of the network, and the two branches then respectively judge whether the picture contains a face region and output the vector formed by the coordinates of the facial-organ points in that region, achieving the purpose of performing face detection and facial-organ point localization at the same time.
Because the whole computation uses only one convolutional neural network, the repeated extraction and repeated computation of image-region features that occur when several convolutional neural networks are used are avoided, which greatly reduces the running time of the convolutional neural network and improves the computation speed.
Fig. 2 is a schematic diagram of a face recognition device according to an embodiment of the present invention. As shown in Fig. 2, the device includes: an extraction unit 21, configured to extract image features from a face image to be recognized; a detection unit 23, configured to detect, based on the image features of the face image to be recognized, whether the face image to be recognized contains a face region and obtain a first result, where the first result indicates whether the face image to be recognized contains a face region; a localization unit 25, configured to locate, based on the same image features and while the detection is performed, the facial-organ regions of the face image to be recognized and obtain a second result, where the second result marks the facial-organ regions in the face image to be recognized; and an integration unit 27, configured to combine the first result and the second result to obtain the recognition result of the face image to be recognized.
In the above embodiment, the extraction unit extracts the image features of the face image to be recognized once, and the detection unit and the localization unit then respectively perform face detection and facial-organ localization on those features: the detection unit detects whether the face image to be recognized contains a face region and obtains the first result, the localization unit locates the facial-organ regions and obtains the second result, and the integration unit combines the two results to obtain the recognition result of the face image to be recognized. Because a single feature extraction serves both tasks, repeated feature extraction on the face image to be recognized is avoided, face detection and facial-organ localization are sped up, and the technical problem that face recognition programs based on convolutional neural networks run slowly is solved.
As an optional embodiment, the device further includes: a creation module, configured to build a convolutional neural network that includes a first fully connected layer and a second fully connected layer. The detection unit includes a detection module, configured to use the first fully connected layer to detect whether the face image to be recognized contains a face region and obtain the first result. The localization unit includes a localization module, configured to use the second fully connected layer to locate the facial-organ regions of the face image to be recognized and obtain the second result.
In an optional embodiment, face features are recorded in the first fully connected layer built by the creation module, and the detection module includes: a detection sub-module, configured to detect whether the image features of the face image to be recognized contain the face features; a first determination module, configured to determine that the face image to be recognized contains the face region when they do; and a second determination module, configured to determine that the face image to be recognized does not contain the face region when they do not.
In an optional embodiment, a classifier is recorded in the second fully connected layer built by the creation module, facial-organ features are recorded in the classifier, and the localization module includes an assignment module, configured to use the classifier to assign the image features of the face image to be recognized to the categories corresponding to the facial-organ features, where each category corresponds to one organ region among the facial-organ regions.
As an optional embodiment, the creation module includes: a setup module, configured to set up the first fully connected layer and the second fully connected layer; an acquisition module, configured to obtain face features and facial-organ features; a first training module, configured to train the first fully connected layer with the face features to obtain a first loss value and to train the second fully connected layer with the facial-organ features to obtain a second loss value; a superposition module, configured to add the first loss value and the second loss value to obtain the loss value of the convolutional neural network; and a second training module, configured to train the parameters of the convolutional neural network with the loss value to obtain the trained convolutional neural network, where the parameters include the coefficients in the cost functions of the first and second fully connected layers.
Fig. 3 is a schematic diagram of a face recognition robot according to an embodiment of the present invention. As shown in Fig. 3, the robot includes: an image acquisition device 31, configured to acquire a face image to be recognized; and a face recognition device 32.
In the above embodiment, the face recognition robot acquires the face image to be recognized through the image acquisition device; the extraction unit extracts its image features once, the detection unit and the localization unit then respectively perform face detection and facial-organ localization on those features, and the integration unit combines the first result and the second result to obtain the recognition result of the face image to be recognized. Because a single feature extraction serves both tasks, repeated feature extraction on the face image to be recognized is avoided, face detection and facial-organ localization are sped up, and the technical problem that face recognition programs based on convolutional neural networks run slowly is solved.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units may be a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A face recognition method, characterized by comprising:
extracting image features from a face image to be recognized;
based on the image features of the face image to be recognized, detecting whether the face image to be recognized contains a face region, and obtaining a first result, wherein the first result indicates whether the face image to be recognized contains a face region;
based on the image features of the face image to be recognized, and while detecting whether the face image to be recognized contains a face region, locating the facial-organ regions of the face image to be recognized, and obtaining a second result, wherein the second result marks the facial-organ regions in the face image to be recognized;
combining the first result and the second result to obtain a recognition result of the face image to be recognized.
2. The method according to claim 1, characterized in that:
before extracting the image features from the face image to be recognized, the method further comprises: building a convolutional neural network, wherein the convolutional neural network comprises a first fully connected layer and a second fully connected layer;
detecting whether the face image to be recognized contains a face region comprises: using the first fully connected layer to detect whether the face image to be recognized contains a face region, and obtaining the first result;
locating the facial-organ regions of the face image to be recognized comprises: using the second fully connected layer to locate the facial-organ regions of the face image to be recognized, and obtaining the second result.
3. The method according to claim 2, characterized in that face features are recorded in the first fully connected layer, and using the first fully connected layer to detect whether the face image to be recognized contains a face region comprises:
detecting whether the image features of the face image to be recognized contain the face features;
if the image features of the face image to be recognized contain the face features, determining that the face image to be recognized contains the face region;
if the image features of the face image to be recognized do not contain the face features, determining that the face image to be recognized does not contain the face region.
4. The method according to claim 2, characterized in that a classifier is recorded in the second fully connected layer, facial-organ features are recorded in the classifier, and using the second fully connected layer to locate the facial-organ regions of the face image to be recognized comprises:
using the classifier to assign the image features of the face image to be recognized to categories corresponding to the facial-organ features respectively, wherein each category corresponds to one organ region among the facial-organ regions.
5. The method according to claim 2, characterized in that building the convolutional neural network comprises:
setting up the first fully connected layer and the second fully connected layer;
obtaining face features and facial-organ features;
training the first fully connected layer with the face features to obtain a first loss value, and training the second fully connected layer with the facial-organ features to obtain a second loss value;
adding the first loss value and the second loss value to obtain a loss value of the convolutional neural network;
training parameters of the convolutional neural network with the loss value to obtain the trained convolutional neural network, wherein the parameters comprise coefficients in cost functions of the first fully connected layer and the second fully connected layer.
6. A face recognition device, characterized by comprising:
an extraction unit, configured to extract image features from a face image to be recognized;
a detection unit, configured to detect, based on the image features of the face image to be recognized, whether the face image to be recognized contains a face region, and obtain a first result, wherein the first result indicates whether the face image to be recognized contains a face region;
a localization unit, configured to locate, based on the image features of the face image to be recognized and while detecting whether the face image to be recognized contains a face region, the facial-organ regions of the face image to be recognized, and obtain a second result, wherein the second result marks the facial-organ regions in the face image to be recognized;
an integration unit, configured to combine the first result and the second result to obtain a recognition result of the face image to be recognized.
7. The device according to claim 6, characterized in that:
the device further comprises: a creation module, configured to build a convolutional neural network, wherein the convolutional neural network comprises a first fully connected layer and a second fully connected layer;
the detection unit comprises: a detection module, configured to use the first fully connected layer to detect whether the face image to be recognized contains a face region and obtain the first result;
the localization unit comprises: a localization module, configured to use the second fully connected layer to locate the facial-organ regions of the face image to be recognized and obtain the second result.
8. The device according to claim 7, characterized in that face features are recorded in the first fully connected layer built by the creation module, and the detection module comprises:
a detection sub-module, configured to detect whether the image features of the face image to be recognized contain the face features;
a first determination module, configured to determine, when the image features of the face image to be recognized contain the face features, that the face image to be recognized contains the face region;
a second determination module, configured to determine, when the image features of the face image to be recognized do not contain the face features, that the face image to be recognized does not contain the face region.
9. The device according to claim 7, characterized in that a classifier is recorded in the second fully connected layer built by the creation module, facial-organ features are recorded in the classifier, and the localization module comprises:
an assignment module, configured to use the classifier to assign the image features of the face image to be recognized to categories corresponding to the facial-organ features respectively, wherein each category corresponds to one organ region among the facial-organ regions.
10. The device according to claim 7, characterized in that the creation module comprises:
a setup module, configured to set up the first fully connected layer and the second fully connected layer;
an acquisition module, configured to obtain face features and facial-organ features;
a first training module, configured to train the first fully connected layer with the face features to obtain a first loss value, and to train the second fully connected layer with the facial-organ features to obtain a second loss value;
a superposition module, configured to add the first loss value and the second loss value to obtain a loss value of the convolutional neural network;
a second training module, configured to train parameters of the convolutional neural network with the loss value to obtain the trained convolutional neural network, wherein the parameters comprise coefficients in cost functions of the first fully connected layer and the second fully connected layer.
11. A robot, characterized by comprising:
an image acquisition device, configured to acquire a face image to be recognized; and the face recognition device according to any one of claims 6 to 10.
CN201611263999.4A 2016-12-30 2016-12-30 Face identification method, device and robot Pending CN108268822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611263999.4A CN108268822A (en) 2016-12-30 2016-12-30 Face identification method, device and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611263999.4A CN108268822A (en) 2016-12-30 2016-12-30 Face identification method, device and robot

Publications (1)

Publication Number Publication Date
CN108268822A true CN108268822A (en) 2018-07-10

Family

ID=62755320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611263999.4A Pending CN108268822A (en) 2016-12-30 2016-12-30 Face identification method, device and robot

Country Status (1)

Country Link
CN (1) CN108268822A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110186167A (en) * 2019-05-31 2019-08-30 广东美的制冷设备有限公司 Control method, device, air conditioner and the storage medium of air conditioner
CN110987189A (en) * 2019-11-21 2020-04-10 北京都是科技有限公司 Method, system and device for detecting temperature of target object
CN111898622A (en) * 2019-05-05 2020-11-06 阿里巴巴集团控股有限公司 Information processing method, information display method, model training method, information display system, model training system and equipment
CN113239885A (en) * 2021-06-04 2021-08-10 新大陆数字技术股份有限公司 Face detection and recognition method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1967562A (en) * 2005-11-15 2007-05-23 中华电信股份有限公司 Facial identification method based on human facial features identification
CN105354565A (en) * 2015-12-23 2016-02-24 北京市商汤科技开发有限公司 Full convolution network based facial feature positioning and distinguishing method and system
CN105469041A (en) * 2015-11-19 2016-04-06 上海交通大学 Facial point detection system based on multi-task regularization and layer-by-layer supervision neural networ
US20160140436A1 (en) * 2014-11-15 2016-05-19 Beijing Kuangshi Technology Co., Ltd. Face Detection Using Machine Learning
CN105912990A (en) * 2016-04-05 2016-08-31 深圳先进技术研究院 Face detection method and face detection device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1967562A (en) * 2005-11-15 2007-05-23 中华电信股份有限公司 Facial identification method based on human facial features identification
US20160140436A1 (en) * 2014-11-15 2016-05-19 Beijing Kuangshi Technology Co., Ltd. Face Detection Using Machine Learning
CN105469041A (en) * 2015-11-19 2016-04-06 上海交通大学 Facial point detection system based on multi-task regularization and layer-by-layer supervision neural networ
CN105354565A (en) * 2015-12-23 2016-02-24 北京市商汤科技开发有限公司 Full convolution network based facial feature positioning and distinguishing method and system
CN105912990A (en) * 2016-04-05 2016-08-31 深圳先进技术研究院 Face detection method and face detection device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yang S., Luo P., Loy C. C.: "From Facial Parts Responses to Face Detection: A Deep...", IEEE International Conference on Computer Vision *
Shao Weiyuan; Guo Yuefei: "Application of multi-task learning and convolutional neural networks in face recognition" (多任务学习及卷积神经网络在人脸识别中的应用), Computer Engineering and Applications (计算机工程与应用) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898622A (en) * 2019-05-05 2020-11-06 阿里巴巴集团控股有限公司 Information processing method, information display method, model training method, information display system, model training system and equipment
CN111898622B (en) * 2019-05-05 2022-07-15 阿里巴巴集团控股有限公司 Information processing method, information display method, model training method, information display system, model training system and equipment
CN110186167A (en) * 2019-05-31 2019-08-30 广东美的制冷设备有限公司 Control method, device, air conditioner and the storage medium of air conditioner
CN110987189A (en) * 2019-11-21 2020-04-10 北京都是科技有限公司 Method, system and device for detecting temperature of target object
CN110987189B (en) * 2019-11-21 2021-11-02 北京都是科技有限公司 Method, system and device for detecting temperature of target object
CN113239885A (en) * 2021-06-04 2021-08-10 新大陆数字技术股份有限公司 Face detection and recognition method and system

Similar Documents

Publication Publication Date Title
CN107657249A (en) Method, apparatus, storage medium and the processor that Analysis On Multi-scale Features pedestrian identifies again
CN105976400B (en) Method for tracking target and device based on neural network model
CN111340008B (en) Method and system for generation of counterpatch, training of detection model and defense of counterpatch
CN108268822A (en) Face identification method, device and robot
CN106778684A (en) deep neural network training method and face identification method
CN106372662A (en) Helmet wearing detection method and device, camera, and server
CN110738101A (en) Behavior recognition method and device and computer readable storage medium
CN107862282A (en) A kind of finger vena identification and safety certifying method and its terminal and system
CN110059546A (en) Vivo identification method, device, terminal and readable medium based on spectrum analysis
CN109670441A (en) A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium
CN109359666A (en) A kind of model recognizing method and processing terminal based on multiple features fusion neural network
CN107194361A (en) Two-dimentional pose detection method and device
CN110287889A (en) A kind of method and device of identification
CN108256404A (en) Pedestrian detection method and device
CN109902660A (en) A kind of expression recognition method and device
CN107578034A (en) information generating method and device
CN107967442A (en) A kind of finger vein identification method and system based on unsupervised learning and deep layer network
CN108009481A (en) A kind of training method and device of CNN models, face identification method and device
CN110619316A (en) Human body key point detection method and device and electronic equipment
CN115880558B (en) Farming behavior detection method and device, electronic equipment and storage medium
CN111914665A (en) Face shielding detection method, device, equipment and storage medium
CN108614987A (en) The method, apparatus and robot of data processing
CN109117746A (en) Hand detection method and machine readable storage medium
CN110069983A (en) Vivo identification method, device, terminal and readable medium based on display medium
CN108268823A (en) Target recognition methods and device again

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20180822

Address after: 311100 1101, room 14, 1008, Longxiang street, Cang Qian street, Yuhang District, Hangzhou, Zhejiang.

Applicant after: Hangzhou Institute of artificial intelligence

Address before: 518000 Guangdong, Shenzhen, Nanshan District, Nanhai Road, West Guangxi Temple Road North Sunshine Huayi Building 1 15D-02F

Applicant before: Shenzhen Guangqi Hezhong Technology Co., Ltd.

Applicant before: Shenzhen Kuang-Chi Innovation Technology Co., Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180710