CN108446658A - Method and apparatus for recognizing a facial image - Google Patents

Method and apparatus for recognizing a facial image

Info

Publication number
CN108446658A
CN108446658A (application CN201810264669.XA)
Authority
CN
China
Prior art keywords
key region
facial image
rectangular box
predetermined size
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810264669.XA
Other languages
Chinese (zh)
Inventor
张刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810264669.XA
Publication of CN108446658A
Legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/30 - Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a method and apparatus for recognizing a facial image. The method includes: obtaining a facial image in which key regions are marked with rectangular boxes of predetermined size; inputting the facial image into the convolutional layers of a convolutional neural network to obtain a feature vector for each key region extracted by the convolutional layers; concatenating the feature vectors of the key regions in order to obtain a facial feature vector; obtaining position information for each key region and distance information between every two key regions; and inputting the position information, the distance information and the facial feature vector into the fully connected layer of the neural network to obtain an output facial image feature. The method can improve the accuracy of the output facial image feature.

Description

Method and apparatus for recognizing a facial image
Technical field
This application relates to the field of computer technology, in particular to the technical field of computer networks, and more particularly to a method and apparatus for recognizing a facial image.
Background technology
Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. A camera or webcam captures an image or video stream containing a face, the face is automatically detected and tracked in the image, and a series of related techniques such as image preprocessing, image feature extraction, and matching and recognition are then applied to the detected face. This is also commonly referred to as portrait recognition or facial recognition.
In current face recognition methods, extracting image features usually requires locating multiple key points (for example, 88 or 68) on the face, extracting facial features by inputting these key points into a machine learning model, and finally recognizing the facial image based on the extracted facial features.
Summary of the invention
Embodiments of the present application provide a method and apparatus for recognizing a facial image.
In a first aspect, an embodiment of the present application provides a method for recognizing a facial image, including: obtaining a facial image in which key regions are marked with rectangular boxes of predetermined size; inputting the facial image into the convolutional layers of a convolutional neural network to obtain a feature vector for each key region extracted by the convolutional layers; concatenating the feature vectors of the key regions in order to obtain a facial feature vector; obtaining position information for each key region and distance information between every two key regions; and inputting the position information, the distance information and the facial feature vector into the fully connected layer of the neural network to obtain an output facial image feature.
In some embodiments, the key regions marked with rectangular boxes of predetermined size include at least two of the following: a left eye marked with a rectangular box of predetermined size centered on the center of the pupil; a right eye marked with a rectangular box of predetermined size centered on the center of the pupil; a nose marked with a rectangular box of predetermined size centered on the nose tip; a mouth marked with a rectangular box of predetermined size centered on the left mouth corner; and a mouth marked with a rectangular box of predetermined size centered on the right mouth corner.
In some embodiments, the position information of each key region includes: coordinate information of the center of the rectangular box used to mark each key region.
In some embodiments, the distance information between every two key regions includes: distance information between the centers of every two rectangular boxes.
In some embodiments, the method further includes: determining a recognition result for the facial image based on the similarity between the output facial image feature and facial image features in a preset database.
In a second aspect, an embodiment of the present application provides an apparatus for recognizing a facial image, including: an image acquisition unit, configured to obtain a facial image in which key regions are marked with rectangular boxes of predetermined size; a vector extraction unit, configured to input the facial image into the convolutional layers of a convolutional neural network to obtain a feature vector for each key region extracted by the convolutional layers; a vector concatenation unit, configured to concatenate the feature vectors of the key regions in order to obtain a facial feature vector; an information acquisition unit, configured to obtain position information for each key region and distance information between every two key regions; and a feature output unit, configured to input the position information, the distance information and the facial feature vector into the fully connected layer of the neural network to obtain an output facial image feature.
In some embodiments, in the image acquisition unit, the key regions marked with rectangular boxes of predetermined size include at least two of the following: a left eye marked with a rectangular box of predetermined size centered on the center of the pupil; a right eye marked with a rectangular box of predetermined size centered on the center of the pupil; a nose marked with a rectangular box of predetermined size centered on the nose tip; a mouth marked with a rectangular box of predetermined size centered on the left mouth corner; and a mouth marked with a rectangular box of predetermined size centered on the right mouth corner.
In some embodiments, the position information of each key region obtained by the information acquisition unit includes: coordinate information of the center of the rectangular box used to mark each key region.
In some embodiments, the distance information between every two key regions obtained by the information acquisition unit includes: distance information between the centers of every two rectangular boxes.
In some embodiments, the apparatus further includes: a recognition result determination unit, configured to determine a recognition result for the facial image based on the similarity between the output facial image feature and facial image features in a preset database.
In a third aspect, an embodiment of the present application provides a device, including: one or more processors; and a storage apparatus for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement any of the above methods for recognizing a facial image.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored; when the program is executed by a processor, any of the above methods for recognizing a facial image is implemented.
The method and apparatus for recognizing a facial image provided in the embodiments of the present application first obtain a facial image in which key regions are marked with rectangular boxes of predetermined size; then input the facial image into the convolutional layers of a convolutional neural network to obtain a feature vector for each key region extracted by the convolutional layers; then concatenate the feature vectors of the key regions in order to obtain a facial feature vector; then obtain position information for each key region and distance information between every two key regions; and finally input the position information, the distance information and the facial feature vector into the fully connected layer of the neural network to obtain an output facial image feature. In this process, the feature vectors of the key regions can be concatenated in order to retain the ordering relationship among the key regions, and the position information of each key region and the distance information between every two key regions serve as constraints on the output facial image feature, which improves the accuracy of the output facial image feature.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 shows an exemplary system architecture to which embodiments of the present application can be applied;
Fig. 2 is a schematic flowchart of one embodiment of the method for recognizing a facial image according to an embodiment of the present application;
Fig. 3 is an exemplary application scenario of an embodiment of the method for recognizing a facial image according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of another embodiment of the method for recognizing a facial image according to an embodiment of the present application;
Fig. 5 is an exemplary structural diagram of one embodiment of the apparatus for recognizing a facial image according to an embodiment of the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing a terminal device or server of an embodiment of the present application.
Detailed description
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention and do not limit the invention. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.
It should be noted that, unless they conflict, the embodiments of the present application and the features in the embodiments can be combined with one another. The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Referring to Fig. 1, Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for recognizing a facial image or the apparatus for recognizing a facial image of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104, and servers 105 and 106. The network 104 provides a medium for communication links between the terminal devices 101, 102, 103 and the servers 105, 106. The network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
A user 110 can use the terminal devices 101, 102, 103 to interact with the servers 105, 106 through the network 104 to receive or send messages and the like. Various communication client applications can be installed on the terminal devices 101, 102, 103, such as photography applications, search engine applications, shopping applications, instant messaging tools, email clients, social platform software and video playback applications.
The terminal devices 101, 102, 103 can be hardware or software. When they are hardware, they can be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers and desktop computers. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above. They can be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. This is not specifically limited here.
The servers 105, 106 can be servers that provide various services. For example, the servers 105, 106 can be background servers that provide support for the terminal devices 101, 102, 103. A background server can analyze, store or compute the data submitted by the terminals and push the data processing results obtained by the machine learning task to the terminal devices.
In general, the method for recognizing a facial image provided in the embodiments of the present application is executed by the servers 105, 106, and accordingly, the apparatus for recognizing a facial image is generally arranged in the servers 105, 106.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Depending on implementation needs, there can be any number of terminal devices, networks and servers.
With further reference to Fig. 2, Fig. 2 shows a schematic flowchart of one embodiment of the method for recognizing a facial image according to an embodiment of the present application.
As shown in Fig. 2, the method 200 for recognizing a facial image includes:
In step 210, a facial image in which key regions are marked with rectangular boxes of predetermined size is obtained.
In this embodiment, the executing entity of the method for recognizing a facial image (for example, the servers 105, 106 shown in Fig. 1) can receive an originally acquired image submitted from a terminal device (for example, the terminal devices 101, 102, 103 shown in Fig. 1), such as still images or dynamic images taken at different positions or with different expressions.
Because it is limited by various conditions and subject to random interference, the originally acquired image often cannot be used directly, and image preprocessing such as gray correction and noise filtering must be performed on it in the early stage of image processing. For facial images, the preprocessing mainly includes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening of the facial image.
After the originally acquired image is preprocessed, a detection algorithm may be used to detect the key regions of the facial image, such as the eyebrows, eyes, nose, mouth and ears. A positioning algorithm may then be used to mark each key region with a rectangular box of predetermined size centered on a key point of the key region (for example, the eyebrow head, the eyebrow tail, the inner eye corner, the outer eye corner, the pupil center, the nose tip, the left mouth corner, the right mouth corner, the left ear apex or the right ear apex). It should be understood that in the facial image in which these key regions are marked with rectangular boxes of predetermined size, the rectangular boxes may overlap.
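By way of illustration only, the following Python sketch shows one way fixed-size rectangular boxes could be cut out around detected key points; the box size, coordinates and helper names are assumptions for the example and are not part of the disclosed embodiment.

```python
import numpy as np

def crop_key_regions(image, key_points, box_size=32):
    """Crop a square box of predetermined size centered on each key point."""
    half = box_size // 2
    h, w = image.shape[:2]
    regions = {}
    for name, (x, y) in key_points.items():
        # Clamp the box so it stays inside the image; boxes may overlap.
        x0, y0 = max(0, x - half), max(0, y - half)
        x1, y1 = min(w, x0 + box_size), min(h, y0 + box_size)
        regions[name] = image[y0:y1, x0:x1]
    return regions

# Example: five key points (left/right pupil, nose tip, left/right mouth corner).
face = np.zeros((128, 128, 3), dtype=np.uint8)   # stand-in for a preprocessed face image
points = {"left_eye": (44, 50), "right_eye": (84, 50), "nose": (64, 72),
          "mouth_left": (50, 95), "mouth_right": (78, 95)}
patches = crop_key_regions(face, points)
```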
In step 220, the facial image is input into the convolutional layers of a convolutional neural network, and a feature vector for each key region extracted by the convolutional layers is obtained.
In this embodiment, the facial image in which the key regions have been marked with rectangular boxes of predetermined size can be input into the convolutional layers of a pre-trained neural network, and the feature vector for each key region extracted by the convolutional layers can be obtained.
The convolutional layers here use convolution kernels of size N × N (where N is a natural number greater than 1, for example 2 or 3). Suppose each convolutional layer performs a convolution operation with a stride of 1, meaning the kernel moves one pixel to the right at each step (and, when it reaches the boundary, returns to the left end and moves down one unit). The weights of a convolution kernel are obtained through learning and do not change during the convolution process. Each unit of the kernel holds one weight, so a kernel has N² weights. As the kernel moves, the pixels of the image are multiplied by the corresponding weights of the kernel, and all the products are summed to give one output. Using multiple convolutional layers here yields deeper feature vectors for each key region, that is, features that are more discriminative for each key region.
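As a minimal sketch only (the layer sizes, patch size and module names below are assumptions, not the network disclosed in this embodiment), stacked 3 × 3, stride-1 convolutions could turn each marked key region into a feature vector as follows:

```python
import torch
import torch.nn as nn

class RegionFeatureExtractor(nn.Module):
    """Stacked 3x3, stride-1 convolutions mapping one key-region patch to a feature vector."""
    def __init__(self, feature_dim=64):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # pool each channel to a single value
        )
        self.proj = nn.Linear(32, feature_dim)

    def forward(self, patch):                     # patch: (B, 3, 32, 32)
        x = self.convs(patch).flatten(1)          # (B, 32)
        return self.proj(x)                       # (B, feature_dim)

extractor = RegionFeatureExtractor()
patch = torch.randn(1, 3, 32, 32)                # one cropped key-region box
region_vector = extractor(patch)                 # feature vector for this key region
```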
In step 230, the feature vectors of the key regions are concatenated in order to obtain a facial feature vector.
In this embodiment, the facial feature vector obtained by concatenating the feature vectors of the key regions one after another in order retains the order information of the key regions.
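A brief sketch of this step (the names and dimensions are assumed for illustration): the per-region vectors are concatenated in a fixed order, so the ordering information is preserved.

```python
import torch

region_order = ["left_eye", "right_eye", "nose", "mouth_left", "mouth_right"]
# Per-region feature vectors (random stand-ins here), keyed by region name.
region_vectors = {name: torch.randn(1, 64) for name in region_order}
# Concatenating in a fixed order retains the ordering of the key regions.
facial_feature = torch.cat([region_vectors[name] for name in region_order], dim=1)  # (1, 320)
```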
In step 240, position information for each key region and distance information between every two key regions are obtained.
In this embodiment, the key point information of each key region can be used as the position information of that key region; for every two key regions, the distance information between the key points of the two key regions is computed and taken as the distance information between those two key regions. For example, if the inner eye corners of the two eyes are used as the key points of the eyes, then the position information of each inner eye corner is the position information of the corresponding eye, and the distance information between the two eyes is the distance information between the two inner eye corners.
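An illustrative sketch of collecting key-point coordinates as position information and computing the pairwise distances (the coordinates are arbitrary example values):

```python
import torch
from itertools import combinations

# Position information: (x, y) key-point coordinates of five key regions
# (left pupil, right pupil, nose tip, left mouth corner, right mouth corner).
positions = torch.tensor([[44., 50.], [84., 50.], [64., 72.], [50., 95.], [78., 95.]])

# Distance information: the distance between every two key points (Euclidean here).
pair_distances = torch.tensor([
    torch.dist(positions[i], positions[j]).item()
    for i, j in combinations(range(len(positions)), 2)
])  # 10 pairwise distances for 5 key points
```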
In step 250, the position information, the distance information and the facial feature vector are input into the fully connected layer of the convolutional neural network, and the output facial image feature is obtained.
In this embodiment, the fully connected (FC) layers typically appear in the last few layers of a convolutional neural network and perform a weighted sum of the features input from the preceding layers, thereby acting as a "classifier" in the overall convolutional neural network. If operations such as convolution map the raw data into a hidden-layer feature space, then inputting the two classes of constraints (position information and distance information) together with the facial feature vector into the fully connected layer maps the "distributed feature representation" learned by the convolutional layers, along with the position and distance constraints, into the sample label space.
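A minimal sketch of this step under assumed dimensions (not the actual network of the embodiment): the position and distance constraints are appended to the facial feature vector and mapped by a fully connected layer to the output facial image feature.

```python
import torch
import torch.nn as nn

facial_feature = torch.randn(1, 320)   # concatenated per-region feature vectors
positions = torch.randn(1, 10)         # (x, y) coordinates of 5 key regions, flattened
pair_distances = torch.randn(1, 10)    # distances between every two key regions

fc_input = torch.cat([facial_feature, positions, pair_distances], dim=1)
fc = nn.Linear(fc_input.shape[1], 128) # fully connected layer
facial_image_feature = fc(fc_input)    # (1, 128) output facial image feature
```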
Optionally, in step 260, a recognition result for the facial image is determined based on the similarity between the output facial image feature and facial image features in a preset database.
In this embodiment, the facial image feature output by the fully connected layer in step 250 can be compared with preset facial image features in a preset database. If the similarity between the output facial image feature and a preset facial image feature is higher than a predetermined threshold, the facial image corresponding to the output facial image feature and the facial image corresponding to that preset facial image feature are the same facial image.
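For illustration, one common choice of similarity is cosine similarity compared against a threshold; the measure, threshold and feature dimension below are assumptions, since the embodiment does not fix them.

```python
import torch
import torch.nn.functional as F

def match(query_feature, database_features, threshold=0.8):
    """Return the index of the most similar database entry, or None if below the threshold."""
    sims = F.cosine_similarity(query_feature, database_features)  # one score per entry
    best = torch.argmax(sims)
    return best.item() if sims[best] > threshold else None

query = torch.randn(1, 128)        # output facial image feature
database = torch.randn(100, 128)   # preset database of facial image features
result = match(query, database)
```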
The method for recognizing a facial image provided in the above embodiments of the present application inputs the position information, the distance information and the facial feature vector that retains the order information of the key regions into the fully connected layer of the convolutional neural network, which can improve the accuracy of the output facial image feature and thus the accuracy of recognizing the facial image.
Further, referring to Fig. 3, Fig. 3 shows an exemplary application scenario of the method for recognizing a facial image according to an embodiment of the present application.
As shown in Fig. 3, the method 310 for recognizing a facial image runs in an electronic device 320 and includes:
First, a facial image 301 in which key regions are marked with rectangular boxes of predetermined size is obtained.
Then, the facial image 301 is input into the convolutional layers 302 of a convolutional neural network, and the feature vector 303 for each key region extracted by the convolutional layers is obtained.
Then, the feature vectors 303 of the key regions are concatenated in order to obtain a facial feature vector 304.
Then, position information 305 for each key region and distance information 306 between every two key regions are obtained.
Finally, the position information 305, the distance information 306 and the facial feature vector 304 are input into the fully connected layer 307 of the convolutional neural network, and the output facial image feature 308 is obtained.
It should be understood that the method for recognizing a facial image shown in Fig. 3 above is only an exemplary application scenario of the method for recognizing a facial image and does not limit the present application. For example, the above step of obtaining a facial image in which key regions are marked with rectangular boxes of predetermined size may be completed using multiple, more refined steps, which will not be repeated here. It should be understood that the method for recognizing a facial image provided in the above application scenario of the present application can improve the accuracy of the output facial image feature and thus the accuracy of recognizing the facial image.
Further, referring to Fig. 4, Fig. 4 shows a schematic flowchart of another embodiment of the method for recognizing a facial image according to an embodiment of the present application.
As shown in Fig. 4, the method 400 for recognizing a facial image includes:
In step 410, a facial image in which the following key regions are marked with rectangular boxes of predetermined size is obtained: a left eye marked with a rectangular box of predetermined size centered on the center of the pupil; a right eye marked with a rectangular box of predetermined size centered on the center of the pupil; a nose marked with a rectangular box of predetermined size centered on the nose tip; a mouth marked with a rectangular box of predetermined size centered on the left mouth corner; and a mouth marked with a rectangular box of predetermined size centered on the right mouth corner.
In this embodiment, the left pupil, the right pupil, the nose tip, the left mouth corner and the right mouth corner are used as key points, and the key regions centered on these key points are each marked with a rectangular box of predetermined size, so that a facial image in which the key regions are marked with rectangular boxes of predetermined size is obtained.
In step 420, the facial image is input into the convolutional layers of a convolutional neural network, and the feature vector for each key region extracted by the convolutional layers is obtained.
In this embodiment, the facial image in which the key regions have been marked with rectangular boxes of predetermined size can be input into the convolutional layers of a pre-trained neural network, and the feature vector for each key region extracted by the convolutional layers can be obtained.
The convolutional layers here use convolution kernels of size N × N (where N is a natural number greater than 1, for example 2 or 3). Suppose each convolutional layer performs a convolution operation with a stride of 1, meaning the kernel moves one pixel to the right at each step (and, when it reaches the boundary, returns to the left end and moves down one unit). The weights of a convolution kernel are obtained through learning and do not change during the convolution process. Each unit of the kernel holds one weight, so a kernel has N² weights. As the kernel moves, the pixels of the image are multiplied by the corresponding weights of the kernel, and all the products are summed to give one output. Using multiple convolutional layers here yields deeper feature vectors for each key region, that is, features that are more discriminative for each key region.
In step 430, the feature vectors of the key regions are concatenated in order to obtain a facial feature vector.
In this embodiment, concatenating the feature vectors of the key regions one after another yields a facial feature vector that retains the order information of the key regions.
In step 440, coordinate information of the center of the rectangular box used to mark each key region is obtained.
In this embodiment, because the left pupil, the right pupil, the nose tip, the left mouth corner and the right mouth corner are used as the key points and the key regions centered on these key points are each marked with a rectangular box of predetermined size, the center points of the rectangular boxes marking the key regions are exactly these five key points (the left pupil, the right pupil, the nose tip, the left mouth corner and the right mouth corner), and the center coordinate information of the rectangular boxes used to mark the key regions is simply the coordinate information of these five key points in the image.
In step 450, distance information between the centers of every two rectangular boxes is obtained.
In this embodiment, since the centers of the rectangular boxes are the five key points (the left pupil, the right pupil, the nose tip, the left mouth corner and the right mouth corner), obtaining the distance information between the centers of every two rectangular boxes amounts to obtaining the distance information between every two of these five key points. The distance information here may be determined using existing distance algorithms, for example an algorithm for computing the Euclidean distance, the Manhattan distance or the Minkowski distance.
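A small illustrative sketch of the three distance measures mentioned above, applied to the centers of two rectangular boxes (example coordinates; helper names are assumed):

```python
def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def minkowski(p, q, r=3):
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1.0 / r)

# Distance between the centers of two rectangular boxes (e.g. the two pupils).
left_pupil, right_pupil = (44, 50), (84, 50)
print(euclidean(left_pupil, right_pupil))   # 40.0
print(manhattan(left_pupil, right_pupil))   # 40
```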
In step 460, the coordinate information of the centers of the rectangular boxes used to mark the key regions, the distance information between the centers of every two rectangular boxes, and the facial feature vector are input into the fully connected layer of the convolutional neural network, and the output facial image feature is obtained.
In this embodiment, while the facial feature vector is input into the convolutional neural network, the coordinate information of the center of each rectangular box and the distance information between the centers of every two rectangular boxes can also be input into the fully connected layer as constraints, so as to output a more accurate facial image feature.
It should be understood that the method for recognizing a facial image shown in Fig. 4 above is only an exemplary embodiment of the method for recognizing a facial image and does not limit the present application. For example, in step 410 of the present application, the key regions of the obtained facial image in which key regions are marked with rectangular boxes of predetermined size may be key regions determined with reference to 95 key points, 85 key points or 68 key points of the facial image.
With further reference to Fig. 5, as an implementation of the above method, an embodiment of the present application provides an embodiment of an apparatus for recognizing a facial image. This embodiment of the apparatus for recognizing a facial image corresponds to the embodiments of the method for recognizing a facial image shown in Figs. 1 to 4. Therefore, the operations and features described above for the method for recognizing a facial image in Figs. 1 to 4 are equally applicable to the apparatus 500 for recognizing a facial image and the units included therein and will not be repeated here.
As shown in Fig. 5, the apparatus 500 for recognizing a facial image may include: an image acquisition unit 510, configured to obtain a facial image in which key regions are marked with rectangular boxes of predetermined size; a vector extraction unit 520, configured to input the facial image into the convolutional layers of a convolutional neural network to obtain the feature vector for each key region extracted by the convolutional layers; a vector concatenation unit 530, configured to concatenate the feature vectors of the key regions in order to obtain a facial feature vector; an information acquisition unit 540, configured to obtain position information for each key region and distance information between every two key regions; and a feature output unit 550, configured to input the position information, the distance information and the facial feature vector into the fully connected layer of the neural network to obtain an output facial image feature.
In some optional implementations of this embodiment, in the image acquisition unit 510, the key regions marked with rectangular boxes of predetermined size include at least two of the following: a left eye marked with a rectangular box of predetermined size centered on the center of the pupil; a right eye marked with a rectangular box of predetermined size centered on the center of the pupil; a nose marked with a rectangular box of predetermined size centered on the nose tip; a mouth marked with a rectangular box of predetermined size centered on the left mouth corner; and a mouth marked with a rectangular box of predetermined size centered on the right mouth corner.
In some optional implementations of this embodiment, the position information of each key region obtained by the information acquisition unit 540 includes: coordinate information of the center of the rectangular box used to mark each key region.
In some optional implementations of this embodiment, the distance information between every two key regions obtained by the information acquisition unit 540 includes: distance information between the centers of every two rectangular boxes.
In some optional implementations of this embodiment, the apparatus further includes: a recognition result determination unit 560, configured to determine a recognition result for the facial image based on the similarity between the output facial image feature and facial image features in a preset database.
The present application also provides an embodiment of a device, including: one or more processors; and a storage apparatus for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method for recognizing a facial image described in any one of the above.
The present application also provides an embodiment of a computer-readable medium on which a computer program is stored; when the program is executed by a processor, the method for recognizing a facial image described in any one of the above is implemented.
Referring now to Fig. 6, Fig. 6 shows a structural schematic diagram of a computer system 600 suitable for implementing a terminal device or server of an embodiment of the present application. The terminal device shown in Fig. 6 is only an example and should not impose any restriction on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In this application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus or device. In this application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. Program code contained on a computer-readable medium may be transmitted over any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any appropriate combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a unit, a program segment or a part of code, and the unit, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in a different order from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flowchart, and combinations of boxes in a block diagram and/or flowchart, may be implemented with a dedicated hardware-based system that executes the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be arranged in a processor; for example, they may be described as: a processor including an image acquisition unit, a vector extraction unit, a vector concatenation unit, an information acquisition unit and a feature output unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the image acquisition unit may also be described as "a unit for obtaining a facial image in which key regions are marked with rectangular boxes of predetermined size".
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus described in the above embodiments, or may exist independently without being assembled into a terminal. The non-volatile computer storage medium stores one or more programs which, when executed by a device, cause the device to: obtain a facial image in which key regions are marked with rectangular boxes of predetermined size; input the facial image into the convolutional layers of a convolutional neural network to obtain a feature vector for each key region extracted by the convolutional layers; concatenate the feature vectors of the key regions in order to obtain a facial feature vector; obtain position information for each key region and distance information between every two key regions; and input the position information, the distance information and the facial feature vector into the fully connected layer of the neural network to obtain an output facial image feature.
The above description is only the preferred embodiments of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (12)

1. A method for recognizing a facial image, comprising:
obtaining a facial image in which key regions are marked with rectangular boxes of predetermined size;
inputting the facial image into convolutional layers of a convolutional neural network to obtain a feature vector for each key region extracted by the convolutional layers;
concatenating the feature vectors of the key regions in order to obtain a facial feature vector;
obtaining position information of each key region and distance information between every two of the key regions; and
inputting the position information, the distance information and the facial feature vector into a fully connected layer of the neural network to obtain an output facial image feature.
2. The method according to claim 1, wherein the key regions marked with rectangular boxes of predetermined size comprise at least two of the following:
a left eye marked with a rectangular box of predetermined size centered on the center of the pupil;
a right eye marked with a rectangular box of predetermined size centered on the center of the pupil;
a nose marked with a rectangular box of predetermined size centered on the nose tip;
a mouth marked with a rectangular box of predetermined size centered on the left mouth corner; and
a mouth marked with a rectangular box of predetermined size centered on the right mouth corner.
3. The method according to claim 2, wherein the position information of each key region comprises: coordinate information of the center of the rectangular box used to mark each key region.
4. The method according to claim 2 or 3, wherein the distance information between every two of the key regions comprises: distance information between the centers of every two of the rectangular boxes.
5. The method according to claim 1, wherein the method further comprises:
determining a recognition result for the facial image based on the similarity between the output facial image feature and facial image features in a preset database.
6. An apparatus for recognizing a facial image, comprising:
an image acquisition unit, configured to obtain a facial image in which key regions are marked with rectangular boxes of predetermined size;
a vector extraction unit, configured to input the facial image into convolutional layers of a convolutional neural network to obtain a feature vector for each key region extracted by the convolutional layers;
a vector concatenation unit, configured to concatenate the feature vectors of the key regions in order to obtain a facial feature vector;
an information acquisition unit, configured to obtain position information of each key region and distance information between every two of the key regions; and
a feature output unit, configured to input the position information, the distance information and the facial feature vector into a fully connected layer of the neural network to obtain an output facial image feature.
7. The apparatus according to claim 6, wherein the key regions marked with rectangular boxes of predetermined size in the image acquisition unit comprise at least two of the following:
a left eye marked with a rectangular box of predetermined size centered on the center of the pupil;
a right eye marked with a rectangular box of predetermined size centered on the center of the pupil;
a nose marked with a rectangular box of predetermined size centered on the nose tip;
a mouth marked with a rectangular box of predetermined size centered on the left mouth corner; and
a mouth marked with a rectangular box of predetermined size centered on the right mouth corner.
8. The apparatus according to claim 7, wherein the position information of each key region obtained by the information acquisition unit comprises: coordinate information of the center of the rectangular box used to mark each key region.
9. The apparatus according to claim 7 or 8, wherein the distance information between every two of the key regions obtained by the information acquisition unit comprises: distance information between the centers of every two of the rectangular boxes.
10. The apparatus according to claim 6, wherein the apparatus further comprises:
a recognition result determination unit, configured to determine a recognition result for the facial image based on the similarity between the output facial image feature and facial image features in a preset database.
11. A device, comprising:
one or more processors; and
a storage apparatus, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for recognizing a facial image according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for recognizing a facial image according to any one of claims 1-5.
CN201810264669.XA 2018-03-28 2018-03-28 Method and apparatus for recognizing a facial image Pending CN108446658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810264669.XA CN108446658A (en) 2018-03-28 2018-03-28 Method and apparatus for recognizing a facial image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810264669.XA CN108446658A (en) 2018-03-28 2018-03-28 Method and apparatus for recognizing a facial image

Publications (1)

Publication Number Publication Date
CN108446658A true CN108446658A (en) 2018-08-24

Family

ID=63197619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810264669.XA Pending CN108446658A (en) Method and apparatus for recognizing a facial image

Country Status (1)

Country Link
CN (1) CN108446658A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222566A (en) * 2019-04-30 2019-09-10 北京迈格威科技有限公司 A kind of acquisition methods of face characteristic, device, terminal and storage medium
CN110443015A (en) * 2019-06-28 2019-11-12 北京市政建设集团有限责任公司 Electromechanical equipment control method and control equipment
CN110852221A (en) * 2019-10-30 2020-02-28 深圳智慧林网络科技有限公司 Intelligent face recognition method based on block combination, terminal and storage medium
CN111340004A (en) * 2020-03-27 2020-06-26 北京爱笔科技有限公司 Vehicle image recognition method and related device
CN116416671A (en) * 2023-06-12 2023-07-11 太平金融科技服务(上海)有限公司深圳分公司 Face image correcting method and device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686051A (en) * 2005-05-08 2005-10-26 上海交通大学 Canthus and pupil location method based on VPP and improved SUSAN
CN1731416A (en) * 2005-08-04 2006-02-08 上海交通大学 Method of quick and accurate human face feature point positioning
CN101853523A (en) * 2010-05-18 2010-10-06 南京大学 Method for adopting rough drawings to establish three-dimensional human face molds
CN102194131A (en) * 2011-06-01 2011-09-21 华南理工大学 Fast human face recognition method based on geometric proportion characteristic of five sense organs
CN103605965A (en) * 2013-11-25 2014-02-26 苏州大学 Multi-pose face recognition method and device
CN103984919A (en) * 2014-04-24 2014-08-13 上海优思通信科技有限公司 Facial expression recognition method based on rough set and mixed features
CN105160331A (en) * 2015-09-22 2015-12-16 镇江锐捷信息科技有限公司 Hidden Markov model based face geometrical feature identification method
CN105678235A (en) * 2015-12-30 2016-06-15 北京工业大学 Three dimensional facial expression recognition method based on multiple dimensional characteristics of representative regions
CN105678232A (en) * 2015-12-30 2016-06-15 中通服公众信息产业股份有限公司 Face image feature extraction and comparison method based on deep learning
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus
CN106650693A (en) * 2016-12-30 2017-05-10 河北三川科技有限公司 Multi-feature fusion identification algorithm used for human face comparison
CN106980809A (en) * 2016-01-19 2017-07-25 深圳市朗驰欣创科技股份有限公司 A kind of facial feature points detection method based on ASM

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686051A (en) * 2005-05-08 2005-10-26 上海交通大学 Canthus and pupil location method based on VPP and improved SUSAN
CN1731416A (en) * 2005-08-04 2006-02-08 上海交通大学 Method of quick and accurate human face feature point positioning
CN101853523A (en) * 2010-05-18 2010-10-06 南京大学 Method for adopting rough drawings to establish three-dimensional human face molds
CN102194131A (en) * 2011-06-01 2011-09-21 华南理工大学 Fast human face recognition method based on geometric proportion characteristic of five sense organs
CN103605965A (en) * 2013-11-25 2014-02-26 苏州大学 Multi-pose face recognition method and device
CN103984919A (en) * 2014-04-24 2014-08-13 上海优思通信科技有限公司 Facial expression recognition method based on rough set and mixed features
CN105160331A (en) * 2015-09-22 2015-12-16 镇江锐捷信息科技有限公司 Hidden Markov model based face geometrical feature identification method
CN105678235A (en) * 2015-12-30 2016-06-15 北京工业大学 Three dimensional facial expression recognition method based on multiple dimensional characteristics of representative regions
CN105678232A (en) * 2015-12-30 2016-06-15 中通服公众信息产业股份有限公司 Face image feature extraction and comparison method based on deep learning
CN106980809A (en) * 2016-01-19 2017-07-25 深圳市朗驰欣创科技股份有限公司 A kind of facial feature points detection method based on ASM
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus
CN106650693A (en) * 2016-12-30 2017-05-10 河北三川科技有限公司 Multi-feature fusion identification algorithm used for human face comparison

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222566A (en) * 2019-04-30 2019-09-10 北京迈格威科技有限公司 A kind of acquisition methods of face characteristic, device, terminal and storage medium
CN110443015A (en) * 2019-06-28 2019-11-12 北京市政建设集团有限责任公司 Electromechanical equipment control method and control equipment
CN110852221A (en) * 2019-10-30 2020-02-28 深圳智慧林网络科技有限公司 Intelligent face recognition method based on block combination, terminal and storage medium
CN110852221B (en) * 2019-10-30 2023-08-18 深圳智慧林网络科技有限公司 Face intelligent recognition method based on block combination, terminal and storage medium
CN111340004A (en) * 2020-03-27 2020-06-26 北京爱笔科技有限公司 Vehicle image recognition method and related device
CN116416671A (en) * 2023-06-12 2023-07-11 太平金融科技服务(上海)有限公司深圳分公司 Face image correcting method and device, electronic equipment and storage medium
CN116416671B (en) * 2023-06-12 2023-10-03 太平金融科技服务(上海)有限公司深圳分公司 Face image correcting method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108446387A (en) Method and apparatus for updating face registration library
CN108446658A (en) Method and apparatus for recognizing a facial image
CN109086719A (en) Method and apparatus for output data
CN107644209A (en) Method for detecting human face and device
CN108898185A (en) Method and apparatus for generating image recognition model
CN110188719B (en) Target tracking method and device
CN108388878A (en) The method and apparatus of face for identification
CN108446390A (en) Method and apparatus for pushed information
CN109034069A (en) Method and apparatus for generating information
CN108595628A (en) Method and apparatus for pushed information
CN108345387A (en) Method and apparatus for output information
CN108446385A (en) Method and apparatus for generating information
CN109993150A (en) The method and apparatus at age for identification
CN110348419A (en) Method and apparatus for taking pictures
CN108509892A (en) Method and apparatus for generating near-infrared image
CN109829432A (en) Method and apparatus for generating information
CN108171211A (en) Biopsy method and device
CN109815365A (en) Method and apparatus for handling video
CN109241934A (en) Method and apparatus for generating information
CN108960110A (en) Method and apparatus for generating information
CN108509921A (en) Method and apparatus for generating information
CN108509904A (en) Method and apparatus for generating information
CN109377508A (en) Image processing method and device
CN109214501A (en) The method and apparatus of information for identification
CN108491812A (en) The generation method and device of human face recognition model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination