CN108133201B - Face attribute recognition method and apparatus - Google Patents

Face attribute recognition method and apparatus

Info

Publication number
CN108133201B
CN108133201B (application CN201810044637.9A)
Authority
CN
China
Prior art keywords
image
face
network
light
face attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810044637.9A
Other languages
Chinese (zh)
Other versions
CN108133201A (en)
Inventor
刘经拓
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810044637.9A
Publication of CN108133201A
Application granted
Publication of CN108133201B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

Embodiments of the present application disclose a face attribute recognition method and apparatus. One specific embodiment of the method includes: acquiring a to-be-processed image, where the to-be-processed image is an image of a face shot under a non-frontal uniform light source; inputting the to-be-processed image into a pre-trained image generation model to obtain a light-adjusted optimized image of the to-be-processed image, where the optimized image is the face image as it would appear under a frontal uniform light source, and the image generation model is used to light-adjust images shot under a non-frontal uniform light source so as to generate images under a frontal uniform light source; and inputting the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to recognize the attributes of a face in an image to obtain its face attribute information. This embodiment improves the accuracy of face attribute recognition.

Description

Face attribute recognition method and apparatus
Technical field
Embodiments of the present application relate to the field of computer technology, specifically to the field of image processing, and more particularly to a face attribute recognition method and apparatus.
Background technique
With the development of Internet technology, face recognition has been applied in more and more fields. For example, face attribute recognition may be performed through face recognition; face attribute recognition is a technology for identifying attribute values of a face such as gender, age, ethnicity and expression. In general, when the lighting environment is poor (for example, backlighting or side lighting), the objects in an image are unclear and hard to recognize, yet the existing approach is usually to perform face attribute recognition directly on the face in such an image.
Summary of the invention
Embodiments of the present application propose a face attribute recognition method and apparatus.
In a first aspect, an embodiment of the present application provides a face attribute recognition method, including: acquiring a to-be-processed image, where the to-be-processed image is an image of a face shot under a non-frontal uniform light source; inputting the to-be-processed image into a pre-trained image generation model to obtain a light-adjusted optimized image of the to-be-processed image, where the optimized image is the face image as it would appear under a frontal uniform light source, and the image generation model is used to light-adjust images shot under a non-frontal uniform light source so as to generate images under a frontal uniform light source; and inputting the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to recognize the attributes of a face in an image to obtain its face attribute information.
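The three-step method above can be sketched as a small pipeline. All function and variable names here are illustrative assumptions, not from the patent, and the two "models" are toy stand-ins rather than trained networks:

```python
# Sketch of the method: acquire image, light-adjust it with the image
# generation model, then recognize face attributes on the optimized image.

def face_attribute_pipeline(raw_image, generation_model, attribute_model):
    optimized = generation_model(raw_image)  # light-adjusted optimized image
    return attribute_model(optimized)        # face attribute information

# Toy stand-ins so the sketch runs:
toy_generator = lambda img: [min(255, p + 40) for p in img]  # brighten pixels
toy_attr_model = lambda img: {"gender": "female", "age": 30}

result = face_attribute_pipeline([10, 200, 250], toy_generator, toy_attr_model)
print(result)  # → {'gender': 'female', 'age': 30}
```

In a real system the two callables would be the pre-trained image generation model and face attribute recognition model described in the text.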
In some embodiments, the image generation model is trained as follows: a preset training sample and a pre-established generative adversarial network are acquired, where the generative adversarial network includes a generation network, a first discrimination network and a second discrimination network; the generation network is used to light-adjust an input image and output the adjusted image; the first discrimination network is used to determine whether an input image was output by the generation network; the second discrimination network is used to determine whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network; the face attribute information of the face in the image output by the generation network is obtained by inputting that image into a pre-trained face attribute recognition model, while the face attribute information of the face in the image input to the generation network is obtained in advance. Using a machine learning method, the generation network, the first discrimination network and the second discrimination network are trained, and the trained generation network is determined as the image generation model.
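The roles of the three networks can be made concrete with a toy sketch (the dict-based "image" representation and all names are assumptions made for illustration; real networks would operate on pixel tensors):

```python
def generation_network(first_image):
    # Toy light adjustment: relabel the lighting, keep the face attributes.
    out = dict(first_image)
    out["lighting"] = "frontal-uniform"
    out["generated"] = True
    return out

def first_discrimination_network(image):
    # 1 if the image was output by the generation network, 0 if it is real.
    return 1 if image.get("generated") else 0

def second_discrimination_network(output_attrs, input_attrs):
    # 1 if attributes of G's output match attributes of G's input, else 0.
    return 1 if output_attrs == input_attrs else 0

first_image = {"lighting": "backlit", "attrs": {"gender": "male", "age": 25}}
fake = generation_network(first_image)
print(first_discrimination_network(fake))  # → 1
print(second_discrimination_network(fake["attrs"], first_image["attrs"]))  # → 1
```

The second discrimination network is what forces the generator to change only the lighting while preserving the face attributes.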
In some embodiments, the training sample includes multiple first images of faces shot under a non-frontal uniform light source, second images of the same faces shot under a frontal uniform light source, and the face attribute information of the faces in the second images.
In some embodiments, training the generation network, the first discrimination network and the second discrimination network with a machine learning method and determining the trained generation network as the image generation model includes executing the following training step: fixing the parameters of the generation network, taking the first image as the input of the generation network, and inputting the image output by the generation network into the pre-trained face attribute recognition model to obtain the face attribute information of the face to be recognized; taking the image output by the generation network and the second image as inputs of the first discrimination network, taking the face attribute information of the face to be recognized and the face attribute information of the face in the second image as inputs of the second discrimination network, and training the first discrimination network and the second discrimination network with a machine learning method; fixing the parameters of the trained first and second discrimination networks, taking the first image as the input of the generation network, and training the generation network with a machine learning method, back-propagation and gradient descent; and determining the loss function values of the trained first and second discrimination networks and, in response to determining that the loss function values have converged, determining the trained generation network as the image generation model.
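The alternating schedule in this training step can be mimicked with scalar "parameters". This is only a toy under stated assumptions: the real method back-propagates through network weights, whereas here each side is a single number and the update rules merely imitate "fix one side, update the other, stop when the changes converge":

```python
def training_step(g, d, lr=0.5, target=1.0):
    d_new = d + lr * (g - d)       # G fixed: discriminators chase G's output
    g_new = g + lr * (target - g)  # D fixed: generator chases the real target
    return g_new, d_new

def train(g=0.0, d=0.0, tol=1e-3, max_iter=1000):
    for _ in range(max_iter):
        g_new, d_new = training_step(g, d)
        # Stand-in for "the loss function values have converged":
        if abs(g_new - g) < tol and abs(d_new - d) < tol:
            return g_new, d_new
        g, d = g_new, d_new
    return g, d

g_final, d_final = train()
print(round(g_final, 2), round(d_final, 2))  # → 1.0 1.0
```

The point of the sketch is the control flow: two-phase updates per iteration, with the convergence check deciding when the trained generator is frozen as the image generation model.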
In some embodiments, the training further includes: in response to determining that the loss function values have not converged, re-executing the training step with the trained generation network, first discrimination network and second discrimination network.
In some embodiments, the training sample is generated as follows: a pre-established three-dimensional face model is acquired; different light source parameters are set to render the three-dimensional face model, yielding a first image and a second image with different light source parameters, where the light source parameters of the first image are those of a non-frontal uniform light source and the light source parameters of the second image are those of a frontal uniform light source; the second image is input into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image; and the first image, the second image and the face attribute information of the face in the second image form a training sample.
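A minimal sketch of this sample-generation scheme, assuming hypothetical stand-ins for the renderer and the attribute model (a real implementation would use a 3-D graphics renderer and a trained classifier): the same 3-D face model is rendered under two light-source parameter settings, and only the well-lit render is labeled:

```python
def render(face_model, light_source_params):
    # Hypothetical renderer: returns a tagged "image" of the model.
    return {"model": face_model, "light": light_source_params}

def make_training_sample(face_model, attribute_model):
    first_image = render(face_model, "non-frontal-uniform")  # poor lighting
    second_image = render(face_model, "frontal-uniform")     # good lighting
    attrs = attribute_model(second_image)  # label the well-lit render only
    return (first_image, second_image, attrs)

toy_attr_model = lambda img: {"gender": "male", "age": 40}
sample = make_training_sample("face-001", toy_attr_model)
print(sample[0]["light"], sample[1]["light"], sample[2]["age"])
# → non-frontal-uniform frontal-uniform 40
```

Because both renders come from the same model, the pair is perfectly aligned, which is why the attributes recognized on the second image can serve as labels for the first.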
In a second aspect, an embodiment of the present application provides a face attribute recognition apparatus, including: a first acquisition unit configured to acquire a to-be-processed image, where the to-be-processed image is an image of a face shot under a non-frontal uniform light source; a first input unit configured to input the to-be-processed image into a pre-trained image generation model to obtain a light-adjusted optimized image of the to-be-processed image, where the optimized image is the face image as it would appear under a frontal uniform light source, and the image generation model is used to light-adjust images shot under a non-frontal uniform light source so as to generate images under a frontal uniform light source; and a second input unit configured to input the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to recognize the attributes of a face in an image to obtain its face attribute information.
In some embodiments, the apparatus further includes: a second acquisition unit configured to acquire a preset training sample and a pre-established generative adversarial network, where the generative adversarial network includes a generation network, a first discrimination network and a second discrimination network; the generation network is used to light-adjust an input image and output the adjusted image; the first discrimination network is used to determine whether an input image was output by the generation network; the second discrimination network is used to determine whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network; the face attribute information of the face in the image output by the generation network is obtained by inputting that image into a pre-trained face attribute recognition model, and the face attribute information of the face in the image input to the generation network is obtained in advance; and a training unit configured to train the generation network, the first discrimination network and the second discrimination network with a machine learning method and determine the trained generation network as the image generation model.
In some embodiments, the training sample includes multiple first images of faces shot under a non-frontal uniform light source, second images of the same faces shot under a frontal uniform light source, and the face attribute information of the faces in the second images.
In some embodiments, the training unit is further configured to execute the following training step: fix the parameters of the generation network, take the first image as the input of the generation network, and input the image output by the generation network into the pre-trained face attribute recognition model to obtain the face attribute information of the face to be recognized; take the image output by the generation network and the second image as inputs of the first discrimination network, take the face attribute information of the face to be recognized and the face attribute information of the face in the second image as inputs of the second discrimination network, and train the first discrimination network and the second discrimination network with a machine learning method; fix the parameters of the trained first and second discrimination networks, take the first image as the input of the generation network, and train the generation network with a machine learning method, back-propagation and gradient descent; and determine the loss function values of the trained first and second discrimination networks and, in response to determining that the loss function values have converged, determine the trained generation network as the image generation model.
In some embodiments, the training unit is further configured to: in response to determining that the loss function values have not converged, re-execute the training step with the trained generation network, first discrimination network and second discrimination network.
In some embodiments, the apparatus further includes: a third acquisition unit configured to acquire a pre-established three-dimensional face model; a setting unit configured to set different light source parameters to render the three-dimensional face model, obtaining a first image and a second image with different light source parameters, where the light source parameters of the first image are those of a non-frontal uniform light source and the light source parameters of the second image are those of a frontal uniform light source; a third input unit configured to input the second image into the pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image; and a composing unit configured to form a training sample from the first image, the second image and the face attribute information of the face in the second image.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the face attribute recognition method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method of any embodiment of the face attribute recognition method.
With the face attribute recognition method and apparatus provided by the embodiments of the present application, an acquired to-be-processed image is input into a pre-trained image generation model to obtain a light-adjusted optimized image of the to-be-processed image; the optimized image is then input into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image. Thus, even for images shot in a poor lighting environment (for example, backlighting or side lighting), the attribute information of the face can be determined accurately, improving the accuracy of face attribute recognition.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the face attribute recognition method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the face attribute recognition method according to the present application;
Fig. 4 is a structural schematic diagram of one embodiment of the face attribute recognition apparatus according to the present application;
Fig. 5 is a structural schematic diagram of a computer system adapted to implement an electronic device of the embodiments of the present application.
Detailed description of the embodiments
The present application will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that the embodiments of the present application and the features therein may be combined with each other as long as there is no conflict. The present application will be described in detail below with reference to the accompanying drawings and in combination with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the face attribute recognition method or face attribute recognition apparatus of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as photography and video applications, image processing applications, face recognition applications and search applications.
The terminal devices 101, 102, 103 may be various electronic devices that have a camera and support information exchange, including but not limited to smartphones, tablet computers, laptop portable computers and desktop computers.
The server 105 may be a server providing various services, such as an image processing server that processes images uploaded by the terminal devices 101, 102, 103. The image processing server may, for example, analyze and process a received to-be-processed image and feed the processing result (such as face attribute information) back to the terminal device.
It should be noted that the face attribute recognition method provided by the embodiments of the present application is generally executed by the server 105; correspondingly, the face attribute recognition apparatus is generally provided in the server 105.
It should be pointed out that the server 105 may also directly store to-be-processed images locally, and the server 105 may directly acquire a local to-be-processed image for face attribute recognition; in this case, the exemplary system architecture 100 may have no terminal devices 101, 102, 103 and no network 104.
It should also be noted that an image processing application may be installed in the terminal devices 101, 102, 103, and the terminal devices 101, 102, 103 may also perform face attribute recognition on a to-be-processed image based on the image processing application. In this case, the face attribute recognition method may also be executed by the terminal devices 101, 102, 103, and correspondingly the face attribute recognition apparatus may also be provided in the terminal devices 101, 102, 103. In this case, the server 105 and the network 104 may be absent from the exemplary system architecture 100.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the face attribute recognition method according to the present application is shown. The face attribute recognition method includes the following steps:
Step 201: acquire a to-be-processed image.
In the present embodiment, the electronic device on which the face attribute recognition method runs may first acquire a to-be-processed image, where the to-be-processed image may be an image of a face shot under a non-frontal uniform light source. In practice, when a target object (such as a face or an article) is shot, a point light source or area light source projected from the front of the target object toward its center may be regarded as a frontal uniform light source, while a point light source or area light source projected from a non-frontal direction of the target object, or toward a non-central part of it, may be regarded as a non-frontal uniform light source. Here, the front of the target object may be the side that faces forward (such as the front of a face), the more important side of the target object (such as the plane shown in the front view of a cup), or any side of the target object designated in advance by a technician. The front of the target object may be the plane shown in its front view: observed from the front of the target object, the image of the object is projected backward onto a projection plane, and this projected image is called the front view. The center of the target object may be its optical center, its geometric center, the point closest to the photographing apparatus, or the like; it may also be a part of the target object designated in advance by a technician (such as the nose) or a region of it (such as the nose region). Here, if the light source is a point light source, a frontal uniform point light source may be understood as one for which the line from its light-emitting point to the center of the target object is perpendicular to the plane of the front view of the target object. If the light source is an area light source, a frontal uniform area light source may be understood as one for which the line from the center of the area source to the center of the target object is perpendicular both to the plane of the light-emitting surface of the area source and to the plane of the front view of the target object.
It should be noted that the to-be-processed image may be stored directly in the local storage of the electronic device, in which case the electronic device may acquire the to-be-processed image directly from local storage. Alternatively, the to-be-processed image may be sent to the electronic device by another electronic device connected to it through a wired or wireless connection, where the wireless connection may include but is not limited to 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband) and other connection modes now known or developed in the future.
Step 202: input the to-be-processed image into a pre-trained image generation model to obtain a light-adjusted optimized image of the to-be-processed image.
In the present embodiment, the electronic device may input the to-be-processed image into the pre-trained image generation model to obtain a light-adjusted optimized image of the to-be-processed image, where the optimized image may be the image as it would appear under a frontal uniform light source.
It should be noted that the image generation model may be used to light-adjust images shot under a non-frontal uniform light source so as to generate images under a frontal uniform light source. As an example, the image generation model may be a model obtained by training, in advance, a model for image processing (for example, a convolutional neural network (CNN)) on training samples with a machine learning method. The convolutional neural network may include convolutional layers, pooling layers, unpooling layers and deconvolution layers, where a convolutional layer may be used to extract image features, a pooling layer may be used to downsample the input information, an unpooling layer may be used to upsample the input information, and a deconvolution layer is used to deconvolve the input information, processing the input with the transpose of the convolution kernel of a convolutional layer as its convolution kernel. Deconvolution is the inverse operation of convolution and realizes signal recovery. The last deconvolution layer of the convolutional neural network may output the optimized image; the output optimized image may be expressed as a matrix of RGB (red green blue) three channels, and its size may be the same as that of the to-be-processed image. In practice, a convolutional neural network is a feedforward neural network whose artificial neurons respond to surrounding units within part of the coverage area, and it performs outstandingly in image processing; therefore, a convolutional neural network can be used for image processing. It should be noted that the electronic device may train the convolutional neural network in various ways (such as supervised training or unsupervised training) to obtain the image generation model.
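The size-preserving property claimed here (pooling downsamples, unpooling and deconvolution restore the resolution, so the output optimized image matches the input size) can be checked on a 1-D toy signal; the functions below are simplified stand-ins for the real 2-D layers:

```python
def pool(xs):
    # Max pooling with stride 2: downsample by a factor of 2.
    return [max(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]

def unpool(xs):
    # Nearest-neighbour unpooling: upsample by a factor of 2.
    return [x for v in xs for x in (v, v)]

signal = [3, 1, 4, 1, 5, 9, 2, 6]
pooled = pool(signal)        # [3, 4, 9, 6]
restored = unpool(pooled)    # [3, 3, 4, 4, 9, 9, 6, 6]
print(len(restored) == len(signal))  # → True
```

The real network pairs each pooling stage with an unpooling/deconvolution stage in the same way, which is what lets the last deconvolution layer emit an image of the same size as the to-be-processed input.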
In some optional implementations of the present embodiment, the image generation model may be trained as follows:
In a first step, a preset training sample and a pre-established generative adversarial network (GAN) are acquired. For example, the generative adversarial network may be a deep convolutional generative adversarial network (DCGAN). The generative adversarial network may include a generation network, a first discrimination network and a second discrimination network. The generation network may be used to light-adjust an input image and output the adjusted image; the first discrimination network may be used to determine whether an input image was output by the generation network; and the second discrimination network may be used to determine whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network. Here, the face attribute information of the face in the image output by the generation network is obtained by inputting that image into a pre-trained face attribute recognition model, while the face attribute information of the face in the image input to the generation network is manually annotated face attribute information acquired in advance. The face attribute recognition model may be used to recognize the attributes of a face in an image to obtain its face attribute information, and may be obtained by supervised training of an existing model (such as a convolutional neural network) with a machine learning method. The training samples used by the face attribute recognition model may include a large number of face images and the face attribute information of the face in each face image. In practice, the face image in a sample may be taken as the input of the model and the face attribute information as the output of the model; the model is trained with a machine learning method, and the trained model is determined as the face attribute recognition model.
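The supervised training of the face attribute recognition model can be sketched as a trivial memorizing "trainer". This is purely illustrative, assuming toy string images; a real model would be a convolutional neural network fitted by gradient descent and able to generalize beyond its training samples:

```python
def train_attribute_model(samples):
    # samples: list of (face_image, face_attribute_info) pairs, where the
    # image is the model input and the attribute info is the target output.
    lookup = {image: attrs for image, attrs in samples}
    def model(image):
        return lookup.get(image, {"gender": "unknown"})
    return model

samples = [("img-1", {"gender": "male", "age": 20}),
           ("img-2", {"gender": "female", "age": 35})]
model = train_attribute_model(samples)
print(model("img-2"))  # → {'gender': 'female', 'age': 35}
```

The input/output contract is the point: images in, attribute information out, with the labeled pairs defining the supervision signal.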
It should be noted that the face attribute information mainly includes gender, age, ethnicity, expression and the like. The generation network may be a convolutional neural network for image processing (for example, various convolutional neural network structures containing convolutional layers, pooling layers, unpooling layers and deconvolution layers, which may successively perform downsampling and upsampling). The first discrimination network and the second discrimination network may be convolutional neural networks (for example, various convolutional neural network structures containing fully connected layers, where the fully connected layers may implement a classification function) or other model structures that can implement a classification function, such as a support vector machine (SVM). It should be noted that the image output by the generation network may be expressed as a matrix of RGB three channels. Here, the first discrimination network may output 1 if it determines that the input image was output by the generation network (i.e., comes from generated data), and may output 0 if it determines that the input image was not output by the generation network (i.e., comes from real data, namely the second image). The second discrimination network may output 1 if it determines that the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network, and may output 0 if it determines that they do not match. It should be noted that the first discrimination network and the second discrimination network may also output other preset values, not limited to 1 and 0.
In the second step, the generation network, the first discrimination network and the second discrimination network are trained based on the training samples using a machine learning method, and the trained generation network is determined as the image generation model. Specifically, the parameters of either the generation network or the discrimination networks (including the first and second discrimination networks) may first be fixed (the fixed part may be referred to as the first network), and the network whose parameters are not fixed (which may be referred to as the second network) is optimized; then the parameters of the second network are fixed and the first network is improved. This iteration is carried out continuously until the loss function values of the first and second discrimination networks converge, at which point the generation network may be determined as the image generation model. It should be noted that this iterative process, carried out until the loss function values of the first and second discrimination networks converge, is a back-propagation process.
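The alternation described above (fix one side, update the other, stop on loss convergence) can be sketched as a minimal, framework-free loop. The `step` closures and the convergence tolerance below are illustrative assumptions standing in for the real networks, not the patent's implementation.

```python
def train_adversarially(disc_step, gen_step, max_iters=1000, tol=1e-6):
    """Alternate: freeze the generator -> update the discriminators,
    then freeze the discriminators -> update the generator, until the
    combined discriminator loss stops changing (converges)."""
    prev_loss = float("inf")
    for i in range(max_iters):
        disc_loss = disc_step()   # generator parameters held fixed
        gen_step()                # discriminator parameters held fixed
        if abs(prev_loss - disc_loss) < tol:  # loss values converged
            return i + 1          # number of alternations performed
        prev_loss = disc_loss
    return max_iters

# Toy closures standing in for the real networks: the discriminator
# "loss" halves every call, so the loop converges well before max_iters.
state = {"loss": 1.0}
def toy_disc_step():
    state["loss"] *= 0.5
    return state["loss"]
def toy_gen_step():
    pass

iters = train_adversarially(toy_disc_step, toy_gen_step)
```

In a real implementation each `step` would run one optimizer pass over a batch with the other network's gradients disabled; only the fix/update alternation and the convergence check carry over from this sketch.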
In some optional implementations of the present embodiment, the training samples may include a plurality of first images obtained by photographing faces under non-frontal uniform light source conditions, second images obtained by photographing faces under frontal uniform light source conditions, and the face attribute information of the faces in the second images. In practice, the first image and the second image captured in the same light source environment have the same shooting angle, the same photographed object and the same object position; therefore, the face attribute information of the face in the first image is identical to that of the face in the corresponding second image. After obtaining the preset training samples and the pre-established generative adversarial network, the electronic device may train the image generation model through the following training steps:
In the first step, the parameters of the generation network are fixed, the first image is used as the input of the generation network, and the image output by the generation network is input to a pre-trained face attribute recognition model to obtain the face attribute information of the face to be recognized.
In the second step, the image output by the generation network and the second image are used as inputs of the first discrimination network, the face attribute information of the face to be recognized and the face attribute information of the face in the second image are used as inputs of the second discrimination network, and the first and second discrimination networks are trained using a machine learning method. It should be noted that since the image output by the generation network is generated data while the second image is known to be real data, a label indicating whether an image input to the first discrimination network is generated data or real data can be produced automatically.
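Because it is known a priori which images come from the generation network and which are second images, the labels for the first discrimination network can be produced automatically. A small sketch (the function name is hypothetical; the 1 = generated, 0 = real convention follows the description above):

```python
def make_first_discriminator_batch(generated_images, real_images):
    """Pair each image with an automatically generated label:
    1 for images output by the generation network (generated data),
    0 for second images (real data)."""
    images = list(generated_images) + list(real_images)
    labels = [1] * len(generated_images) + [0] * len(real_images)
    return images, labels

imgs, lbls = make_first_discriminator_batch(["g1", "g2"], ["r1"])
```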
In the third step, the parameters of the trained first and second discrimination networks are fixed, the first image is used as the input of the generation network, and the generation network is trained using a machine learning method, a back-propagation algorithm and a gradient descent algorithm. In practice, the back-propagation algorithm and the gradient descent algorithm are well-known techniques that are widely studied and applied at present, and details are not described herein.

In the fourth step, the loss function values of the trained first and second discrimination networks are determined, and in response to determining that the loss function values converge, the generation network is determined as the image generation model.

It should be noted that, in response to determining that the loss function values do not converge, the electronic device may re-execute the above training steps using the trained generation network, first discrimination network and second discrimination network. In this way, the parameters of the image generation model obtained by training the generative adversarial network are based not only on the training samples but are also determined through the back-propagation of the first and second discrimination networks, so that the generation model can be trained without relying on a large number of labeled samples. This reduces labor cost and further improves the flexibility of image processing.
In some optional implementations of the present embodiment, the training samples may be generated by the following steps:

In the first step, a pre-established three-dimensional face model is obtained. Here, the three-dimensional face model may be pre-established by a technician using any of various existing three-dimensional model design tools. Such a tool can support setting different types of light sources to render the established three-dimensional face model, and can support functions such as projective transformation from the three-dimensional model to a two-dimensional image; details are not described herein again.
In the second step, different light source parameters are respectively set to render the three-dimensional face model, obtaining a first image and a second image with different illumination parameters, where the light source parameters of the first image are parameters under non-frontal uniform light source conditions and the light source parameters of the second image are parameters under frontal uniform light source conditions. In practice, light sources can be arranged at various angles relative to the three-dimensional face model, such as the top, bottom, rear, side and front, and the light sources can be of various types such as point light sources and area light sources. Here, since the three-dimensional model design tool supports projective transformation, the first image and the second image can be obtained directly with the tool. In addition, the first image and the second image may be set to have the same viewing angle relative to the three-dimensional face model.
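The rendering setup of this step can be sketched as enumerating light-source configurations, of which only the frontal uniform one yields the second image and every other one yields a first image. The position and type names below are illustrative assumptions; a real three-dimensional design tool exposes far richer light parameters.

```python
import itertools

POSITIONS = ["top", "bottom", "behind", "side", "front"]
LIGHT_TYPES = ["point", "area"]

def light_configurations():
    """Enumerate (position, type) pairs and mark the frontal uniform
    configuration (front + area), which produces the second image."""
    configs = []
    for pos, kind in itertools.product(POSITIONS, LIGHT_TYPES):
        configs.append({
            "position": pos,
            "type": kind,
            "frontal_uniform": pos == "front" and kind == "area",
        })
    return configs

configs = light_configurations()
second_image_configs = [c for c in configs if c["frontal_uniform"]]
```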
In the third step, the second image is input to the pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image. It should be noted that the face attribute recognition model used in this step is the same model as the one used to obtain the face attribute information of the face in the processed image, of the face in the image input to the generation network, and of the face to be recognized; the operating method of this step is essentially identical to the corresponding operating methods above, and details are not described herein again.
In the fourth step, the first image, the second image, and the face attribute information of the face in the second image are composed into a training sample. Compared with directly collecting real images with a camera, establishing training samples with a three-dimensional face model makes it possible to generate more samples flexibly and quickly; moreover, various angles and various types of illumination conditions can be simulated, making the training sample data richer and the coverage wider.
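Composing one training sample from the two renders and the attribute model's output on the second image can be sketched as follows; the `stub_model` callable stands in for the pre-trained face attribute recognition model and is an assumption of this sketch.

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    first_image: str    # render under a non-frontal light source
    second_image: str   # render under the frontal uniform light source
    attributes: dict    # attribute model output for second_image

def compose_sample(first_image, second_image, attribute_model):
    """Build one training sample: both renders share the same viewing
    angle, so the second image's attributes also describe the first."""
    return TrainingSample(first_image, second_image,
                          attribute_model(second_image))

# Stub attribute model: a fixed prediction regardless of input.
stub_model = lambda image: {"gender": "male", "age": 30}
sample = compose_sample("dark_render.png", "frontal_render.png", stub_model)
```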
Step 203: the optimized image is input to a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image.

In the present embodiment, the electronic device may input the optimized image to a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to recognize the face attributes of a face in an image to obtain face attribute information. It should be noted that the face attribute recognition model used in this step is the same model as the one used to obtain the face attribute information of the face in the processed image, of the face in the image input to the generation network, of the face to be recognized, and of the face in the second image; the operating method of this step is essentially identical to the corresponding operating methods above, and details are not described herein.
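Steps 201 to 203 chain together as a two-stage pipeline: relight first, then recognize attributes. The sketch below uses stub callables for both models (assumptions of this sketch); only the ordering of the two stages is taken from the method.

```python
def recognize_face_attributes(raw_image, image_generation_model,
                              face_attribute_model):
    """Step 202: adjust light toward frontal uniform conditions;
    step 203: run attribute recognition on the optimized image."""
    optimized_image = image_generation_model(raw_image)
    return face_attribute_model(optimized_image)

# Stubs: "relighting" tags the image, recognition echoes the tag.
relight = lambda img: ("relit", img)
recognize = lambda img: {"input": img, "gender": "male"}

result = recognize_face_attributes("backlit.png", relight, recognize)
```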
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the face attribute recognition method according to the present embodiment. In the application scenario of Fig. 3, the electronic device (e.g., a mobile phone) for processing images may first turn on the camera and photograph an object (e.g., a face) under the current non-frontal uniform light source conditions (e.g., backlight), thereby obtaining the image to be processed (as indicated by reference numeral 301). Then, the electronic device may input the image to be processed to the pre-trained image generation model to obtain an optimized image in which the light of the image to be processed has been adjusted (as indicated by reference numeral 302). It should be noted that the images indicated by reference numerals 301 and 302 are only schematic. Finally, the optimized image (as indicated by reference numeral 302) is input to the pre-trained face attribute recognition model, and the face attribute information of the face in the optimized image is obtained (as indicated by reference numeral 303): male, 30 years old, yellow race.
In the method provided by the above embodiment of the application, the acquired image to be processed is input to a pre-trained image generation model to obtain an optimized image in which the light of the image to be processed has been adjusted, and the optimized image is then input to a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image. Thus, for an image captured under poor lighting conditions (e.g., backlight or sidelight), the attribute information of the face can be determined accurately, improving the accuracy of face attribute recognition.
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the application provides an embodiment of a face attribute recognition apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can specifically be applied in various electronic devices.
As shown in Fig. 4, the face attribute recognition apparatus 400 of the present embodiment includes: a first acquisition unit 401, a first input unit 402 and a second input unit 403. The first acquisition unit 401 is configured to acquire an image to be processed, where the image to be processed is an image of a face photographed under non-frontal uniform light source conditions. The first input unit 402 is configured to input the image to be processed to a pre-trained image generation model to obtain an optimized image in which the light of the image to be processed has been adjusted, where the optimized image is a face image presented under frontal uniform light source conditions, and the image generation model is used to perform light adjustment on an image captured under non-frontal uniform light source conditions to generate an image under frontal uniform light source conditions. The second input unit 403 is configured to input the optimized image to a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, where the face attribute recognition model is used to recognize the face attributes of a face in an image to obtain face attribute information.
In the present embodiment, for the specific processing of the first acquisition unit 401, the first input unit 402 and the second input unit 403 of the face attribute recognition apparatus 400, reference may be made to step 201, step 202 and step 203 in the embodiment corresponding to Fig. 2.
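The three units of apparatus 400 map naturally onto one class whose methods mirror steps 201 to 203. The sketch below is an illustrative assumption of how the units could be wired together, not the patent's actual implementation.

```python
class FaceAttributeRecognitionApparatus:
    """Mirror of apparatus 400: unit 401 acquires the image,
    unit 402 relights it, unit 403 recognizes attributes."""
    def __init__(self, acquire, image_generation_model, attribute_model):
        self.acquire = acquire                  # first acquisition unit
        self.generate = image_generation_model  # first input unit
        self.recognize = attribute_model        # second input unit

    def run(self):
        raw = self.acquire()              # step 201
        optimized = self.generate(raw)    # step 202
        return self.recognize(optimized)  # step 203

apparatus = FaceAttributeRecognitionApparatus(
    acquire=lambda: "sidelit.png",
    image_generation_model=lambda img: "optimized:" + img,
    attribute_model=lambda img: {"source": img, "age": 30},
)
output = apparatus.run()
```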
In some optional implementations of the present embodiment, the face attribute recognition apparatus 400 may further include a second acquisition unit (not shown) and a training unit (not shown). In the first step, the second acquisition unit may acquire preset training samples and a pre-established generative adversarial network. For example, the generative adversarial network may be a deep convolutional generative adversarial network. The generative adversarial network may include a generation network, a first discrimination network and a second discrimination network: the generation network may be used to perform light adjustment on an input image and output the adjusted image, the first discrimination network may be used to determine whether an input image was output by the generation network, and the second discrimination network may be used to determine whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network. Here, the face attribute information of the face in the image output by the generation network is obtained by inputting that image to a pre-trained face attribute recognition model, while the face attribute information of the face in the image input to the generation network is manually labeled attribute information obtained in advance. The face attribute recognition model may be used to recognize the face attributes of a face in an image to obtain face attribute information, and may be obtained by performing supervised training on an existing model (e.g., a convolutional neural network) using a machine learning method. The samples used for training the face attribute recognition model may include a large number of face images and the face attribute information of the face in each face image. In practice, the face images in the samples may be used as the input of the model and the face attribute information as the output of the model, the model may be trained using a machine learning method, and the trained model may be determined as the face attribute recognition model.
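The supervised training just described (face image in, attribute label out) can be sketched with a minimal perceptron on toy feature vectors. The features, labels and learning rate are illustrative assumptions; a real implementation would train a convolutional neural network on actual face images.

```python
def train_attribute_classifier(samples, labels, epochs=50, lr=0.1):
    """Minimal perceptron: samples are feature vectors standing in for
    face images, labels are 0/1 attribute values (e.g. gender)."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred           # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    # Return the trained model as a prediction function.
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Linearly separable toy data: the second feature decides the label.
X = [[1.0, 2.0], [2.0, 3.0], [1.0, -2.0], [2.0, -3.0]]
y = [1, 1, 0, 0]
model = train_attribute_classifier(X, y)
predictions = [model(x) for x in X]
```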
It should be noted that face attribute information mainly includes gender, age, race, expression, and the like. The generation network may be a convolutional neural network for performing image processing; the first and second discrimination networks may be convolutional neural networks, or other model structures that can implement a classification function, such as support vector machines. It should be noted that the image output by the generation network may be expressed as an RGB three-channel matrix. Here, the first discrimination network may output 1 if it determines that the input image is an image output by the generation network, and may output 0 otherwise. The second discrimination network may output 1 if it determines that the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input to the generation network, and may output 0 if the two do not match. It should be noted that the first and second discrimination networks may also output other preset numerical values, and are not limited to 1 and 0.
In the second step, the training unit may train the generation network, the first discrimination network and the second discrimination network based on the training samples using a machine learning method, and determine the trained generation network as the image generation model. Specifically, the training unit may first fix the parameters of either the generation network or the discrimination networks (which may be referred to as the first network) and optimize the network whose parameters are not fixed (which may be referred to as the second network); then fix the parameters of the second network and improve the first network. This iteration is carried out continuously until the loss function values of the first and second discrimination networks converge, at which point the generation network may be determined as the image generation model. It should be noted that this iterative process, carried out until the loss function values of the first and second discrimination networks converge, is a back-propagation process.
In some optional implementations of the present embodiment, the training samples may include a plurality of first images obtained by photographing faces under non-frontal uniform light source conditions, second images obtained by photographing faces under frontal uniform light source conditions, and the face attribute information of the faces in the second images. In practice, the first image and the second image captured in the same light source environment have the same shooting angle, the same photographed object and the same object position; therefore, the face attribute information of the face in the first image is identical to that of the face in the corresponding second image.
In some optional implementations of the present embodiment, after the preset training samples and the pre-established generative adversarial network are acquired, the training unit may train the image generation model through the following training steps:
In the first step, the parameters of the generation network are fixed, the first image is used as the input of the generation network, and the image output by the generation network is input to a pre-trained face attribute recognition model to obtain the face attribute information of the face to be recognized.

In the second step, the image output by the generation network and the second image are used as inputs of the first discrimination network, the face attribute information of the face to be recognized and the face attribute information of the face in the second image are used as inputs of the second discrimination network, and the first and second discrimination networks are trained using a machine learning method. It should be noted that since the image output by the generation network is generated data while the second image is known to be real data, a label indicating whether an image input to the first discrimination network is generated data or real data can be produced automatically.

In the third step, the parameters of the trained first and second discrimination networks are fixed, the first image is used as the input of the generation network, and the generation network is trained using a machine learning method, a back-propagation algorithm and a gradient descent algorithm. In practice, the back-propagation algorithm and the gradient descent algorithm are well-known techniques that are widely studied and applied at present, and details are not described herein.

In the fourth step, the loss function values of the trained first and second discrimination networks are determined, and in response to determining that the loss function values converge, the generation network is determined as the image generation model.
In some optional implementations of the present embodiment, in response to determining that the loss function values do not converge, the training unit may re-execute the above training steps using the trained generation network, first discrimination network and second discrimination network. In this way, the parameters of the image generation model obtained by training the generative adversarial network are based not only on the training samples but are also determined through the back-propagation of the first and second discrimination networks, so that the generation model can be trained without relying on a large number of labeled samples. This reduces labor cost and further improves the flexibility of image processing.
In some optional implementations of the present embodiment, the face attribute recognition apparatus 400 may further include a third acquisition unit (not shown), a setting unit (not shown), a third input unit (not shown) and a composition unit (not shown).
In the first step, the third acquisition unit may obtain a pre-established three-dimensional face model. Here, the three-dimensional face model may be pre-established by a technician using any of various existing three-dimensional model design tools. Such a tool can support setting different types of light sources to render the established three-dimensional face model, and can support functions such as projective transformation from the three-dimensional model to a two-dimensional image; details are not described herein again.

In the second step, the setting unit may respectively set different light source parameters to render the three-dimensional face model, obtaining a first image and a second image with different illumination parameters, where the light source parameters of the first image are parameters under non-frontal uniform light source conditions and the light source parameters of the second image are parameters under frontal uniform light source conditions. In practice, light sources can be arranged at various angles relative to the three-dimensional face model, such as the top, bottom, rear, side and front, and the light sources can be of various types such as point light sources and area light sources. Here, since the three-dimensional model design tool supports projective transformation, the first image and the second image can be obtained directly with the tool. In addition, the first image and the second image may be set to have the same viewing angle relative to the three-dimensional face model.

In the third step, the third input unit may input the second image to the pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image. It should be noted that the face attribute recognition model used in this step is the same model as the one used to obtain the face attribute information of the face in the processed image, of the face in the image input to the generation network, and of the face to be recognized; the operating method of this step is essentially identical to the corresponding operating methods described above, and details are not described herein.

In the fourth step, the composition unit may compose the first image, the second image, and the face attribute information of the face in the second image into a training sample. Compared with directly collecting real images with a camera, establishing training samples with a three-dimensional face model makes it possible to generate more samples flexibly and quickly; moreover, various angles and various types of illumination conditions can be simulated, making the training sample data richer and the coverage wider.
In the apparatus provided by the above embodiment of the application, the first input unit 402 inputs the image to be processed acquired by the first acquisition unit 401 to a pre-trained image generation model to obtain an optimized image in which the light of the image to be processed has been adjusted, and the second input unit 403 then inputs the optimized image to a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image. Thus, for an image captured under poor lighting conditions (e.g., backlight or sidelight), the attribute information of the face can be determined accurately, improving the accuracy of face attribute recognition.
Referring now to Fig. 5, it shows a structural schematic diagram of a computer system 500 of an electronic device suitable for implementing the embodiments of the application. The electronic device shown in Fig. 5 is only an example and should not impose any restriction on the function or scope of use of the embodiments of the application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

The I/O interface 505 is connected to the following components: an input portion 506 including a touch screen, a touch pad, etc.; an output portion 507 including a liquid crystal display (LCD), a loudspeaker, etc.; a storage portion 508 including a hard disk, etc.; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A detachable medium 511, such as a semiconductor memory, is mounted on the driver 510 as needed, so that a computer program read therefrom is installed into the storage portion 508 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program including program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 509 and/or installed from the detachable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-mentioned functions defined in the method of the application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In this application, a computer-readable storage medium may be any tangible medium containing or storing a program, where the program may be used by or in connection with an instruction execution system, apparatus or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: wireless, electric wire, optical cable, RF, etc., or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that shown in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising a first acquisition unit, a first input unit, and a second input unit. The names of these units do not in all cases limit the units themselves; for example, the first acquisition unit may also be described as "a unit for acquiring an image to be processed".
As another aspect, the present application further provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist independently without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire an image to be processed, wherein the image to be processed is an image of a human face captured under a non-frontal uniform light source condition; input the image to be processed into a pre-trained image generation model to obtain an optimized image resulting from light adjustment of the image to be processed, wherein the optimized image is the face image as presented under a frontal uniform light source condition, and the image generation model is used for performing light adjustment on an image captured under a non-frontal uniform light source condition to generate an image under a frontal uniform light source condition; and input the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, wherein the face attribute recognition model is used for recognizing the face attributes of a face in an image to obtain face attribute information.
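The two-stage pipeline that the program carries out (light normalization with the image generation model, then attribute recognition on the optimized image) can be sketched as follows. The `recognize_face_attributes` helper and the two toy model callables are illustrative stand-ins introduced here for the sketch, not the application's actual implementation.

```python
import numpy as np

def recognize_face_attributes(image, image_generation_model, face_attribute_model):
    """Two-stage pipeline: normalize lighting, then recognize attributes.

    Both models are assumed to be callables mapping an image array to,
    respectively, a light-adjusted image and an attribute dictionary.
    """
    # Stage 1: adjust lighting toward a frontal uniform light source.
    optimized_image = image_generation_model(image)
    # Stage 2: recognize face attributes on the optimized image.
    return face_attribute_model(optimized_image)

# Toy stand-ins for the two pre-trained models (illustrative only).
fake_generator = lambda img: np.clip(img * 1.2, 0.0, 1.0)    # brighten
fake_recognizer = lambda img: {"mean_intensity": float(img.mean())}

image = np.full((4, 4), 0.5)                                 # dim "face"
attrs = recognize_face_attributes(image, fake_generator, fake_recognizer)
```

In a deployment, the two stand-ins would be replaced by the pre-trained image generation model and face attribute recognition model described in the claims.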
The above description covers only the preferred embodiments of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (14)

1. A face attribute recognition method, comprising:
acquiring an image to be processed, wherein the image to be processed is an image of a human face captured under a non-frontal uniform light source condition;
inputting the image to be processed into a pre-trained image generation model to obtain an optimized image resulting from light adjustment of the image to be processed, wherein the optimized image is the face image as presented under a frontal uniform light source condition, and the image generation model is used for performing light adjustment on an image captured under a non-frontal uniform light source condition to generate an image under a frontal uniform light source condition;
inputting the optimized image into a pre-trained face attribute recognition model to obtain face attribute information of the face in the optimized image, wherein the face attribute recognition model is used for recognizing the face attributes of a face in an image to obtain face attribute information;
wherein the image generation model is trained as follows:
acquiring a preset training sample and a pre-established generative adversarial network, wherein the generative adversarial network comprises a generation network, a first discrimination network, and a second discrimination network; the generation network is used for performing light adjustment on an input image and outputting the adjusted image; the first discrimination network is used for determining whether an input image was output by the generation network; and the second discrimination network is used for determining whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input into the generation network;
training the generation network, the first discrimination network, and the second discrimination network using a machine learning method, and determining the trained generation network as the image generation model.
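The three-network generative adversarial setup of claim 1 can be sketched structurally as below. All shapes, the sigmoid/similarity scores, and the toy attribute extractor are illustrative assumptions for the sketch; the patented networks would be trained convolutional models rather than these stubs.

```python
import numpy as np

rng = np.random.default_rng(0)

def generation_network(image):
    # Toy "light adjustment": pull pixel values toward mid-gray.
    return 0.5 * image + 0.25

def first_discrimination_network(image):
    # Returns a probability that `image` was produced by the generation
    # network (a real discriminator would be a trained CNN; this is a stub).
    return 1.0 / (1.0 + np.exp(-(image.mean() - 0.5)))

def second_discrimination_network(generated_attrs, input_attrs):
    # Returns a score for whether the face attributes of the generated
    # image match those of the input image (stub: vector similarity).
    dist = np.linalg.norm(np.asarray(generated_attrs) - np.asarray(input_attrs))
    return float(np.exp(-dist))

def toy_attributes(image):
    # Stand-in for the face attribute recognition model.
    return [image.mean(), image.std()]

# A dim, toy "non-frontally lit" face patch.
non_frontal = rng.uniform(0.0, 0.4, size=(8, 8))
generated = generation_network(non_frontal)
p_generated = first_discrimination_network(generated)
p_attr_match = second_discrimination_network(
    toy_attributes(generated), toy_attributes(non_frontal))
```

The design intent, per the claim, is that the first discriminator pushes the generator toward realistic frontally lit images while the second discriminator pushes it to preserve the input face's attributes.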
2. The method according to claim 1, wherein the face attribute information of the face in the image output by the generation network is obtained by inputting the image output by the generation network into a pre-trained face attribute recognition model, and the face attribute information of the face in the image input into the generation network is obtained in advance.
3. The method according to claim 2, wherein the training sample comprises a plurality of first images obtained by photographing a human face under a non-frontal uniform light source condition, second images obtained by photographing the face under a frontal uniform light source condition, and the face attribute information of the face in the second images.
4. The method according to claim 3, wherein the training of the generation network, the first discrimination network, and the second discrimination network using a machine learning method, and the determining of the trained generation network as the image generation model, comprise:
performing the following training steps: fixing the parameters of the generation network, taking the first image as the input of the generation network, and inputting the image output by the generation network into the pre-trained face attribute recognition model to obtain the face attribute information of the face to be recognized; taking the image output by the generation network and the second image as the input of the first discrimination network, taking the face attribute information of the face to be recognized and the face attribute information of the face in the second image as the input of the second discrimination network, and training the first discrimination network and the second discrimination network using a machine learning method; fixing the parameters of the trained first discrimination network and second discrimination network, taking the first image as the input of the generation network, and training the generation network using a machine learning method, a back-propagation method, and a gradient descent algorithm; determining the loss function values of the trained first discrimination network and second discrimination network, and, in response to determining that the loss function values converge, determining the trained generation network as the image generation model.
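The alternating procedure in claim 4 (train the two discriminators with the generator frozen, then train the generator with the discriminators frozen, and stop when the discriminator loss converges) might be organized as follows. Every class, loss, and `train_step` signature here is a toy stand-in invented for illustration, not the patent's implementation.

```python
def train_gan(generator, discriminators, first_images, second_images,
              attribute_model, max_rounds=100, tol=1e-3):
    """Alternating adversarial training as described in the claim.

    `generator` and `discriminators` are assumed to expose a toy
    `train_step` returning a scalar loss; real networks would update
    their parameters here by backpropagation and gradient descent.
    """
    prev_loss = float("inf")
    for _ in range(max_rounds):
        # Fix the generator; run the first images through it and recognize
        # the attributes of the generated faces with the attribute model.
        generated = [generator(img) for img in first_images]
        attrs = [attribute_model(img) for img in generated]
        # Train the first and second discrimination networks on the
        # (generated image, second image) and attribute pairs.
        loss_d = discriminators.train_step(generated, second_images, attrs)
        # Fix the discriminators; train the generation network.
        generator.train_step(first_images, discriminators)
        # Stop once the discriminator loss has converged.
        if abs(prev_loss - loss_d) < tol:
            break
        prev_loss = loss_d
    # The trained generation network is the image generation model.
    return generator


class ToyGenerator:
    def __call__(self, image):
        return image                 # identity "light adjustment" stub

    def train_step(self, images, discriminators):
        return 0.0                   # a real step would descend a gradient


class ToyDiscriminators:
    def __init__(self):
        self.loss = 1.0

    def train_step(self, generated, real, attrs):
        self.loss *= 0.5             # pretend the loss halves each round
        return self.loss


model = train_gan(ToyGenerator(), ToyDiscriminators(),
                  first_images=[0.1, 0.3], second_images=[0.2, 0.4],
                  attribute_model=lambda x: [x])
```

Claim 5's branch corresponds to the loop continuing (re-executing the training steps) whenever the convergence test fails.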
5. The method according to claim 4, wherein the training of the generation network, the first discrimination network, and the second discrimination network using a machine learning method, and the determining of the trained generation network as the image generation model, further comprise:
in response to determining that the loss function values do not converge, re-executing the training steps using the trained generation network, first discrimination network, and second discrimination network.
6. The method according to any one of claims 3-5, wherein the training sample is generated as follows:
acquiring a pre-established three-dimensional face model;
setting different light source parameters respectively to render the three-dimensional face model, to obtain a first image and a second image with different light source parameters, wherein the light source parameters of the first image are parameters under a non-frontal uniform light source condition, and the light source parameters of the second image are parameters under a frontal uniform light source condition;
inputting the second image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image;
composing the training sample from the first image, the second image, and the face attribute information of the face in the second image.
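The sample-generation step of claim 6 (render one 3D face model under two light-source settings, then label the frontally lit render with a pre-trained attribute model) could be sketched with a toy Lambertian shader. The per-pixel normals, light directions, and one-line attribute model below are illustrative assumptions, not the patent's renderer.

```python
import numpy as np

def render(normals, light_dir):
    """Toy Lambertian renderer: intensity = clamp(n . l) per pixel."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)

# A tiny "3D face model": per-pixel unit normals on a 2x2 grid (toy data).
normals = np.array([[[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]],
                    [[0.0, 0.6, 0.8], [0.0, 0.0, 1.0]]])

# Non-frontal light (from the side) vs. frontal uniform light (along +z).
first_image = render(normals, light_dir=[1.0, 0.0, 0.2])
second_image = render(normals, light_dir=[0.0, 0.0, 1.0])

# Stand-in for the pre-trained face attribute recognition model.
attribute_model = lambda img: {"brightness": float(img.mean())}

# A training sample: (first image, second image, attributes of second image).
sample = (first_image, second_image, attribute_model(second_image))
```

Rendering the same model under both lighting conditions yields perfectly aligned image pairs, which is what makes the synthetic samples usable for supervising the light-adjustment generator.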
7. A face attribute recognition apparatus, comprising:
a first acquisition unit, configured to acquire an image to be processed, wherein the image to be processed is an image of a human face captured under a non-frontal uniform light source condition;
a first input unit, configured to input the image to be processed into a pre-trained image generation model to obtain an optimized image resulting from light adjustment of the image to be processed, wherein the optimized image is the face image as presented under a frontal uniform light source condition, and the image generation model is used for performing light adjustment on an image captured under a non-frontal uniform light source condition to generate an image under a frontal uniform light source condition;
a second input unit, configured to input the optimized image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the optimized image, wherein the face attribute recognition model is used for recognizing the face attributes of a face in an image to obtain face attribute information;
wherein the apparatus further comprises:
a second acquisition unit, configured to acquire a preset training sample and a pre-established generative adversarial network, wherein the generative adversarial network comprises a generation network, a first discrimination network, and a second discrimination network; the generation network is used for performing light adjustment on an input image and outputting the adjusted image; the first discrimination network is used for determining whether an input image was output by the generation network; and the second discrimination network is used for determining whether the face attribute information of the face in the image output by the generation network matches the face attribute information of the face in the image input into the generation network;
a training unit, configured to train the generation network, the first discrimination network, and the second discrimination network using a machine learning method, and to determine the trained generation network as the image generation model.
8. The apparatus according to claim 7, wherein the face attribute information of the face in the image output by the generation network is obtained by inputting the image output by the generation network into a pre-trained face attribute recognition model, and the face attribute information of the face in the image input into the generation network is obtained in advance.
9. The apparatus according to claim 8, wherein the training sample comprises a plurality of first images obtained by photographing a human face under a non-frontal uniform light source condition, second images obtained by photographing the face under a frontal uniform light source condition, and the face attribute information of the face in the second images.
10. The apparatus according to claim 9, wherein the training unit is further configured to:
perform the following training steps: fix the parameters of the generation network, take the first image as the input of the generation network, and input the image output by the generation network into the pre-trained face attribute recognition model to obtain the face attribute information of the face to be recognized; take the image output by the generation network and the second image as the input of the first discrimination network, take the face attribute information of the face to be recognized and the face attribute information of the face in the second image as the input of the second discrimination network, and train the first discrimination network and the second discrimination network using a machine learning method; fix the parameters of the trained first discrimination network and second discrimination network, take the first image as the input of the generation network, and train the generation network using a machine learning method, a back-propagation method, and a gradient descent algorithm; determine the loss function values of the trained first discrimination network and second discrimination network, and, in response to determining that the loss function values converge, determine the trained generation network as the image generation model.
11. The apparatus according to claim 10, wherein the training unit is further configured to:
in response to determining that the loss function values do not converge, re-execute the training steps using the trained generation network, first discrimination network, and second discrimination network.
12. The apparatus according to any one of claims 9-11, wherein the apparatus further comprises:
a third acquisition unit, configured to acquire a pre-established three-dimensional face model;
a setting unit, configured to set different light source parameters respectively to render the three-dimensional face model, to obtain a first image and a second image with different light source parameters, wherein the light source parameters of the first image are parameters under a non-frontal uniform light source condition, and the light source parameters of the second image are parameters under a frontal uniform light source condition;
a third input unit, configured to input the second image into a pre-trained face attribute recognition model to obtain the face attribute information of the face in the second image;
a composition unit, configured to compose the training sample from the first image, the second image, and the face attribute information of the face in the second image.
13. An electronic device, comprising:
one or more processors; and
a storage apparatus for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-6.
14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
CN201810044637.9A 2018-01-17 2018-01-17 Face character recognition methods and device Active CN108133201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810044637.9A CN108133201B (en) 2018-01-17 2018-01-17 Face character recognition methods and device


Publications (2)

Publication Number Publication Date
CN108133201A CN108133201A (en) 2018-06-08
CN108133201B true CN108133201B (en) 2019-10-25

Family

ID=62399979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810044637.9A Active CN108133201B (en) 2018-01-17 2018-01-17 Face character recognition methods and device

Country Status (1)

Country Link
CN (1) CN108133201B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280413A (en) * 2018-01-17 2018-07-13 百度在线网络技术(北京)有限公司 Face identification method and device

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN108364029A (en) * 2018-03-19 2018-08-03 百度在线网络技术(北京)有限公司 Method and apparatus for generating model
CN108921792B (en) * 2018-07-03 2023-06-27 北京字节跳动网络技术有限公司 Method and device for processing pictures
CN109102029B (en) * 2018-08-23 2023-04-07 重庆科技学院 Method for evaluating quality of synthesized face sample by using information maximization generation confrontation network model
WO2020037680A1 (en) * 2018-08-24 2020-02-27 太平洋未来科技(深圳)有限公司 Light-based three-dimensional face optimization method and apparatus, and electronic device
CN109377535A (en) * 2018-10-24 2019-02-22 电子科技大学 Facial attribute automatic edition system, method, storage medium and terminal
CN109754447B (en) 2018-12-28 2021-06-22 上海联影智能医疗科技有限公司 Image generation method, device, equipment and storage medium
CN110009059B (en) * 2019-04-16 2022-03-29 北京字节跳动网络技术有限公司 Method and apparatus for generating a model
CN110443244B (en) * 2019-08-12 2023-12-05 深圳市捷顺科技实业股份有限公司 Graphics processing method and related device
CN111369468B (en) * 2020-03-09 2022-02-01 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111369429B (en) * 2020-03-09 2021-04-30 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111524216B (en) * 2020-04-10 2023-06-27 北京百度网讯科技有限公司 Method and device for generating three-dimensional face data

Citations (1)

Publication number Priority date Publication date Assignee Title
CN107491771A (en) * 2017-09-21 2017-12-19 百度在线网络技术(北京)有限公司 Method for detecting human face and device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR101385599B1 (en) * 2012-09-26 2014-04-16 한국과학기술연구원 Method and apparatus for interfering montage
CN107423701B (en) * 2017-07-17 2020-09-01 智慧眼科技股份有限公司 Face unsupervised feature learning method and device based on generative confrontation network

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN107491771A (en) * 2017-09-21 2017-12-19 百度在线网络技术(北京)有限公司 Method for detecting human face and device

Non-Patent Citations (2)

Title
Image De-raining Using a Conditional Generative Adversarial Network;He Zhang et al;《Eprint ArXiv:1701.05957》;20170131;1-12 *
Face Recognition Development Based on Generative Adversarial Networks; Zhang Wei et al.; 《电子世界》 (Electronics World); 20171031; vol. 2017, no. 20; 164-165 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN108280413A (en) * 2018-01-17 2018-07-13 百度在线网络技术(北京)有限公司 Face identification method and device
CN108280413B (en) * 2018-01-17 2022-04-19 百度在线网络技术(北京)有限公司 Face recognition method and device

Also Published As

Publication number Publication date
CN108133201A (en) 2018-06-08

Similar Documents

Publication Publication Date Title
CN108133201B (en) Face character recognition methods and device
CN108154547B (en) Image generating method and device
CN108171206B (en) Information generating method and device
CN108280413A (en) Face identification method and device
US11487995B2 (en) Method and apparatus for determining image quality
CN109214343B (en) Method and device for generating face key point detection model
CN108898185A (en) Method and apparatus for generating image recognition model
US20200193591A1 (en) Methods and systems for generating 3d datasets to train deep learning networks for measurements estimation
CN109409994A (en) The methods, devices and systems of analog subscriber garments worn ornaments
CN110503703A (en) Method and apparatus for generating image
CN107491771A (en) Method for detecting human face and device
CN108363995A (en) Method and apparatus for generating data
CN108491809A (en) The method and apparatus for generating model for generating near-infrared image
CN108388878A (en) The method and apparatus of face for identification
CN108509892A (en) Method and apparatus for generating near-infrared image
CN109344752A (en) Method and apparatus for handling mouth image
CN108985257A (en) Method and apparatus for generating information
CN108416326A (en) Face identification method and device
CN109410253B (en) For generating method, apparatus, electronic equipment and the computer-readable medium of information
CN109101919A (en) Method and apparatus for generating information
CN108510454A (en) Method and apparatus for generating depth image
CN109241934A (en) Method and apparatus for generating information
CN108460366A (en) Identity identifying method and device
CN109697749A (en) A kind of method and apparatus for three-dimensional modeling
CN108960110A (en) Method and apparatus for generating information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant