CN108229296A - Face skin attribute recognition method and apparatus, electronic device, and storage medium - Google Patents

Face skin attribute recognition method and apparatus, electronic device, and storage medium

Info

Publication number
CN108229296A
CN108229296A (application CN201710927454.7A)
Authority
CN
China
Prior art keywords
feature
convolutional layer
neural network
face
skin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710927454.7A
Other languages
Chinese (zh)
Other versions
CN108229296B (en)
Inventor
罗思伟
张展鹏
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN201710927454.7A
Publication of CN108229296A
Application granted
Publication of CN108229296B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks

Abstract

Embodiments of the present invention disclose a face skin attribute recognition method and apparatus, an electronic device, and a computer storage medium. The method includes: performing feature extraction on a face image in an image to be recognized through each convolutional layer of a neural network; fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature; and predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes. By fusing the features extracted by at least one shallower convolutional layer with the feature extracted by the last convolutional layer, the above embodiments obtain both the shallow and the deep features of the neural network and thereby achieve a comprehensive judgment of skin attributes; skin attributes of the face image are predicted based on the fused feature, so that different skin attributes of face skin can be predicted.

Description

Face skin attribute recognition method and apparatus, electronic device, and storage medium
Technical field
The present invention relates to the technical field of computer vision, and in particular to a face attribute recognition method and apparatus, an electronic device, and a computer storage medium.
Background art
In recent years, deep learning has achieved good results in every field of image processing, such as image classification and image segmentation. Recognition of face skin attributes is of great importance in entertainment applications such as mobile-phone beautification apps and live video streaming. Face skin attributes generally include, but are not limited to, the skin texture and the skin color of a face. Based on the skin attributes, a beautification application can automatically select the degree of beautification, avoiding situations where the beautification is insufficient or excessive.
Summary of the invention
Embodiments of the present invention provide a technical solution for face skin attribute recognition.
A face skin attribute recognition method provided by an embodiment of the present invention includes:
performing feature extraction on a face image in an image to be recognized through each convolutional layer of a neural network;
fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature, the shallower convolutional layers being the convolutional layers of the neural network other than the last convolutional layer; and
predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes.
In another embodiment based on the above method of the present invention, fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain the fused feature includes:
outputting at least one feature from each shallower convolutional layer of the neural network, and performing a scale transformation on the at least one feature output by the shallower convolutional layer to obtain features whose scale matches the scale of the feature output by the last convolutional layer; and
stacking the features of identical scale to obtain the fused feature.
In another embodiment based on the above method of the present invention, performing the scale transformation on the at least one feature output by the shallower convolutional layer to obtain features whose scale matches that of the feature output by the last convolutional layer includes:
applying pooling operations to the at least one feature output by the shallower convolutional layer to obtain features with the same scale as the feature output by the last convolutional layer.
In another embodiment based on the above method of the present invention, applying the pooling operations to the at least one feature output by the shallower convolutional layer includes:
applying pooling operations, layer by layer, to the features extracted by each shallower convolutional layer according to a strategy of alternating average pooling and max pooling.
In another embodiment based on the above method of the present invention, stacking the features of identical scale to obtain the fused feature includes:
stacking the features of identical scale one by one along the channel axis to obtain the fused feature, where the dimension of the fused feature corresponds to the sum of the output channels of the convolutional layers involved.
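In symbols, this fusion step can be written as follows (a minimal formalization of the text above; the symbols $f_i$, $T_i$, and $C_i$ are introduced here purely for illustration):

$$F \;=\; \big[\,T_1(f_1)\,;\;\dots\,;\;T_{k-1}(f_{k-1})\,;\;f_k\,\big],\qquad \operatorname{channels}(F)\;=\;\sum_{i=1}^{k} C_i,$$

where $f_i$ denotes the feature output by the $i$-th convolutional layer with $C_i$ channels, $T_i$ is the pooling-based scale transformation that resizes $f_i$ to the spatial size of the last-layer feature $f_k$, and $[\,\cdot\,;\,\cdot\,]$ denotes stacking along the channel axis.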
In another embodiment based on the above method of the present invention, predicting the skin attributes of the face image based on the fused feature includes:
predicting the skin attributes of the face image based on the fused feature through a fully connected layer of the neural network;
before predicting the skin attributes of the face image based on the fused feature, the method further includes:
reducing the dimensionality of the fused feature through a dimension-reduction convolutional layer.
In another embodiment based on the above method of the present invention, the number of convolution kernels in the dimension-reduction convolutional layer is smaller than a preset value and the kernel size is 1;
reducing the dimensionality of the fused feature through the dimension-reduction convolutional layer includes: performing a convolution operation on the fused feature using the dimension-reduction convolutional layer whose number of kernels is smaller than the preset value, to obtain a fused feature map whose dimension equals the number of convolution kernels.
In another embodiment based on the above method of the present invention, before performing feature extraction on the face image in the image to be recognized through each convolutional layer of the neural network, the method further includes:
performing face detection on the image to be recognized to obtain the face image, and extracting the face image from the image to be recognized.
In another embodiment based on the above method of the present invention, performing face detection on the image to be recognized includes:
obtaining, using a face detection network, at least one face position feature from the image to be recognized and a face confidence threshold corresponding to the image to be recognized, the face position feature including a face bounding rectangle and a face confidence;
obtaining, based on the acquired face position features, the face bounding rectangles whose face confidence exceeds the face confidence threshold; and
performing face keypoint detection on the face bounding rectangle with a face keypoint network to obtain face keypoints, and obtaining the face image from the face region based on the face keypoints.
In another embodiment based on the above method of the present invention, the face position feature further includes a face angle;
before performing face keypoint detection on the face bounding rectangle with the face keypoint network, the method further includes: adjusting the face bounding rectangle based on the face angle to obtain an upright face bounding rectangle.
In another embodiment based on the above method of the present invention, the prediction labels of the skin attributes include any one or more of the following:
skin texture, skin color, and skin brightness.
In another embodiment based on the above method of the present invention, the method further includes:
performing a beautification operation on the face image based on the prediction labels of the skin attributes.
In another embodiment based on the above method of the present invention, the method further includes:
taking the image to be recognized as a sample image and training the neural network by:
performing feature extraction on the face image in the sample image through each convolutional layer of the neural network, the sample image being annotated with at least one known ground-truth label;
fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature, the shallower convolutional layers being the convolutional layers of the neural network other than the last convolutional layer;
predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes; and
training the neural network based on the obtained prediction labels and the known ground-truth labels.
In another embodiment based on the above method of the present invention, training the neural network based on the obtained prediction labels and the known ground-truth labels includes:
computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function; and
updating the parameters of each convolutional layer of the neural network by back-propagating gradients based on the error value.
In another embodiment based on the above method of the present invention, the method further includes:
taking the neural network with updated parameters as the neural network and iterating: performing feature extraction on the face image in the sample image through each convolutional layer of the neural network, the sample image being annotated with at least one known ground-truth label; fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature; predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes; computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function; and updating the parameters of each convolutional layer of the neural network by back-propagating gradients based on the error value; until the neural network satisfies a preset condition.
In another embodiment based on the above method of the present invention, the preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value falls below a preset value.
In another embodiment based on the above method of the present invention, training the neural network based on the obtained prediction labels and the known ground-truth labels includes:
computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function;
propagating the error value directly back to the at least one shallower convolutional layer and the last convolutional layer that produce the fused feature in the neural network, and propagating the error value back to each convolutional layer of the neural network by the back-propagation gradient algorithm; and
updating the parameters of each convolutional layer based on the error value propagated to that layer.
In another embodiment based on the above method of the present invention, the method further includes:
taking the neural network with updated parameters as the neural network and iterating: performing feature extraction on the face image in the sample image through each convolutional layer of the neural network, the sample image being annotated with at least one known ground-truth label; fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature; predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes; computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function; propagating the error value directly back to the at least one shallower convolutional layer and the last convolutional layer that produce the fused feature, and propagating the error value back to each convolutional layer of the neural network by the back-propagation gradient algorithm; and updating the parameters of each convolutional layer based on the error value propagated to that layer; until the neural network satisfies a preset condition.
In another embodiment based on the above method of the present invention, the preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value falls below a preset value.
According to one aspect of the embodiments of the present invention, a face skin attribute recognition apparatus is provided, including:
a feature extraction unit, configured to perform feature extraction on a face image in an image to be recognized through each convolutional layer of a neural network;
a feature fusion unit, configured to fuse the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature, the shallower convolutional layers being the convolutional layers of the neural network other than the last convolutional layer; and
an attribute prediction unit, configured to predict skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes.
In another embodiment based on the above apparatus of the present invention, the feature fusion unit includes:
a scale transformation module, configured to output at least one feature from each shallower convolutional layer of the neural network and perform a scale transformation on the at least one feature output by the shallower convolutional layer to obtain features whose scale matches that of the feature output by the last convolutional layer; and
a feature stacking module, configured to stack the features of identical scale to obtain the fused feature.
In another embodiment based on the above apparatus of the present invention, the scale transformation module is specifically configured to apply pooling operations to the at least one feature output by the shallower convolutional layer to obtain features with the same scale as the feature output by the last convolutional layer.
In another embodiment based on the above apparatus of the present invention, the pooling operations performed by the scale transformation module include:
applying pooling operations, layer by layer, to the features extracted by each shallower convolutional layer according to a strategy of alternating average pooling and max pooling.
In another embodiment based on the above apparatus of the present invention, the feature stacking module is specifically configured to stack the features of identical scale one by one along the channel axis to obtain the fused feature, the dimension of the fused feature corresponding to the sum of the output channels of the convolutional layers involved.
In another embodiment based on the above apparatus of the present invention, the attribute prediction unit is specifically configured to predict the skin attributes of the face image based on the fused feature through a fully connected layer of the neural network;
the face skin attribute recognition apparatus further includes:
a dimension-reduction unit, configured to reduce the dimensionality of the fused feature through a dimension-reduction convolutional layer.
In another embodiment based on the above apparatus of the present invention, the number of convolution kernels in the dimension-reduction convolutional layer is smaller than a preset value and the kernel size is 1;
the dimension-reduction unit is specifically configured to perform a convolution operation on the fused feature using the dimension-reduction convolutional layer whose number of kernels is smaller than the preset value, to obtain a fused feature map whose dimension equals the number of convolution kernels.
In another embodiment based on the above apparatus of the present invention, the apparatus further includes:
a face recognition unit, configured to perform face detection on the image to be recognized to obtain the face image and extract the face image from the image to be recognized.
In another embodiment based on the above apparatus of the present invention, the face recognition unit includes:
a position acquisition module, configured to obtain, using a face detection network, at least one face position feature from the image to be recognized and a face confidence threshold corresponding to the image to be recognized, the face position feature including a face bounding rectangle and a face confidence;
a position determination module, configured to obtain, based on the acquired face position features, the face bounding rectangles whose face confidence exceeds the face confidence threshold; and
a face acquisition module, configured to perform face keypoint detection on the face bounding rectangle with a face keypoint network to obtain face keypoints, and obtain the face image from the face region based on the face keypoints.
In another embodiment based on the above apparatus of the present invention, the face position feature further includes a face angle;
the face skin attribute recognition apparatus further includes: an angle adjustment unit, configured to adjust the face bounding rectangle based on the face angle to obtain an upright face bounding rectangle.
In another embodiment based on the above apparatus of the present invention, the prediction labels of the skin attributes include any one or more of the following:
skin texture, skin color, and skin brightness.
In another embodiment based on the above apparatus of the present invention, the apparatus further includes:
a beautification unit, configured to perform a beautification operation on the face image based on the prediction labels of the skin attributes.
In another embodiment based on the above apparatus of the present invention, the apparatus further includes:
a sample prediction unit, configured to take the image to be recognized as a sample image and obtain, based on the feature extraction unit, the feature fusion unit, and the attribute prediction unit, the prediction labels of the skin attributes of the corresponding sample image, the sample image being annotated with at least one known ground-truth label; and
a network training unit, configured to train the neural network based on the obtained prediction labels and the known ground-truth labels.
In another embodiment based on the above apparatus of the present invention, the network training unit includes:
an error computation module, configured to compute an error value from the obtained prediction labels and the known ground-truth labels through a loss function; and
a parameter update module, configured to update the parameters of each convolutional layer of the neural network by back-propagating gradients based on the error value.
In another embodiment based on the above apparatus of the present invention, the network training unit further includes:
an iterative update module, configured to take the neural network with updated parameters as the neural network, iteratively obtain, based on the feature extraction unit, the feature fusion unit, and the attribute prediction unit, the prediction labels of the skin attributes of the corresponding sample image, and update the parameters of each convolutional layer based on the error computation module and the parameter update module, until the neural network satisfies a preset condition.
In another embodiment based on the above apparatus of the present invention, the preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value falls below a preset value.
In another embodiment based on the above apparatus of the present invention, the network training unit includes:
an error computation module, configured to compute an error value from the obtained prediction labels and the known ground-truth labels through a loss function;
an error propagation module, configured to propagate the error value directly back to the at least one shallower convolutional layer and the last convolutional layer that produce the fused feature in the neural network, and propagate the error value back to each convolutional layer of the neural network by the back-propagation gradient algorithm; and
a parameter update module, configured to update the parameters of each convolutional layer based on the error value propagated to that layer.
In another embodiment based on the above apparatus of the present invention, the network training unit further includes:
an iterative update module, configured to take the neural network with updated parameters as the neural network, iteratively obtain, based on the feature extraction unit, the feature fusion unit, and the attribute prediction unit, the prediction labels of the skin attributes of the corresponding sample image, and update the parameters of each convolutional layer based on the error computation module, the error propagation module, and the parameter update module, until the neural network satisfies a preset condition.
In another embodiment based on the above apparatus of the present invention, the preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value falls below a preset value.
According to one aspect of the embodiments of the present invention, an electronic device is provided, including a processor, the processor including the face skin attribute recognition apparatus described above.
According to one aspect of the embodiments of the present invention, an electronic device is provided, including: a memory configured to store executable instructions;
and a processor configured to communicate with the memory to execute the executable instructions so as to complete the operations of the face skin attribute recognition method described above.
According to one aspect of the embodiments of the present invention, a computer storage medium is provided, configured to store computer-readable instructions which, when executed, perform the operations of the face skin attribute recognition method described above.
With the face skin attribute recognition method and apparatus, electronic device, and computer storage medium provided by the above embodiments of the present invention, feature extraction is performed on the face image in the image to be recognized through each convolutional layer of the neural network; the features extracted by at least one shallower convolutional layer of the neural network are fused with the feature extracted by the last convolutional layer to obtain a fused feature; both the shallow and the deep features of the neural network can be obtained through the fused feature, and a comprehensive judgment of skin attributes is achieved by using the features extracted by the shallower convolutional layers together with the feature extracted by the last convolutional layer. This overcomes the detail loss and the resulting inaccurate judgment of skin attributes that occur in the prior art when only the feature of the last convolutional layer is used; skin attributes of the face image are predicted based on the fused feature, so that different skin attributes of face skin can be predicted.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
Description of the drawings
The drawings, which constitute part of the specification, describe embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
With reference to the drawings, the present invention can be understood more clearly from the following detailed description, in which:
Fig. 1 is a flowchart of one embodiment of the face skin attribute recognition method of the present invention.
Fig. 2 is a schematic structural diagram of a specific example of the above embodiments of the face skin attribute recognition method of the present invention.
Fig. 3 is a schematic structural diagram of one embodiment of the face skin attribute recognition apparatus of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device suitable for implementing a terminal device or a server of an embodiment of the present application.
Detailed description of the embodiments
Various exemplary embodiments of the present invention are now described in detail with reference to the drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention.
It should also be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present invention or its application or use.
Techniques, methods, and apparatus known to a person of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and apparatus should be considered part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Embodiments of the present invention may be applied to a computer system/server, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations suitable for use with the computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments including any of the above systems, and the like.
The computer system/server may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and the like, which perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
In the prior art, the recognition process for face skin attributes includes: after preprocessing the acquired picture (including normalization of picture size and grayscale, head pose correction, and image segmentation), performing feature extraction (including geometric features, statistical features, frequency-domain features, motion features, and the like), and finally performing classification. However, this recognition process requires manual feature design and feature selection, and different feature designs are needed for different illumination conditions and face angle rotations; in other words, the robustness of this method needs improvement, and it is difficult to apply in practice. Typically, for face skin attribute recognition, a neural network model can be established and, based on training data, a skin-texture model and a skin-color model are trained separately; for a given test picture, the face part is cropped out by methods such as face detection and alignment and then fed into the skin-color or skin-texture neural network for face skin attribute recognition. However, with separately trained models, the volume of the models grows linearly as the number of attribute types increases.
Fig. 1 is a flowchart of one embodiment of the face skin attribute recognition method of the present invention. As shown in Fig. 1, the method of this embodiment includes:
Step 101: performing feature extraction on the face image in the image to be recognized through each convolutional layer of the neural network.
Specifically, each convolutional layer of the neural network in turn performs feature extraction on the face image in the image to be recognized.
Step 102: fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature.
The shallower convolutional layers are the convolutional layers of the neural network other than the last convolutional layer. The features extracted by the shallower convolutional layers can reflect skin-texture issues of the face skin (for example, acne and scars), which overcomes the problem in the prior art that a neural network using only the feature output by the last convolutional layer can recognize only part of the skin attributes. By fusing features from multiple layers and judging face skin attributes with the fused feature, the recognition of skin attributes is improved.
Step 103: predicting the skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes.
With the face skin attribute recognition method provided by the above embodiment of the present invention, feature extraction is performed on the face image in the image to be recognized through each convolutional layer of the neural network; the features extracted by at least one shallower convolutional layer are fused with the feature extracted by the last convolutional layer to obtain a fused feature; both the shallow and the deep features of the neural network can be obtained through the fused feature, and a comprehensive judgment of skin attributes is achieved by using the features extracted by the shallower convolutional layers together with the feature extracted by the last convolutional layer. Because multiple intermediate-layer parameters are shared within the neural network, each additional attribute recognition task only requires adding a few task-specific layers at the end of the neural network and does not cause the number of parameters to multiply. The detail loss caused by obtaining only the feature of the last convolutional layer in the prior art, which leads to inaccurate skin attribute judgment, is overcome; skin attributes of the face image are predicted based on the fused feature, so that different skin attributes of face skin can be predicted.
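A minimal sketch of this forward flow, written in PyTorch purely for illustration: the patent does not specify a framework, layer count, channel widths, or head design, so the four-block backbone, the channel widths, and the use of adaptive average pooling (in place of the alternating average/max pooling strategy described later) are all assumptions.

```python
import torch
import torch.nn as nn

class SkinAttributeNet(nn.Module):
    """Illustrative backbone: four conv blocks; the first three play the role of
    the 'shallower' layers, the fourth is the 'last' convolutional layer."""
    def __init__(self, num_labels=3):   # e.g. texture, color, brightness
        super().__init__()
        chans = [3, 16, 32, 64, 128]    # assumed channel widths
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(4)])
        fused_dim = sum(chans[1:])                  # sum of all output channels
        self.reduce = nn.Conv2d(fused_dim, 64, 1)   # 1x1 dimension-reduction conv
        self.fc = nn.Linear(64, num_labels)         # fully connected prediction head

    def forward(self, face):                        # face: (N, 3, H, W) crop
        feats, x = [], face
        for block in self.blocks:                   # step 101: per-layer extraction
            x = block(x)
            feats.append(x)
        target = feats[-1].shape[-2:]               # spatial size of the last conv output
        pooled = [nn.functional.adaptive_avg_pool2d(f, target) for f in feats[:-1]]
        fused = torch.cat(pooled + [feats[-1]], dim=1)   # step 102: channel-axis fusion
        x = self.reduce(fused)                      # dimensionality reduction
        x = x.mean(dim=(2, 3))                      # global pooling before the FC head
        return self.fc(x)                           # step 103: skin-attribute prediction
```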
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, operation 102 includes:
outputting at least one feature from each shallower convolutional layer of the neural network, and performing a scale transformation on the at least one feature output by the shallower convolutional layer to obtain features with the same scale as the feature output by the last convolutional layer; and
stacking the features of identical scale to obtain the fused feature.
In this embodiment, before feature fusion is performed, the features to be fused are first converted to the scale of the feature output by the last convolutional layer, which provides the basis for fusion. A convolutional layer usually outputs at least one feature through at least one channel, each channel outputting one feature, and the features output by a given convolutional layer share the same scale; during stacking, the features are converted to identical scale and stacked channel by channel.
In a specific example of the above embodiments of the face skin attribute recognition method of the present invention, performing the scale transformation on the at least one feature output by the shallower convolutional layer to obtain features with the same scale as the feature output by the last convolutional layer includes:
applying pooling operations to the at least one feature output by the shallower convolutional layer to obtain features with the same scale as the feature output by the last convolutional layer.
In this embodiment, in order to convert the features to be fused to the size of the feature output by the last convolutional layer, pooling operations (including max pooling and/or average pooling, etc.) can be used to convert the features to the required size.
In a specific example of the above embodiments of the face skin attribute recognition method of the present invention, applying the pooling operations to the at least one feature output by the shallower convolutional layer includes: applying pooling operations, layer by layer, to the features extracted by each shallower convolutional layer according to a strategy of alternating average pooling and max pooling.
Fig. 2 is a schematic structural diagram of a specific example of the above embodiments of the face skin attribute recognition method of the present invention. As shown in Fig. 2, average pooling and max pooling are applied to feature map one, average pooling and max pooling are applied to feature map two, max pooling is applied to feature map three, and max pooling is applied to feature map four; through these pooling operations, all the feature maps are adjusted to the same size. Which pooling operation is selected, and in which order the pooling operations are applied, is not limited; in this embodiment, "alternating average pooling and max pooling" means that average pooling and max pooling are used alternately when both are needed for the pooling operations.
In a specific example of the above embodiments of the face skin attribute recognition method of the present invention, stacking the features of identical scale to obtain the fused feature includes:
stacking the features of identical scale one by one along the channel axis to obtain the fused feature, where the dimension of the fused feature corresponds to the sum of the output channels of the convolutional layers.
Each convolutional layer includes at least one channel, and each channel outputs one feature; when the features are stacked along the channel axis, the dimension of the obtained fused feature equals the sum of the numbers of channels output by all the convolutional layers involved.
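A sketch of the scale transformation and the channel-axis stacking described above, under stated assumptions: the tensor sizes, the 2x2 kernel choice, and the particular per-map pooling plan (loosely mirroring the Fig. 2 example) are illustrative and chosen so the example runs end to end.

```python
import torch
import torch.nn.functional as F

def match_scale(feat, ops):
    """Apply a sequence of 2x2 pooling ops ('avg' or 'max'); each op halves the
    spatial size, so the plan is chosen to reach the target resolution."""
    for op in ops:
        pool = F.avg_pool2d if op == "avg" else F.max_pool2d
        feat = pool(feat, kernel_size=2)
    return feat

def fuse_features(feature_maps, pooling_plan):
    """Pool every map per its plan, then stack along the channel axis;
    the fused dimension equals the sum of all channel counts."""
    pooled = [match_scale(f, ops) if ops else f
              for f, ops in zip(feature_maps, pooling_plan)]
    return torch.cat(pooled, dim=1)

# Illustrative usage: feature map one gets average then max pooling, map two
# average then max, map three max, map four max; the spatial sizes are chosen
# so every plan lands on the same 8x8 resolution.
f1 = torch.randn(1, 16, 32, 32)
f2 = torch.randn(1, 32, 32, 32)
f3 = torch.randn(1, 64, 16, 16)
f4 = torch.randn(1, 128, 16, 16)
fused = fuse_features([f1, f2, f3, f4],
                      [["avg", "max"], ["avg", "max"], ["max"], ["max"]])
print(fused.shape)  # torch.Size([1, 240, 8, 8]) -- 16+32+64+128 channels
```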
In another embodiment of the face skin attribute recognition method of the present invention, on the basis of the above embodiments, operation 103 includes:
predicting the skin attributes of the face image based on the fused feature through a fully connected layer of the neural network;
before predicting the skin attributes of the face image based on the fused feature, the method further includes:
reducing the dimensionality of the fused feature through a dimension-reduction convolutional layer.
In this embodiment, since the dimension of the fused feature corresponds to the sum of the numbers of channels output by at least two convolutional layers, the dimension of the fused feature is relatively large; to facilitate subsequent recognition, the dimensionality of the fused feature is reduced by the dimension-reduction convolutional layer.
In a specific example of the above embodiments of the face skin attribute recognition method of the present invention, the number of convolution kernels in the dimension-reduction convolutional layer is smaller than a preset value and the kernel size is 1;
reducing the dimensionality of the fused feature through the dimension-reduction convolutional layer includes: performing a convolution operation on the fused feature using the dimension-reduction convolutional layer whose number of kernels is smaller than the preset value, to obtain a fused feature map whose dimension equals the number of convolution kernels.
In this embodiment, the dimension of the fused feature map after dimensionality reduction is constrained by limiting the number and size of the convolution kernels of the dimension-reduction convolutional layer; dimensionality reduction is achieved by the convolution operation of a convolutional layer whose kernel size is 1, and the resulting feature dimension is determined by the number of kernels.
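A short sketch of this dimension-reduction step; the concrete channel counts (240 in, 64 out) are assumptions carried over from the fusion example above, not values given in the text.

```python
import torch
import torch.nn as nn

# A 240-channel fused feature map is reduced by a convolutional layer whose
# kernel size is 1 and whose kernel count (64) is below the chosen preset value.
reduce = nn.Conv2d(in_channels=240, out_channels=64, kernel_size=1)
fused = torch.randn(1, 240, 8, 8)
reduced = reduce(fused)
print(reduced.shape)  # torch.Size([1, 64, 8, 8]) -- dimension equals the kernel count
```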
In another embodiment of the face skin attribute recognition method of the present invention, on the basis of the above embodiments, before operation 101, the method further includes:
performing face detection on the image to be recognized to obtain the face image, and extracting the face image from the image to be recognized.
In this embodiment, since the face skin attribute recognition method of the present invention is mainly directed at face skin, the face image first needs to be extracted from the image to be recognized; the extraction can be performed by carrying out face detection with a neural network in the prior art and extracting the face image based on the detection result.
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, performing face detection on the image to be recognized includes:
obtaining, using a face detection network, at least one face position feature from the image to be recognized and a face confidence threshold corresponding to the image to be recognized, the face position feature including a face bounding rectangle and a face confidence;
obtaining, based on the acquired face position features, the face bounding rectangles whose face confidence exceeds the face confidence threshold; and
performing face keypoint detection on the face bounding rectangle with a face keypoint network to obtain face keypoints, and obtaining the face image from the face region based on the face keypoints.
This embodiment provides a detailed process for obtaining the face image. First, face position features and a face confidence threshold are obtained from the image to be recognized; each image to be recognized corresponds to one face confidence threshold and at least one face position feature. Based on the face confidence threshold, it can be determined which position rectangles contain a face, and the rectangles containing a face are the face bounding rectangles. Face keypoints are then located within the face bounding rectangles to obtain the face image.
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, the face position feature further includes a face angle;
before performing face keypoint detection on the face bounding rectangle with the face keypoint network, the method further includes: adjusting the face bounding rectangle based on the face angle to obtain an upright face bounding rectangle.
The face angle referred to in this embodiment is the inclination angle of the face bounding rectangle relative to the horizontal direction of the image, not the inclination angle of the face within the face image. By adjusting the angle of the face bounding rectangle, the face in the resulting rectangle is placed upright, which facilitates subsequent extraction and processing of the face image.
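A sketch of this detection-and-cropping pipeline under heavy assumptions: `det_net` and `kpt_net` are placeholder callables standing in for the face detection network and the face keypoint network (their return formats are assumed, not a real library's API), `bounding_box` is a hypothetical helper, and rotating the cropped patch is a simplified stand-in for adjusting the rectangle by the face angle.

```python
import torchvision.transforms.functional as TF

def bounding_box(points):
    """Tight axis-aligned box around the detected keypoints (illustrative helper)."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def crop_faces(image, det_net, kpt_net):
    """det_net is assumed to return rectangles, angles, confidences, and the
    image-level confidence threshold; kpt_net is assumed to return keypoints."""
    boxes, angles, scores, threshold = det_net(image)     # face position features
    faces = []
    for box, angle, score in zip(boxes, angles, scores):
        if score <= threshold:                            # keep confident boxes only
            continue
        x, y, w, h = box
        patch = image[..., y:y + h, x:x + w]              # face bounding rectangle
        patch = TF.rotate(patch, -angle)                  # make the rectangle upright
        x0, y0, x1, y1 = bounding_box(kpt_net(patch))     # keypoint-guided crop
        faces.append(patch[..., y0:y1, x0:x1])
    return faces
```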
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, the prediction labels of the skin attributes include any one or more of the following:
skin texture, skin color, and skin brightness.
With the features obtained from multiple convolutional layers in this embodiment, skin attributes can be recognized from multiple angles, including skin texture, skin color, skin brightness, and the like. Based on these skin attributes, a more comprehensive understanding of the face skin can be obtained, so that subsequent operations on the face skin can be performed more accurately.
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, the method further includes:
performing a beautification operation on the face image based on the prediction labels of the skin attributes.
In this embodiment, using the recognized skin attributes such as the skin texture quality and the skin brightness of the face skin, an evaluation of the user's skin texture and an illumination estimate of the skin color are obtained algorithmically, and the beautification algorithm then performs a beautification operation of a reasonable degree on the user's image, including skin-texture improvements such as skin smoothing and whitening. For example, if the user has a lot of acne, poor skin texture, and darker skin, the beautification program applies stronger smoothing and whitening; if the user's skin texture is already good and the skin is fair, the beautification program applies a lower degree of smoothing and whitening. The beautification result obtained in this way appears more natural.
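One possible mapping from prediction labels to beautification strengths, as a sketch only: the label values and the 0..1 strength scale below are illustrative assumptions, not values given in the text.

```python
def beautify_strengths(labels):
    """Map predicted skin-attribute labels to beautification strengths
    (hypothetical label names and scale, for illustration only)."""
    smoothing = 0.8 if labels["texture"] == "blemished" else 0.3  # e.g. acne, scars
    whitening = 0.7 if labels["brightness"] == "dark" else 0.2
    return {"skin_smoothing": smoothing, "whitening": whitening}

print(beautify_strengths({"texture": "blemished", "brightness": "dark"}))
# {'skin_smoothing': 0.8, 'whitening': 0.7}
```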
In a still further embodiment of the face skin attribute recognition method of the present invention, on the basis of the above embodiments, the method further includes:
taking the image to be recognized as a sample image and training the neural network by:
performing feature extraction on the face image in the sample image through each convolutional layer of the neural network, the sample image being annotated with at least one known ground-truth label;
fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature, the shallower convolutional layers being the convolutional layers of the neural network other than the last convolutional layer;
predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes; and
training the neural network based on the obtained prediction labels and the known ground-truth labels.
The neural network obtained by the training method provided in this embodiment can be applied to any of the above embodiments of the present invention to obtain more accurate prediction labels for the skin attributes.
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, training the neural network based on the obtained prediction labels and the known ground-truth labels includes:
computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function; and
updating the parameters of each convolutional layer of the neural network by back-propagating gradients based on the error value.
The training method provided in this embodiment updates the parameters of each convolutional layer by the common back-propagation gradient method; of course, this does not limit the training method of the present invention, and training can also be implemented through other training methods in the prior art.
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, the method further includes:
taking the neural network with updated parameters as the neural network and iterating: performing feature extraction on the face image in the sample image through each convolutional layer of the neural network, the sample image being annotated with at least one known ground-truth label; fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature; predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes; computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function; and updating the parameters of each convolutional layer of the neural network by back-propagating gradients based on the error value; until the neural network satisfies a preset condition.
The preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value falls below a preset value.
In this embodiment, the skin attribute prediction process is executed iteratively with the neural network whose parameters have been updated until the neural network satisfies the preset condition; the preset condition includes, but is not limited to, the loss function converging, the number of iterations reaching a preset number, or the error value falling below a preset value.
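A compact sketch of this training loop with two of the preset stopping conditions (an iteration cap and the error value falling below a threshold; a loss-convergence check is omitted for brevity). The loss function, optimizer, and hyperparameters are assumptions, not specified by the text.

```python
import itertools
import torch
import torch.nn as nn

def train(model, loader, max_iters=10_000, loss_eps=1e-3):
    """Iterate: forward pass -> loss -> backward gradients -> parameter update,
    until a preset condition holds (assumed loss and optimizer choices)."""
    criterion = nn.CrossEntropyLoss()        # assumed loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    it = 0
    for faces, known_labels in itertools.cycle(loader):  # annotated sample images
        logits = model(faces)                # extraction + fusion + prediction
        loss = criterion(logits, known_labels)   # error vs. known ground-truth labels
        optimizer.zero_grad()
        loss.backward()                      # back-propagate gradients to each conv layer
        optimizer.step()                     # update the parameters
        it += 1
        if it >= max_iters or loss.item() < loss_eps:    # preset stopping conditions
            break
    return model
```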
In a further embodiment of the face skin attribute recognition method of the present invention, on the basis of the above embodiments, training the neural network based on the obtained prediction labels and the known ground-truth labels includes:
computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function;
propagating the error value directly back to the at least one shallower convolutional layer and the last convolutional layer that produce the fused feature in the neural network, and propagating the error value back to each convolutional layer of the neural network by the back-propagation gradient algorithm; and
updating the parameters of each convolutional layer based on the error value propagated to that layer.
In this embodiment, the short connections that directly connect the shallower convolutional layers to the loss function layer reduce the gradient lost during back-propagation, so the model can converge faster; however, if only the short connections are used to propagate the error value, the parameters of the different convolutional layers tend to become inconsistent with one another. Therefore, while back-propagating gradients through the short connections, this embodiment also propagates the error back through the sequence of convolutional layers of the neural network by the back-propagation gradient algorithm, which improves training speed while keeping the parameters consistent. This overcomes the problem of prior-art training methods in which, when gradients are back-propagated through a deep network, gradient attenuation occurs and the convolution kernel parameters of the shallow layers are updated slowly.
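In an autograd framework, the "direct plus layer-by-layer" propagation described above largely falls out of the architecture itself: because each shallower convolutional layer's output is wired into the channel concatenation that feeds the loss, the gradient of the error reaches those layers both through the subsequent convolutional layers and directly through the fusion connection. A brief check of this, reusing the illustrative SkinAttributeNet sketch defined earlier (so a demonstration of the effect, not a separate implementation of this embodiment's training scheme):

```python
import torch

model = SkinAttributeNet()                 # illustrative model sketched earlier
out = model(torch.randn(2, 3, 64, 64))
out.sum().backward()                       # stand-in for a real loss value
for i, block in enumerate(model.blocks):
    g = block[0].weight.grad               # gradient reaching each conv layer
    print(f"conv block {i}: grad norm {g.norm().item():.4f}")  # non-zero for every block
```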
In a specific example of the above embodiment of the face skin attribute recognition method of the present invention, the method further includes:
taking the neural network with updated parameters as the neural network and iterating: performing feature extraction on the face image in the sample image through each convolutional layer of the neural network, the sample image being annotated with at least one known ground-truth label; fusing the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature; predicting skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes; computing an error value from the obtained prediction labels and the known ground-truth labels through a loss function; propagating the error value directly back to the at least one shallower convolutional layer and the last convolutional layer that produce the fused feature in the neural network, and propagating the error value back to each convolutional layer of the neural network by the back-propagation gradient algorithm; and updating the parameters of each convolutional layer based on the error value propagated to that layer; until the neural network satisfies a preset condition.
The preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value falls below a preset value.
In this embodiment, the skin attribute prediction process is executed iteratively with the neural network whose parameters have been updated until the neural network satisfies the preset condition; the preset condition includes, but is not limited to, the loss function converging, the number of iterations reaching a preset number, or the error value falling below a preset value.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions; the aforementioned program can be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed; the aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
Fig. 3 is a schematic structural diagram of one embodiment of the face skin attribute recognition apparatus of the present invention. The apparatus of this embodiment can be used to implement the above method embodiments of the present invention. As shown in Fig. 3, the apparatus of this embodiment includes:
a feature extraction unit 31, configured to perform feature extraction on a face image in an image to be recognized through each convolutional layer of a neural network.
Specifically, each convolutional layer of the neural network in turn performs feature extraction on the face image in the image to be recognized.
a feature fusion unit 32, configured to fuse the features extracted by at least one shallower convolutional layer of the neural network with the feature extracted by the last convolutional layer to obtain a fused feature.
The shallower convolutional layers are the convolutional layers of the neural network other than the last convolutional layer. The features extracted by the shallower convolutional layers can reflect skin-texture issues of the face skin (for example, acne and scars), which overcomes the problem in the prior art that a neural network using only the feature output by the last convolutional layer can recognize only part of the skin attributes. By fusing features from multiple layers and judging face skin attributes with the fused feature, the recognition of skin attributes is improved.
an attribute prediction unit 33, configured to predict skin attributes of the face image based on the fused feature to obtain prediction labels for the skin attributes.
With the face skin attribute recognition apparatus provided by the above embodiment of the present invention, feature extraction is performed on the face image in the image to be recognized through each convolutional layer of the neural network; the features extracted by at least one shallower convolutional layer are fused with the feature extracted by the last convolutional layer to obtain a fused feature; both the shallow and the deep features of the neural network can be obtained through the fused feature, and a comprehensive judgment of skin attributes is achieved by using the features extracted by the shallower convolutional layers together with the feature extracted by the last convolutional layer. Because multiple intermediate-layer parameters are shared within the neural network, each additional attribute recognition task only requires adding a few task-specific layers at the end of the neural network and does not cause the number of parameters to multiply. The detail loss caused by obtaining only the feature of the last convolutional layer in the prior art, which leads to inaccurate skin attribute judgment, is overcome; skin attributes of the face image are predicted based on the fused feature, so that different skin attributes of face skin can be predicted.
In a specific example of the above embodiment of the face skin attribute recognition apparatus of the present invention, the feature fusion unit 32 includes:
a scale transformation module, configured to take the at least one feature output by each shallower convolutional layer of the neural network and transform its scale, obtaining features whose spatial size is the same as that of the feature output by the last convolutional layer; and
a feature stacking module, configured to stack the features of identical scale to obtain the fusion feature.
In a specific example of the above embodiment of the face skin attribute recognition apparatus of the present invention, the scale transformation module is specifically configured to apply pooling operations to the at least one feature output by the shallower convolutional layer to obtain features whose spatial size is the same as that of the feature output by the last convolutional layer.
In a specific example of the above embodiment of the face skin attribute recognition apparatus of the present invention, the pooling operation performed by the scale transformation module includes:
applying pooling operations to the feature extracted by each shallower convolutional layer in turn, following a strategy that alternates between average pooling and max pooling.
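The alternating average/max pooling schedule might look like the following sketch (the 2x2 kernels and the power-of-two size ratio are assumptions; the patent only states that the two pooling types alternate):

```python
import torch
import torch.nn.functional as F

def match_scale(feature, target_hw):
    """Downsample `feature` to spatial size `target_hw` by alternating 2x2 average
    pooling and 2x2 max pooling (assumes the sizes differ by a power of two,
    as in a typical stride-2 backbone)."""
    use_avg = True
    while feature.shape[-1] > target_hw[-1]:
        feature = F.avg_pool2d(feature, 2) if use_avg else F.max_pool2d(feature, 2)
        use_avg = not use_avg
    return feature

small = match_scale(torch.randn(1, 16, 48, 48), (12, 12))
print(small.shape)   # torch.Size([1, 16, 12, 12])
```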
In a specific example of the above embodiment of the face skin attribute recognition apparatus of the present invention, the feature stacking module is specifically configured to stack the features of identical scale one by one along the channel axis to obtain the fusion feature, whose dimension corresponds to the sum of the output channels of the convolutional layers involved.
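Continuing in the same assumed framework, stacking along the channel axis can be illustrated as follows; the fused channel count is the sum of the contributing layers' channels (the concrete channel numbers are illustrative):

```python
import torch

# Hypothetical scale-matched feature maps (e.g. produced by the pooling sketch
# above): 16, 32 and 64 channels, all at the last conv layer's 12x12 size.
shallow_a = torch.randn(1, 16, 12, 12)
shallow_b = torch.randn(1, 32, 12, 12)
last_conv = torch.randn(1, 64, 12, 12)

# Stack one by one along the channel axis (dim=1); the fused dimension is
# the sum of the output channels of the contributing layers.
fusion = torch.cat([shallow_a, shallow_b, last_conv], dim=1)
print(fusion.shape)   # torch.Size([1, 112, 12, 12])  -> 16 + 32 + 64
```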
In another embodiment of the face skin attribute recognition apparatus of the present invention, on the basis of the above embodiments, the attribute prediction unit 33 is specifically configured to predict the skin attributes of the face image based on the fusion feature through the fully connected layer in the neural network;
the face skin attribute recognition apparatus of this embodiment further includes:
a dimensionality reduction unit, configured to reduce the dimension of the fusion feature through a dimensionality-reduction convolutional layer.
In this embodiment, because the dimension of the fusion feature corresponds to the sum of the channel numbers output by at least two convolutional layers, the fusion feature is relatively high-dimensional; to simplify subsequent recognition, its dimension is reduced by the dimensionality-reduction convolutional layer.
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the number of convolution kernels in the dimensionality-reduction convolutional layer is smaller than a preset value, and the size of each convolution kernel is 1;
the dimensionality reduction unit is specifically configured to perform a convolution operation on the fusion feature with the dimensionality-reduction convolutional layer whose number of convolution kernels is smaller than the preset value, obtaining a fusion feature map whose dimension equals the number of convolution kernels.
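A minimal sketch of this dimensionality-reduction convolution, assuming PyTorch and an illustrative output channel count of 32 (the patent only requires the kernel count to be below a preset value and the kernel size to be 1):

```python
import torch
import torch.nn as nn

fusion = torch.randn(1, 112, 12, 12)     # high-dimensional fused feature
reduce_conv = nn.Conv2d(in_channels=112, out_channels=32, kernel_size=1)  # 1x1 kernels
reduced = reduce_conv(fusion)
print(reduced.shape)                     # torch.Size([1, 32, 12, 12])
```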
Another embodiment of the face skin attribute recognition apparatus of the present invention, on the basis of the above embodiments, further includes:
a face recognition unit, configured to perform face detection on the image to be recognized and to extract the face image from the image to be recognized.
In this embodiment, because the face skin attribute recognition method of the present invention is aimed mainly at facial skin, the face image first needs to be extracted from the image to be recognized. Face detection may be performed with a prior-art neural network, and the face image is then extracted based on the detection result.
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the face recognition unit includes:
a position acquisition module, configured to obtain, from the image to be recognized using a face detection network, at least one face position feature and a face confidence threshold corresponding to the image to be recognized, the face position feature including a face position rectangle and a face confidence;
a position determination module, configured to obtain, based on the acquired face position features, the face position rectangles whose face confidence exceeds the face confidence threshold; and
a face acquisition module, configured to perform face keypoint detection on the face position rectangle using a face keypoint network to obtain face keypoints, and to obtain the face image from the face position region based on the face keypoints, as sketched below.
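A minimal sketch of this detection pipeline; `detect_faces`, `detect_keypoints` and `crop_by_keypoints` are hypothetical stand-ins, since the patent does not name concrete detection or keypoint networks:

```python
def extract_faces(image, detect_faces, detect_keypoints, confidence_threshold):
    """detect_faces(image) -> iterable of (rect, confidence);
    detect_keypoints(image, rect) -> list of (x, y) keypoints.
    Both callables are hypothetical stand-ins for the detection networks."""
    faces = []
    for rect, confidence in detect_faces(image):
        if confidence <= confidence_threshold:
            continue                          # keep only confident detections
        keypoints = detect_keypoints(image, rect)
        faces.append(crop_by_keypoints(image, keypoints))
    return faces

def crop_by_keypoints(image, keypoints):
    # Crop a bounding box around the detected keypoints (image is assumed to be
    # an H x W x C array indexed as image[y1:y2, x1:x2]).
    xs = [int(x) for x, _ in keypoints]
    ys = [int(y) for _, y in keypoints]
    return image[min(ys):max(ys), min(xs):max(xs)]
```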
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the face position feature further includes a face angle;
the face skin attribute recognition apparatus of this embodiment further includes an angle adjustment unit, configured to adjust the face position rectangle based on the face angle to obtain an upright face position rectangle.
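One possible way to obtain an upright face rectangle, sketched here with OpenCV's rotation utilities; the (x, y, w, h) rectangle format and the sign convention of the face angle are assumptions:

```python
import cv2

def upright_face(image, rect, face_angle_deg):
    """Rotate the image about the rectangle's centre by the detected face angle,
    so that the face region ends up upright, then crop the rectangle."""
    x, y, w, h = rect
    center = (x + w / 2.0, y + h / 2.0)
    rotation = cv2.getRotationMatrix2D(center, face_angle_deg, 1.0)
    rotated = cv2.warpAffine(image, rotation, (image.shape[1], image.shape[0]))
    return rotated[y:y + h, x:x + w]
```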
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the prediction labels of the skin attributes include any one or more of the following:
Skin quality, skin color, skin brightness.
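Because several attributes share the same fusion feature, one natural reading (an assumption; the patent only lists the label types) is a shared trunk with one classification head per attribute:

```python
import torch
import torch.nn as nn

class SkinAttributeHeads(nn.Module):
    """One fully connected head per skin attribute, all fed by the same
    flattened fusion feature (the class counts are illustrative assumptions)."""
    def __init__(self, feature_dim=32 * 12 * 12):
        super().__init__()
        self.heads = nn.ModuleDict({
            "skin_quality":    nn.Linear(feature_dim, 4),
            "skin_color":      nn.Linear(feature_dim, 5),
            "skin_brightness": nn.Linear(feature_dim, 3),
        })

    def forward(self, fusion_feature):
        flat = fusion_feature.flatten(1)
        return {name: head(flat) for name, head in self.heads.items()}

logits = SkinAttributeHeads()(torch.randn(1, 32, 12, 12))
print({name: out.shape for name, out in logits.items()})
```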
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the apparatus further includes:
a beautification unit, configured to perform a beautification operation on the face image based on the prediction labels of the skin attributes.
A still further embodiment of the face skin attribute recognition apparatus of the present invention, on the basis of the above embodiments, further includes:
a sample prediction unit, configured to take a sample image as the image to be recognized and to obtain the prediction labels of the skin attributes of the sample image based on the feature extraction unit, the feature fusion unit and the attribute prediction unit,
the sample image being annotated with at least one known annotation label; and
a network training unit, configured to train the neural network based on the obtained prediction labels and the known annotation labels.
The neural network obtained by the training method provided in this embodiment can be applied in any of the above embodiments of the present invention to obtain more accurate prediction labels of the skin attributes.
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the network training unit includes:
an error calculation module, configured to calculate an error value from the obtained prediction labels and the known annotation labels through a loss function; and
a parameter update module, configured to update the parameters in each convolutional layer of the neural network based on the error value through a back-propagation gradient algorithm.
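A minimal sketch of one such training step; the cross-entropy loss, the SGD optimizer and the stand-in network are illustrative assumptions, as the patent only requires a loss function and a backward gradient algorithm:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 12 * 12, 4))  # stand-in network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()                                # loss function

fusion = torch.randn(8, 32, 12, 12)      # batch of fused features from sample images
labels = torch.randint(0, 4, (8,))       # known annotation labels

loss = criterion(model(fusion), labels)  # error value: prediction vs. annotation
optimizer.zero_grad()
loss.backward()                          # back-propagate the error value
optimizer.step()                         # update the network parameters
```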
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the network training unit further includes:
an iteration update module, configured to take the neural network with updated parameters as the neural network, iteratively obtain the prediction labels of the skin attributes of the sample images based on the feature extraction unit, the feature fusion unit and the attribute prediction unit, and update the parameters in each convolutional layer based on the error calculation module and the parameter update module, until the neural network satisfies a preset condition.
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value is smaller than a preset value.
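A sketch of how these preset conditions might terminate the iteration; `step_fn`, the thresholds and the patience counter are hypothetical:

```python
def train_until_done(model, step_fn, max_iters=10000, error_threshold=1e-3, patience=50):
    """Run training steps until any preset condition holds: the error value drops
    below a preset value, the loss stops improving (a convergence proxy), or the
    preset number of iterations is reached. `step_fn` performs one update and
    returns the current error value."""
    best, stale = float("inf"), 0
    for _ in range(max_iters):
        error = step_fn(model)
        if error < error_threshold:
            return "error value below preset value"
        if error < best - 1e-6:
            best, stale = error, 0
        else:
            stale += 1
        if stale >= patience:
            return "loss function converged"
    return "preset number of iterations reached"
```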
In a further embodiment of the face skin attribute recognition apparatus of the present invention, on the basis of the above embodiments, the network training unit includes:
an error calculation module, configured to calculate an error value from the obtained prediction labels and the known annotation labels through a loss function;
an error propagation module, configured to propagate the error value directly back to the at least one shallower convolutional layer and the last convolutional layer from which the fusion feature is obtained, and to propagate the error value back to each convolutional layer of the neural network through the back-propagation gradient algorithm; and
a parameter update module, configured to update the parameters in each convolutional layer based on the error values propagated to it.
In this embodiment, the short connections that directly link the shallower convolutional layers to the loss layer reduce the loss of gradient during back-propagation, so the model can converge faster; however, if the error value is returned only through the short connections, the parameters of the different convolutional layers can easily become inconsistent. This embodiment therefore returns the gradient through the short connections while also returning the error layer by layer along the convolutional layers of the neural network through the back-propagation gradient algorithm, improving training speed while keeping the parameters consistent. This overcomes the problem of prior-art training methods in which, when the network is deep, the gradient attenuates as it is propagated backward, so the convolution kernel parameters of the shallow layers are updated slowly.
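A minimal sketch of why the fusion itself provides such a short connection: because the shallow layer's output is concatenated into the fused feature, the loss has a direct gradient path to the shallow layer in addition to the layer-by-layer path (PyTorch and all layer sizes are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedNet(nn.Module):
    """conv1's output is concatenated into the fused feature, so the loss reaches
    conv1 both directly (short connection) and through conv2 (layer by layer)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.conv2 = nn.Conv2d(8, 16, 3, padding=1, stride=2)
        self.head = nn.Linear((8 + 16) * 4 * 4, 3)

    def forward(self, x):
        shallow = F.relu(self.conv1(x))              # 8 x 8 x 8
        deep = F.relu(self.conv2(shallow))           # 16 x 4 x 4
        pooled = F.avg_pool2d(shallow, 2)            # match spatial size: 8 x 4 x 4
        fusion = torch.cat([pooled, deep], dim=1)    # 24 x 4 x 4
        return self.head(fusion.flatten(1))

net = FusedNet()
loss = F.cross_entropy(net(torch.randn(2, 3, 8, 8)), torch.tensor([0, 1]))
loss.backward()
print(net.conv1.weight.grad.abs().mean())  # non-zero: gradient reached the shallow layer
```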
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the network training unit further includes:
an iteration update module, configured to take the neural network with updated parameters as the neural network, iteratively obtain the prediction labels of the skin attributes of the sample images based on the feature extraction unit, the feature fusion unit and the attribute prediction unit, and update the parameters in each convolutional layer based on the error calculation module, the error propagation module and the parameter update module, until the neural network satisfies a preset condition.
In a specific example of the above embodiments of the face skin attribute recognition apparatus of the present invention, the preset condition includes any one of the following:
the loss function converges, the number of iterations reaches a preset number, or the error value is smaller than a preset value.
According to one aspect of the embodiments of the present invention, there is provided an electronic device including a processor, the processor including the face skin attribute recognition apparatus of any of the above embodiments of the present invention.
According to one aspect of the embodiments of the present invention, there is provided an electronic device including: a memory configured to store executable instructions;
and a processor configured to communicate with the memory to execute the executable instructions, thereby completing the operations of any of the above embodiments of the face skin attribute recognition method of the present invention.
According to one aspect of the embodiments of the present invention, there is provided a computer storage medium configured to store computer-readable instructions that, when executed, perform the operations of any of the above embodiments of the face skin attribute recognition method of the present invention.
An embodiment of the present invention further provides an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer or a server. Referring now to Fig. 4, which shows a schematic structural diagram of an electronic device 400 suitable for implementing the terminal device or server of the embodiments of the present application: as shown in Fig. 4, the computer system 400 includes one or more processors, a communication unit and the like. The one or more processors are, for example, one or more central processing units (CPU) 401 and/or one or more graphics processors (GPU) 413. The processors can perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 402 or loaded from a storage section 408 into a random access memory (RAM) 403. The communication unit 412 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card.
The processors can communicate with the read-only memory 402 and/or the random access memory 403 to execute the executable instructions, are connected to the communication unit 412 through a bus 404, and communicate with other target devices through the communication unit 412, thereby completing the operations corresponding to any of the methods provided by the embodiments of the present application, for example: performing feature extraction on the face image in the image to be recognized by each convolutional layer in the neural network; fusing the features extracted by at least one shallower convolutional layer in the neural network with the feature extracted by the last convolutional layer to obtain a fusion feature; and predicting the skin attributes of the face image based on the fusion feature to obtain the prediction labels of the skin attributes.
In addition, the RAM 403 may also store various programs and data required for the operation of the apparatus. The CPU 401, the ROM 402 and the RAM 403 are connected to one another through the bus 404. Where the RAM 403 is present, the ROM 402 is an optional module: the RAM 403 stores the executable instructions, or the executable instructions are written into the ROM 402 at runtime, and the executable instructions cause the processor 401 to perform the operations corresponding to the above communication method. An input/output (I/O) interface 405 is also connected to the bus 404. The communication unit 412 may be provided integrally, or may be provided with multiple sub-modules (for example, multiple IB network cards) linked to the bus.
The I/O interface 405 is connected to the following components: an input section 406 including a keyboard, a mouse and the like; an output section 407 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage section 408 including a hard disk and the like; and a communication section 409 including a network card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read therefrom can be installed into the storage section 408 as needed.
It should be noted that the architecture shown in Fig. 4 is only one optional implementation. In practice, the number and types of the components in Fig. 4 may be selected, deleted, added or replaced according to actual needs. Different functional components may be provided separately or integrally; for example, the GPU and the CPU may be provided separately, or the GPU may be integrated on the CPU, and the communication unit may be provided separately or integrated on the CPU or the GPU. All of these alternative implementations fall within the protection scope of the present disclosure.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: performing feature extraction on the face image in the image to be recognized by each convolutional layer in the neural network; fusing the features extracted by at least one shallower convolutional layer in the neural network with the feature extracted by the last convolutional layer to obtain a fusion feature; and predicting the skin attributes of the face image based on the fusion feature to obtain the prediction labels of the skin attributes. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409 and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, the above functions defined in the method of the present application are performed.
The method, apparatus and device of the present invention may be implemented in many ways, for example by software, hardware, firmware or any combination of software, hardware and firmware. The above order of the steps of the method is merely for illustration; the steps of the method of the present invention are not limited to the order specifically described above unless otherwise stated. In addition, in some embodiments the present invention may also be implemented as programs recorded in a recording medium, these programs including machine-readable instructions for implementing the method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention is provided for the sake of example and description, and is not intended to be exhaustive or to limit the present invention to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were chosen and described to better illustrate the principles of the present invention and its practical application, and to enable those of ordinary skill in the art to understand the present invention and to design various embodiments with modifications suited to particular uses.

Claims (10)

1. A face skin attribute recognition method, characterized by comprising:
performing feature extraction on a face image in an image to be recognized by each convolutional layer in a neural network;
fusing the features extracted by at least one shallower convolutional layer in the neural network with the feature extracted by the last convolutional layer to obtain a fusion feature, the shallower convolutional layers being the convolutional layers in the neural network other than the last convolutional layer; and
predicting skin attributes of the face image based on the fusion feature to obtain prediction labels of the skin attributes.
2. The method according to claim 1, characterized in that fusing the features extracted by at least one shallower convolutional layer in the neural network with the feature extracted by the last convolutional layer to obtain the fusion feature comprises:
transforming the scale of the at least one feature output by each shallower convolutional layer of the neural network to obtain features whose spatial size is the same as that of the feature output by the last convolutional layer; and
stacking the features of identical scale to obtain the fusion feature.
3. The method according to claim 2, characterized in that transforming the scale of the at least one feature output by the shallower convolutional layer to obtain features whose spatial size is the same as that of the feature output by the last convolutional layer comprises:
applying pooling operations to the at least one feature output by the shallower convolutional layer to obtain features whose spatial size is the same as that of the feature output by the last convolutional layer.
4. The method according to claim 3, characterized in that applying pooling operations to the at least one feature output by the shallower convolutional layer comprises:
applying pooling operations to the feature extracted by each shallower convolutional layer in turn, following a strategy that alternates between average pooling and max pooling.
5. The method according to any one of claims 2 to 4, characterized in that stacking the features of identical scale to obtain the fusion feature comprises:
stacking the features of identical scale one by one along the channel axis to obtain the fusion feature, the dimension of the fusion feature corresponding to the sum of the output channels of the convolutional layers.
6. The method according to any one of claims 1 to 5, characterized in that predicting the skin attributes of the face image based on the fusion feature comprises:
predicting the skin attributes of the face image based on the fusion feature through a fully connected layer in the neural network;
and before predicting the skin attributes of the face image based on the fusion feature, the method further comprises:
reducing the dimension of the fusion feature through a dimensionality-reduction convolutional layer.
7. A face skin attribute recognition apparatus, characterized by comprising:
a feature extraction unit, configured to perform feature extraction on a face image in an image to be recognized by each convolutional layer in a neural network;
a feature fusion unit, configured to fuse the features extracted by at least one shallower convolutional layer in the neural network with the feature extracted by the last convolutional layer to obtain a fusion feature, the shallower convolutional layers being the convolutional layers in the neural network other than the last convolutional layer; and
an attribute prediction unit, configured to predict skin attributes of the face image based on the fusion feature to obtain prediction labels of the skin attributes.
8. An electronic device, characterized by comprising a processor, the processor comprising the face skin attribute recognition apparatus according to claim 7.
9. An electronic device, characterized by comprising: a memory configured to store executable instructions;
and a processor configured to communicate with the memory to execute the executable instructions so as to complete the operations of the face skin attribute recognition method according to any one of claims 1 to 6.
10. A computer storage medium configured to store computer-readable instructions, characterized in that the instructions, when executed, perform the operations of the face skin attribute recognition method according to any one of claims 1 to 6.