CN108171208A - Information acquisition method and device - Google Patents
- Publication number
- CN108171208A CN108171208A CN201810046227.8A CN201810046227A CN108171208A CN 108171208 A CN108171208 A CN 108171208A CN 201810046227 A CN201810046227 A CN 201810046227A CN 108171208 A CN108171208 A CN 108171208A
- Authority
- CN
- China
- Prior art keywords
- skin attribute
- area
- skin
- face
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
Embodiments of the present application disclose an information acquisition method and device. One specific embodiment of the method includes: acquiring a face image of a user; extracting at least one feature region from the face image; and inputting the feature region into a pre-trained skin attribute detection model to determine skin attribute information of the user's face, the skin attribute information indicating a skin attribute. This embodiment can improve the speed and accuracy of determining a user's skin attributes.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, specifically to the field of artificial intelligence, and more particularly to an information acquisition method and device.
Background

Skin attributes differ from person to person; examples include oily skin, dry skin, combination skin, and sensitive skin. The attributes of facial skin can, to a certain extent, represent a person's skin attributes as a whole. Even for the same skin attribute, differences in how closely individuals monitor their skin and in how they care for it produce large individual variation. Moreover, the skin attributes of the same person change with fatigue and with age. If skin attributes are neglected, the condition of the skin may deteriorate, and skin-related diseases may even develop.
Summary of the invention

Embodiments of the present application propose an information acquisition method and device.
In a first aspect, an embodiment of the present application provides an information acquisition method, including: acquiring a face image of a user; extracting at least one feature region from the face image; and inputting the feature region into a pre-trained skin attribute detection model to determine skin attribute information of the user's face, the skin attribute information indicating a skin attribute.
In some embodiments, the feature regions include at least one of an eye-bag region, an eye-corner region, and a triangular region; and extracting at least one feature region from the face image includes extracting the at least one feature region based on keypoint detection.
In some embodiments, the skin attribute detection model includes an eye-bag region skin attribute detection submodel, an eye-corner region skin attribute detection submodel, and a triangular region skin attribute detection submodel; and inputting the feature regions into the pre-trained skin attribute detection model to determine the user's skin attribute information includes: inputting the eye-bag region, the eye-corner region, and the triangular region into their respective submodels; determining, for each region, a probability value on each of a plurality of preset skin attributes; and determining the skin attribute information corresponding to the eye-bag region, the eye-corner region, and the triangular region according to the probability values.
In some embodiments, determining the user's skin attribute information further includes: for each preset skin attribute among the plurality of preset skin attributes, performing the following weighted-sum operation: computing the weighted sum of the probability values of the eye-bag region, the eye-corner region, and the triangular region on that preset skin attribute as the probability value of the user's face on that attribute; and determining the skin attribute information of the user's face according to the face-level probability value on each preset skin attribute.
In some embodiments, the skin attribute detection model is a convolutional neural network model.
In some embodiments, before the user's face image is acquired, the information acquisition method further includes: training the skin attribute detection model with a plurality of images in which feature regions have been delineated and annotated with skin attribute labels.
In a second aspect, an embodiment of the present application provides an information acquisition device, including: an acquisition unit configured to acquire a face image of a user; an extraction unit configured to extract at least one feature region from the face image; and a determination unit configured to input the feature region into a pre-trained skin attribute detection model and determine skin attribute information of the user's face, the skin attribute information indicating a skin attribute.
In some embodiments, the feature regions include at least one of an eye-bag region, an eye-corner region, and a triangular region; and the extraction unit is further configured to extract the at least one feature region based on keypoint detection.
In some embodiments, the skin attribute detection model includes an eye-bag region skin attribute detection submodel, an eye-corner region skin attribute detection submodel, and a triangular region skin attribute detection submodel; and the determination unit is further configured to: input the eye-bag region, the eye-corner region, and the triangular region into their respective submodels; determine, for each region, a probability value on each of a plurality of preset skin attributes; and determine the skin attribute information corresponding to each region according to the probability values.
In some embodiments, the determination unit is further configured to: for each preset skin attribute among the plurality of preset skin attributes, perform the following weighted-sum operation: compute the weighted sum of the probability values of the eye-bag region, the eye-corner region, and the triangular region on that preset skin attribute as the probability value of the user's face on that attribute; and determine the skin attribute information of the user's face according to the face-level probability value on each preset skin attribute.
In some embodiments, the skin attribute detection model is a convolutional neural network model.
In some embodiments, the information acquisition device further includes a training unit configured to: before the acquisition unit acquires the user's face image, train the skin attribute detection model with a plurality of images in which feature regions have been delineated and annotated with skin attribute labels.
In a third aspect, an embodiment of the present application provides a server, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method described in any implementation of the first aspect.
In the information acquisition method and device provided by the embodiments of the present application, at least one feature region is extracted from an image of the user's face and input into a pre-trained skin attribute detection model to determine the skin attribute information of the user's face, thereby improving the speed and accuracy of determining the skin attributes of the user's face.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is a diagram of an exemplary system architecture to which the present application may be applied;
Fig. 2 is the flow chart according to one embodiment of the information acquisition method of the application;
Fig. 3 is the schematic diagram according to an application scenarios of the information acquisition method of the application;
Fig. 4 is the flow chart according to another embodiment of the information acquisition method of the application;
Fig. 5 is the flow chart according to another embodiment of the information acquisition method of the application;
Fig. 6 is the structure diagram according to one embodiment of the information acquisition device of the application;
Fig. 7 is a schematic structural diagram of a computer system suitable for implementing an electronic device of the embodiments of the present application.
Specific embodiment
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention.
It should be noted that in the absence of conflict, the feature in embodiment and embodiment in the application can phase
Mutually combination.The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the information acquisition method or information acquisition device of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104, and a server 105. The network 104 provides the medium for communication links between the terminal devices 101, 102 and 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
The terminal devices 101, 102 and 103 may be cameras, or any of various electronic devices with image-capture capability, including but not limited to video cameras, smartphones, tablet computers, laptop computers, and desktop computers.
The server 105 may be a server providing various services, for example a background server that processes the image data collected by the terminal devices 101, 102 and 103. The background server may analyze the received image data (for example, performing skin attribute analysis).
It should be noted that the information acquisition method provided by the embodiments of the present application may be executed by the server 105; accordingly, the information acquisition device may be arranged in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
Continuing with Fig. 2, a flow 200 of one embodiment of the information acquisition method according to the present application is shown. The information acquisition method includes the following steps:
Step 201: acquire a face image of the user.
In the present embodiment, the electronic device on which the information acquisition method runs (for example, the server 105 shown in Fig. 1) may receive an image containing the user's face, via a wired or wireless connection, from a terminal device capable of capturing images or video (for example, the terminal devices 101, 102 and 103 shown in Fig. 1).
Upon receiving an image containing the user's face, the electronic device may apply various kinds of analysis to the image to obtain the face image of the user contained in it. For example, existing face recognition techniques may be used to identify the user's face image within the received image.
Step 202: extract at least one feature region from the face image.
In the present embodiment, based on the user's face image obtained in step 201, the electronic device (for example, the server shown in Fig. 1) may analyze the face image with various analysis techniques and thereby extract at least one feature region.
Certain areas of a person's facial skin reflect the attributes of that person's skin; for instance, the skin near the eyes can reveal attributes such as dark circles, eye bags, or eye wrinkles. Regions that reflect an individual's skin attributes may be designated in advance as the feature regions of the face — for example, the forehead region, the nose-wing region, and the eye-corner region. The electronic device may extract at least one of these regions that represent the attributes of an individual's skin.
In some optional implementations of the present embodiment, the feature regions may include at least one of an eye-bag region, an eye-corner region, and a triangular region.
In some optional implementations of the present embodiment, the electronic device may extract the at least one feature region of the user's face image based on keypoint detection. Keypoint detection can locate the facial contour and key positions such as the two eyes, the nose, and the mouth in the user's face image; the at least one feature region is then extracted according to the positional relationship between these key positions and the pre-designated feature regions. The number of keypoints used may be, for example, 21, 27, 68, or 85, and is not limited here. In some application scenarios, keypoint detection may be implemented with a convolutional neural network.
It should be noted that keypoint detection is a well-studied and widely applied technique, so its details are not described here.
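As an illustrative sketch (not part of the original disclosure), the keypoint-based extraction of step 202 could be implemented as fixed-size crops around landmark coordinates assumed to come from an external keypoint detector; the landmark names and patch sizes below are hypothetical:

```python
import numpy as np

def crop_region(image: np.ndarray, center: tuple, size: int = 64) -> np.ndarray:
    """Crop a square `size` x `size` patch centred on `center` (x, y),
    clamped to the image borders."""
    h, w = image.shape[:2]
    x, y = center
    half = size // 2
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return image[y0:y1, x0:x1]

def extract_feature_regions(image: np.ndarray, landmarks: dict) -> dict:
    """Map detected keypoints to the feature regions named in the text;
    `landmarks` holds (x, y) coordinates, e.g. from a 68-point detector."""
    return {
        "left_eye_corner":  crop_region(image, landmarks["left_eye_outer"]),
        "right_eye_corner": crop_region(image, landmarks["right_eye_outer"]),
        "left_eye_bag":     crop_region(image, landmarks["left_under_eye"]),
        "right_eye_bag":    crop_region(image, landmarks["right_under_eye"]),
        "triangle":         crop_region(image, landmarks["nose_tip"], size=96),
    }
```

The crop sizes would in practice be tuned to the resolution of the face image, as the embodiments note that region sizes may be adjusted as needed.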
Step 203: input the feature regions into the pre-trained skin attribute detection model and determine the skin attribute information of the user's face.
In the present embodiment, the electronic device on which the information acquisition method runs (for example, the server 105 shown in Fig. 1) may input the at least one feature region extracted in step 202 into a pre-trained skin attribute detection model. The skin attribute detection model may be a machine learning model that determines the skin attribute information of the user's face from the input feature regions; the skin attribute information indicates a skin attribute.
Here, the skin attributes may include, for example, one or more of: dryness, oiliness, wrinkles, pigmentation, laxity, dark circles, and eye bags.
The skin attribute detection model may be, for example, a support vector machine model or a Bayesian model.
In some optional implementations of the present embodiment, the skin attribute detection model may be a convolutional neural network model.
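The disclosure fixes no particular architecture, so the following PyTorch sketch is only one way such a convolutional model could look — a small classifier producing a probability per preset skin attribute (layer sizes and attribute names are assumptions):

```python
import torch
import torch.nn as nn

PRESET_ATTRIBUTES = ["dryness", "oiliness", "wrinkles", "pigmentation",
                     "laxity", "dark_circles", "eye_bags"]

class SkinAttributeSubmodel(nn.Module):
    """A small CNN over feature-region crops; illustrative layer sizes."""
    def __init__(self, n_attributes: int = len(PRESET_ATTRIBUTES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # (N, 32, 1, 1) regardless of crop size
        )
        self.classifier = nn.Linear(32, n_attributes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(self.features(x).flatten(1))
        return torch.softmax(logits, dim=1)  # probability per preset attribute
```

The adaptive pooling layer lets the same model accept feature regions of different sizes, matching the note that region sizes may vary.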
In some optional implementations of the present embodiment, before acquiring the user's face image, the electronic device may train the skin attribute detection model with a plurality of images in which feature regions have been delineated and annotated with skin attribute labels. After training, the skin attribute detection model can output, for an input feature-region image, the probability that the skin in the region exhibits each preset skin attribute. As the number of training samples increases, the output of the skin attribute detection model gradually approaches the true values.
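A minimal training sketch, under the assumption that the submodel returns raw logits and that labels are indices of the annotated preset attribute (the disclosure does not specify a training procedure, so this is illustrative only):

```python
import torch
import torch.nn as nn

def train_submodel(model: nn.Module, regions: torch.Tensor,
                   labels: torch.Tensor, epochs: int = 5,
                   lr: float = 1e-3) -> nn.Module:
    """Fit one region submodel on annotated feature-region crops.
    `regions`: float tensor (N, 3, H, W); `labels`: long tensor (N,) of
    attribute indices. nn.CrossEntropyLoss applies log-softmax itself,
    so the model should output logits here."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(regions), labels)
        loss.backward()
        optimizer.step()
    return model
```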
In some optional implementations of the present embodiment, the electronic device may, according to the skin attribute information of the user's face obtained in step 203, push to the user information related to the skin attribute indicated by that information.
For example, when the skin attribute indicated by the skin attribute information of the user's face is determined to be oily skin, information related to facial moisturizing may be pushed to the user, and further, information about products with good facial moisturizing effects. As another example, when the skin attribute indicated by the skin attribute information of a feature region is determined to be dark circles, suggestions for increasing rest time may be pushed to the user, and further, information related to improving the user's sleep quality. In this way, the relevance of the information pushed to the user can be improved.
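The push step amounts to a lookup from detected attribute to related information; a trivial sketch with hypothetical mapping entries taken from the examples in the text:

```python
# Hypothetical attribute-to-information mapping, following the examples
# above (oily skin -> moisturizing info, dark circles -> rest suggestions).
RECOMMENDATIONS = {
    "oiliness": "information related to facial moisturizing",
    "dark_circles": "suggestions for increasing rest time and improving sleep",
}

def push_information(skin_attribute: str) -> str:
    """Select the information to push for a detected skin attribute."""
    return RECOMMENDATIONS.get(skin_attribute, "general skin-care information")
```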
Continuing with Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information acquisition method according to the present embodiment. In the application scenario 300 of Fig. 3, the electronic device 302 receives an image 303 containing the user's face sent by the terminal device 301. The electronic device 302 may obtain the user's face image 304 from the image 303, then extract at least one feature region 305 of the user's face image, input the feature regions into the pre-trained skin attribute detection model to determine the skin attribute information 306 of the user's face, and finally push to the user's terminal device 301 information 307 related to the skin attribute indicated by the skin attribute information.
In the method provided by the above embodiment of the present application, the feature regions of the user's face image are input into a skin attribute detection model to determine the skin attribute information of the user's face, which can improve the speed and accuracy of determining the skin attributes of the user's face.
With further reference to Fig. 4, a flow 400 of another embodiment of the information acquisition method is shown. The flow 400 of the information acquisition method includes the following steps:
Step 401: acquire a face image of the user.
In the present embodiment, the electronic device on which the information acquisition method runs (for example, the server shown in Fig. 1) may receive an image containing the user's face, via a wired or wireless connection, from a terminal device capable of capturing images or video (for example, the terminal devices 101, 102 and 103 shown in Fig. 1).
Upon receiving an image containing the user's face, the electronic device may apply various kinds of analysis to the image to obtain the face image of the user contained in it. For example, existing face recognition techniques may be used to identify the user's face image within the received image.
Step 402: extract the eye-bag region, eye-corner region, and triangular region of the face image.
In the present embodiment, the feature regions of the user's face image may include an eye-bag region, an eye-corner region, and a triangular region. The electronic device may extract these feature regions from the user's face image obtained in step 401 based on keypoint detection.
Keypoint detection can locate the facial contour and key positions such as the two eyes, the nose, and the mouth in the user's face image. From these key positions, the eye-corner region of the left eye, the eye-bag region of the left eye, the eye-corner region of the right eye, and the eye-bag region of the right eye can be determined. Further, the facial triangle formed by the line connecting the centers of the two eyes and the corners of the mouth on both sides can also be located.
After locating the left eye-corner region, the left eye-bag region, the right eye-corner region, the right eye-bag region, and the triangular region, the electronic device may extract a feature region containing each of them — the left eye-corner, the left eye-bag, the right eye-corner, the right eye-bag, and the triangle — as the targets for further analysis. The size of each feature region may be adjusted as needed and is not limited here.
Step 403: the skin attribute detection model includes an eye-bag region skin attribute detection submodel, an eye-corner region skin attribute detection submodel, and a triangular region skin attribute detection submodel; input the eye-bag region, the eye-corner region, and the triangular region into their respective submodels, and determine the probability value of each region on each of a plurality of preset skin attributes.
In the present embodiment, the skin attribute detection model may consist of three mutually independent submodels: an eye-bag region skin attribute detection submodel, an eye-corner region skin attribute detection submodel, and a triangular region skin attribute detection submodel. Each submodel may be a classification model whose output is a probability value on each of a plurality of preset skin attributes. The preset skin attributes may be, for example, dryness, oiliness, wrinkles, pigmentation, laxity, dark circles, and eye bags, in which case the output of any of the submodels is a probability value on each of these attributes.
After the feature regions containing the left eye-corner, the left eye-bag, the right eye-corner, the right eye-bag, and the facial triangle have been extracted in step 402, the electronic device may input the left and right eye-bag feature regions into the eye-bag region skin attribute detection submodel, obtaining the probability values of the eye-bag region on the preset skin attributes; input the left and right eye-corner feature regions into the eye-corner region skin attribute detection submodel, obtaining the probability values of the eye-corner region on the preset skin attributes; and input the triangular feature region into the triangular region skin attribute detection submodel, obtaining the probability values of the triangular region on the preset skin attributes.
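The routing in step 403 can be sketched as a loop over region-to-submodel pairs; the submodels are stand-ins here (any callable returning a distribution over the preset attributes), since the disclosure only requires that each region goes to its own model:

```python
import numpy as np

def detect_region_probabilities(regions: dict, submodels: dict) -> dict:
    """Route each feature region to its own submodel and collect its
    probability vector over the preset skin attributes (step 403)."""
    probabilities = {}
    for name in ("eye_bag", "eye_corner", "triangle"):
        p = np.asarray(submodels[name](regions[name]), dtype=float)
        # each submodel is expected to output a distribution over attributes
        assert np.isclose(p.sum(), 1.0)
        probabilities[name] = p
    return probabilities
```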
Step 404: determine the skin attribute information corresponding to the eye-bag region, the eye-corner region, and the triangular region according to the probability values.
In the present embodiment, after the probability values of the eye-bag, eye-corner, and triangular feature regions on each preset skin attribute have been obtained in step 403, the electronic device may determine the skin attribute information corresponding to any feature region according to that region's probability values on the preset skin attributes. Here, the skin attribute information indicates a skin attribute.
For any feature region, the preset skin attribute with the highest probability value for that region may be taken as the region's skin attribute. For example, suppose the probability values of the eye-corner region on the attributes dryness, oiliness, wrinkles, pigmentation, laxity, dark circles, and eye bags are 10%, 10%, 40%, 5%, 5%, 20%, and 10% respectively. The preset skin attribute corresponding to the highest probability value, 40%, is wrinkles, so the electronic device may take wrinkles (that is, eye wrinkles) as the skin attribute of the eye-corner region, and the skin attribute information may indicate "wrinkles".
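The per-region decision is a plain arg-max over the probability vector; a sketch using the same attribute ordering as the example above (the attribute names are this document's translated terms, not fixed identifiers):

```python
PRESET_ATTRIBUTES = ["dryness", "oiliness", "wrinkles", "pigmentation",
                     "laxity", "dark_circles", "eye_bags"]

def region_attribute(probabilities) -> str:
    """Pick the preset attribute with the highest probability for a region."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return PRESET_ATTRIBUTES[best]
```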
The skin attribute detection model may combine the results determined separately by the eye-bag region, eye-corner region, and triangular region submodels, and output the skin attribute information of the user's face as the aggregate of the eye-bag region, eye-corner region, and triangular region skin attribute information.
Since the skin attribute detection model consists of the eye-bag region, eye-corner region, and triangular region submodels, each submodel may be trained separately when the model is trained. For example, when training the eye-bag region skin attribute detection submodel, a plurality of eye-bag region images annotated with skin attributes may be input to the submodel, so that the submodel's output approaches the annotated values.
It is worth noting that the feature regions of the user's face image may also include other feature regions; correspondingly, the skin attribute detection model may also include skin attribute detection submodels corresponding to those other regions.
Because any feature region among the multiple feature regions can be input into the skin attribute detection submodel corresponding to that region, so that the skin attribute information of each feature region is determined individually, the accuracy of skin attribute detection can be increased.
In the present embodiment, the eye-bag region, eye-corner region, and triangular region skin attribute detection submodels may each be a convolutional neural network model.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the information acquisition method in the present embodiment highlights extracting the eye-bag, eye-corner, and triangular regions of the face image and inputting them into their respective skin attribute detection submodels to determine the skin attribute information corresponding to each region. The scheme described in the present embodiment can therefore accurately detect the skin attribute information of each feature region, making the detection of the user's facial skin attributes more fine-grained and accurate.
Continuing with Fig. 5, a flow 500 of yet another embodiment of the information acquisition method is shown. The flow 500 of the information acquisition method includes the following steps:
Step 501: acquire a face image of the user.

This step is identical to step 201 shown in Fig. 2 and step 401 shown in Fig. 4, and is not repeated here.
Step 502: extract the eye-bag region, eye-corner region, and triangular region of the face image.

This step is identical to step 402 shown in Fig. 4 and is not repeated here.
Step 503: the skin attribute detection model includes an eye-bag region skin attribute detection submodel, an eye-corner region skin attribute detection submodel, and a triangular region skin attribute detection submodel; input the eye-bag region, the eye-corner region, and the triangular region into their respective submodels, and determine the probability value of each region on each of a plurality of preset skin attributes.
This step is identical to step 403 shown in Fig. 4 and is not repeated here.
Step 504: for each preset skin attribute among the plurality of preset skin attributes, perform the following weighted-sum operation: compute the weighted sum of the probability values of the eye-bag region, the eye-corner region, and the triangular region on that preset skin attribute as the probability value of the user's face on that attribute.
In the present embodiment, the above electronic device may assign a corresponding preset weight to each characteristic region in advance. After the probability distribution of each characteristic region over the multiple preset skin attributes (e.g., dryness, oiliness, wrinkles, color spots, sagging, dark circles, eye bags) has been obtained in step 503, the following weighted-sum operation is performed for each preset skin attribute: calculate the weighted sum of the probability values of the pouch area, the canthus area and the triangle area on that preset skin attribute. That is, accumulate the product of the pouch area's probability value on that attribute and the pouch area's preset weight, the product of the canthus area's probability value on that attribute and the canthus area's preset weight, and the product of the triangle area's probability value on that attribute and the triangle area's preset weight. The electronic device may take this weighted sum as the probability value of the user's face on that preset skin attribute. Through the above weighted-sum operation, the electronic device obtains the probability value of the user's face on each preset skin attribute.
For example, suppose the multiple preset skin attributes are dryness, oiliness, wrinkles, color spots, sagging, dark circles and eye bags. The probability values of the pouch area on these attributes may be 5%, 5%, 10%, 10%, 10%, 20% and 40%; those of the canthus area may be 10%, 10%, 30%, 10%, 15%, 20% and 5%; and those of the triangle area may be 10%, 40%, 15%, 15%, 10%, 5% and 5%. If the preset weights of the pouch area, the canthus area and the triangle area are 35%, 35% and 30% respectively, the probability values of the user's face on the attributes are 5%×35% + 10%×35% + 10%×30%, 5%×35% + 10%×35% + 40%×30%, 10%×35% + 30%×35% + 15%×30%, 10%×35% + 10%×35% + 15%×30%, 10%×35% + 15%×35% + 10%×30%, 20%×35% + 20%×35% + 5%×30%, and 40%×35% + 5%×35% + 5%×30%. That is, the probability values of the user's face on dryness, oiliness, wrinkles, color spots, sagging, dark circles and eye bags are 8.25%, 17.25%, 18.50%, 11.50%, 11.75%, 15.50% and 17.25% respectively.
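The weighted-sum operation of step 504 can be reproduced in a few lines of Python. This is a minimal sketch using the example's figures; the region names, per-region probabilities and 35%/35%/30% weights are taken directly from the example above:

```python
# Fuse per-region skin attribute probabilities into one face-level
# distribution by a weighted sum over the three characteristic regions.
ATTRIBUTES = ["dryness", "oiliness", "wrinkles", "color spots",
              "sagging", "dark circles", "eye bags"]

# Per-region probability values from the example, in the order above.
region_probs = {
    "pouch":    [0.05, 0.05, 0.10, 0.10, 0.10, 0.20, 0.40],
    "canthus":  [0.10, 0.10, 0.30, 0.10, 0.15, 0.20, 0.05],
    "triangle": [0.10, 0.40, 0.15, 0.15, 0.10, 0.05, 0.05],
}
region_weights = {"pouch": 0.35, "canthus": 0.35, "triangle": 0.30}

def fuse(probs, weights):
    """Weighted sum of the regions' probability values per attribute."""
    n = len(next(iter(probs.values())))
    return [sum(weights[r] * probs[r][i] for r in probs) for i in range(n)]

face_probs = fuse(region_probs, region_weights)
# face_probs ≈ [0.0825, 0.1725, 0.1850, 0.1150, 0.1175, 0.1550, 0.1725]
```

The fused values match the 8.25% … 17.25% figures computed by hand above.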
Step 505: determine the skin attribute information of the user's face according to the probability values of the user's face on the preset skin attributes.
After the probability values of the user's face on the preset skin attributes have been determined in step 504, the above electronic device may determine the skin attribute information of the user's face according to those probability values. The skin attribute information is used to indicate a skin attribute.
For example, the skin attribute with the largest probability value may be taken as the skin attribute of the user's face. In the example of step 504, the probability values of the user's face on dryness, oiliness, wrinkles, color spots, sagging, dark circles and eye bags are 8.25%, 17.25%, 18.50%, 11.50%, 11.75%, 15.50% and 17.25% respectively. The largest probability value is therefore 18.50%, and the skin attribute corresponding to 18.50% is wrinkles, so the skin attribute information of the user's face is information indicating "wrinkles".
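Selecting the attribute with the largest fused probability, as in this example, can be sketched as follows (the fused values are the ones computed in step 504's example):

```python
# Pick the preset skin attribute with the highest face-level probability.
ATTRIBUTES = ["dryness", "oiliness", "wrinkles", "color spots",
              "sagging", "dark circles", "eye bags"]
face_probs = [0.0825, 0.1725, 0.1850, 0.1150, 0.1175, 0.1550, 0.1725]

best = max(range(len(face_probs)), key=face_probs.__getitem__)
skin_attribute_info = ATTRIBUTES[best]  # "wrinkles", at 18.50%
```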
As can be seen from Fig. 5, compared with the embodiment corresponding to Fig. 4, the flow 500 of the information acquisition method in this embodiment highlights the step of obtaining an overall skin attribute of the user's face from the region-based skin attributes. The skin attribute obtained by the scheme described in this embodiment can thus reflect the most serious skin problem of the user's face.
With further reference to Fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an information acquisition apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 6, the information acquisition apparatus 600 of this embodiment includes an acquiring unit 601, an extraction unit 602 and a determination unit 603. The acquiring unit 601 is configured to obtain the face image of a user; the extraction unit 602 is configured to extract at least one characteristic region of the face image; and the determination unit 603 is configured to input the characteristic region into a pre-trained skin attribute detection model and determine the skin attribute information of the user's face, the skin attribute information being used to indicate a skin attribute.
In this embodiment, for the specific processing of the acquiring unit 601, the extraction unit 602 and the determination unit 603 of the information acquisition apparatus 600, and the technical effects thereof, reference may be made to the descriptions of step 201, step 202 and step 203 in the embodiment corresponding to Fig. 2, which are not repeated here.
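Purely as an illustration (not the patented implementation), the division of labor among units 601–603 can be mirrored by a small Python class in which the region extractor and the detection model are injected callables:

```python
class InformationAcquisitionDevice:
    """Mirrors units 601-603: acquire image, extract regions, determine attribute."""

    def __init__(self, extractor, model):
        self.extractor = extractor  # face image -> characteristic regions
        self.model = model          # regions -> skin attribute information

    def acquire(self, source):
        # Acquiring unit 601: obtain the user's face image from some source.
        return source()

    def extract(self, image):
        # Extraction unit 602: extract characteristic regions of the image.
        return self.extractor(image)

    def determine(self, regions):
        # Determination unit 603: run the skin attribute detection model.
        return self.model(regions)

    def run(self, source):
        return self.determine(self.extract(self.acquire(source)))
```

With stub callables, `run` chains the three units end to end, which is the structure Fig. 6 describes.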
In some optional implementations of this embodiment, the characteristic region extracted by the extraction unit 602 includes at least one of: the pouch area, the canthus area and the triangle area. In addition, the extraction unit 602 may further be configured to extract the above characteristic regions based on key point detection.
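Key-point based extraction can be sketched as deriving crop boxes from detected facial landmarks. The landmark names and padding below are illustrative assumptions; a real system would obtain the points from a trained landmark detector (e.g., a 68-point scheme) and tune the boxes per region:

```python
def region_boxes(landmarks, pad=12):
    """Derive (x0, y0, x1, y1) crop boxes for the pouch, canthus and
    triangle areas from (x, y) facial key points. Keys are hypothetical."""
    def box(points):
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)

    return {
        "pouch": box([landmarks["eye_bottom_l"], landmarks["eye_bottom_r"]]),
        "canthus": box([landmarks["eye_outer_l"], landmarks["eye_outer_r"]]),
        "triangle": box([landmarks["nose_wing_l"], landmarks["mouth_corner_l"]]),
    }
```

Each box would then be cropped from the face image and fed to the corresponding submodel.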
In some optional implementations of this embodiment, the skin attribute detection model includes a pouch area skin attribute detection submodel, a canthus area skin attribute detection submodel and a triangle area skin attribute detection submodel; and the determination unit 603 is further configured to: input the pouch area, the canthus area and the triangle area respectively into the pouch area skin attribute detection submodel, the canthus area skin attribute detection submodel and the triangle area skin attribute detection submodel; determine the probability value of each of the pouch area, the canthus area and the triangle area on each preset skin attribute among multiple preset skin attributes; and determine the skin attribute information corresponding to the pouch area, the canthus area and the triangle area according to the probability values.
In some optional implementations of this embodiment, the determination unit 603 is further configured to: for each preset skin attribute among the multiple preset skin attributes, perform the following weighted-sum operation: calculate the weighted sum of the probability values of the pouch area, the canthus area and the triangle area on that preset skin attribute as the probability value of the user's face on that preset skin attribute; and determine the skin attribute information of the user's face according to the probability values of the user's face on the preset skin attributes.
In some optional implementations of this embodiment, the above skin attribute detection model may be a convolutional neural network model.
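The convolutional-neural-network detection model can be sketched in miniature as one convolution layer, ReLU, global average pooling and a softmax head over the seven preset attributes. This is an illustrative toy with random weights, not the patent's architecture; only the shape of the computation (conv features → seven-class distribution) is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def skin_attribute_model(region, kernels, w_out):
    """Tiny CNN: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d(region, k), 0).mean() for k in kernels])
    return softmax(w_out @ feats)

# Illustrative random parameters: four 3x3 kernels, seven output attributes.
kernels = rng.standard_normal((4, 3, 3))
w_out = rng.standard_normal((7, 4))
probs = skin_attribute_model(rng.standard_normal((16, 16)), kernels, w_out)
# probs is a probability distribution over the seven preset skin attributes
```

Each region submodel would be one such network, returning the per-attribute probability values used in step 503.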
In some optional implementations of this embodiment, the information acquisition apparatus 600 may further include a training unit (not shown in the figure). The training unit may be configured to train the skin attribute detection model, before the acquiring unit 601 obtains the face image of the user, using multiple images in which characteristic regions have been divided and annotated with skin attributes. Through training with these annotated images, the parameters of the model are adjusted continuously so that the output value of the skin attribute detection model approaches the annotated value.
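"Adjusting the parameters so the output approaches the annotation" is, concretely, iterative gradient descent on a loss between model output and label. The sketch below does this for a linear softmax classifier over flattened region crops with synthetic data (dimensions, data and learning rate are all illustrative assumptions, not the patent's training setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n_attrs, dim = 7, 64                # seven preset attributes; flattened crop size
X = rng.standard_normal((40, dim))  # 40 annotated characteristic-region crops
y = rng.integers(0, n_attrs, 40)    # skin attribute annotation for each crop

W = np.zeros((n_attrs, dim))        # model parameters to be adjusted

def loss_and_grad(W):
    """Cross-entropy between softmax output and annotations, plus gradient."""
    z = X @ W.T
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    p[np.arange(len(y)), y] -= 1.0
    return loss, (p.T @ X) / len(y)

losses = []
for _ in range(300):
    loss, g = loss_and_grad(W)
    losses.append(loss)
    W -= 0.1 * g  # adjust parameters so the output approaches the labels

# losses decreases over iterations: the output moves toward the annotations
```

A CNN submodel is trained the same way, with backpropagation supplying the gradient.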
Referring now to Fig. 7, a structural diagram of a computer system 700 suitable for implementing the server of the embodiments of the present application is shown. The server shown in Fig. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 7, the computer system 700 includes a central processing unit (CPU, Central Processing Unit) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM, Read Only Memory) 702 or a program loaded from a storage portion 706 into a random access memory (RAM, Random Access Memory) 703. The RAM 703 also stores various programs and data required for the operation of the system 700. The CPU 701, the ROM 702 and the RAM 703 are connected to one another through a bus 704. An input/output (I/O, Input/Output) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: a storage portion 706 including a hard disk and the like; and a communication portion 707 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 707 performs communication processing via a network such as the Internet. A driver 708 is also connected to the I/O interface 705 as needed. A removable medium 709, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 708 as needed, so that a computer program read therefrom can be installed into the storage portion 706 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flow charts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flow charts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 707, and/or installed from the removable medium 709. When the computer program is executed by the central processing unit (CPU) 701, the above functions defined in the methods of the present application are performed.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in combination with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The flow charts and block diagrams in the accompanying drawings illustrate the possible architecture, functions and operations of the systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logic functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit, an extraction unit and a determination unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining the face image of a user".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist independently without being assembled into the apparatus. The above computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: obtain the face image of a user; extract at least one characteristic region of the face image; and input the characteristic region into a pre-trained skin attribute detection model and determine the skin attribute information of the user's face, the skin attribute information being used to indicate a skin attribute.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the above inventive concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (14)
1. An information acquisition method, comprising:
obtaining a face image of a user;
extracting at least one characteristic region of the face image;
inputting the characteristic region into a pre-trained skin attribute detection model, and determining skin attribute information of the user's face, the skin attribute information being used to indicate a skin attribute.
2. The method according to claim 1, wherein the characteristic region comprises at least one of:
a pouch area, a canthus area and a triangle area; and
the extracting at least one characteristic region of the face image comprises:
extracting the at least one characteristic region based on key point detection.
3. The method according to claim 2, wherein the skin attribute detection model comprises a pouch area skin attribute detection submodel, a canthus area skin attribute detection submodel and a triangle area skin attribute detection submodel; and
the inputting the characteristic region into a pre-trained skin attribute detection model and determining skin attribute information of the user comprises:
inputting the pouch area, the canthus area and the triangle area respectively into the pouch area skin attribute detection submodel, the canthus area skin attribute detection submodel and the triangle area skin attribute detection submodel, and determining a probability value of each of the pouch area, the canthus area and the triangle area on each preset skin attribute among multiple preset skin attributes;
determining skin attribute information corresponding to the pouch area, the canthus area and the triangle area according to the probability values.
4. The method according to claim 3, wherein the determining skin attribute information of the user further comprises:
for each preset skin attribute among the multiple preset skin attributes, performing the following weighted-sum operation: calculating a weighted sum of the probability values of the pouch area, the canthus area and the triangle area on that preset skin attribute as a probability value of the user's face on that preset skin attribute;
determining the skin attribute information of the user's face according to the probability values of the user's face on the preset skin attributes.
5. The method according to claim 1, wherein the skin attribute detection model is a convolutional neural network model.
6. The method according to claim 5, wherein before the obtaining a face image of a user, the method further comprises:
training the skin attribute detection model using multiple images in which characteristic regions have been divided and annotated with skin attributes.
7. An information acquisition apparatus, comprising:
an acquiring unit, configured to obtain a face image of a user;
an extraction unit, configured to extract at least one characteristic region of the face image;
a determination unit, configured to input the characteristic region into a pre-trained skin attribute detection model and determine skin attribute information of the user's face, the skin attribute information being used to indicate a skin attribute.
8. The apparatus according to claim 7, wherein the characteristic region comprises at least one of:
a pouch area, a canthus area and a triangle area; and
the extraction unit is further configured to extract the at least one characteristic region based on key point detection.
9. The apparatus according to claim 8, wherein the skin attribute detection model comprises a pouch area skin attribute detection submodel, a canthus area skin attribute detection submodel and a triangle area skin attribute detection submodel; and
the determination unit is further configured to:
input the pouch area, the canthus area and the triangle area respectively into the pouch area skin attribute detection submodel, the canthus area skin attribute detection submodel and the triangle area skin attribute detection submodel, and determine a probability value of each of the pouch area, the canthus area and the triangle area on each preset skin attribute among multiple preset skin attributes;
determine skin attribute information corresponding to the pouch area, the canthus area and the triangle area according to the probability values.
10. The apparatus according to claim 9, wherein the determination unit is further configured to:
for each preset skin attribute among the multiple preset skin attributes, perform the following weighted-sum operation: calculate a weighted sum of the probability values of the pouch area, the canthus area and the triangle area on that preset skin attribute as a probability value of the user's face on that preset skin attribute;
determine the skin attribute information of the user's face according to the probability values of the user's face on the preset skin attributes.
11. The apparatus according to claim 7, wherein the skin attribute detection model is a convolutional neural network model.
12. The apparatus according to claim 11, wherein the apparatus further comprises a training unit, and the training unit is configured to: before the acquiring unit obtains the face image of the user, train the skin attribute detection model using multiple images in which characteristic regions have been divided and annotated with skin attributes.
13. A server, comprising:
one or more processors;
a storage device, for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-6.
14. A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the method according to any one of claims 1-6 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810046227.8A CN108171208A (en) | 2018-01-17 | 2018-01-17 | Information acquisition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108171208A true CN108171208A (en) | 2018-06-15 |
Family
ID=62514658
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810046227.8A Pending CN108171208A (en) | 2018-01-17 | 2018-01-17 | Information acquisition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108171208A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102968643A (en) * | 2012-11-16 | 2013-03-13 | 华中科技大学 | Multi-mode emotion recognition method based on Lie group theory |
CN104299011A (en) * | 2014-10-13 | 2015-01-21 | 吴亮 | Skin type and skin problem identification and detection method based on facial image identification |
US20160140383A1 (en) * | 2014-11-19 | 2016-05-19 | Samsung Electronics Co., Ltd. | Method and apparatus for extracting facial feature, and method and apparatus for facial recognition |
CN107437073A (en) * | 2017-07-19 | 2017-12-05 | 竹间智能科技(上海)有限公司 | Face skin quality analysis method and system based on deep learning with generation confrontation networking |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110634381A (en) * | 2018-06-25 | 2019-12-31 | 百度在线网络技术(北京)有限公司 | Method and device for determining labeling style |
CN109008970A (en) * | 2018-07-19 | 2018-12-18 | 上海试美网络科技有限公司 | A kind of skin detection instrument with artificial intelligence |
CN109919029A (en) * | 2019-01-31 | 2019-06-21 | 深圳和而泰数据资源与云技术有限公司 | Black eye kind identification method, device, computer equipment and storage medium |
CN110163171A (en) * | 2019-05-27 | 2019-08-23 | 北京字节跳动网络技术有限公司 | The method and apparatus of face character for identification |
CN110163171B (en) * | 2019-05-27 | 2020-07-31 | 北京字节跳动网络技术有限公司 | Method and device for recognizing human face attributes |
WO2023152007A1 (en) * | 2022-02-14 | 2023-08-17 | Mvm Enterprises Gmbh | System and method for monitoring human skin |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108197532B (en) | The method, apparatus and computer installation of recognition of face | |
CN108171208A (en) | Information acquisition method and device | |
CN109816441B (en) | Policy pushing method, system and related device | |
CN108171207A (en) | Face identification method and device based on video sequence | |
CN109508681A (en) | The method and apparatus for generating human body critical point detection model | |
CN107644209A (en) | Method for detecting human face and device | |
CN108197592B (en) | Information acquisition method and device | |
CN109214343A (en) | Method and apparatus for generating face critical point detection model | |
CN109753928A (en) | The recognition methods of architecture against regulations object and device | |
CN108921159A (en) | Method and apparatus for detecting the wear condition of safety cap | |
CN108830235A (en) | Method and apparatus for generating information | |
CN108280477A (en) | Method and apparatus for clustering image | |
CN108494778A (en) | Identity identifying method and device | |
CN109308681A (en) | Image processing method and device | |
CN108197618A (en) | For generating the method and apparatus of Face datection model | |
CN109308490A (en) | Method and apparatus for generating information | |
CN108269254A (en) | Image quality measure method and apparatus | |
CN109389589A (en) | Method and apparatus for statistical number of person | |
CN110348511A (en) | A kind of picture reproduction detection method, system and electronic equipment | |
CN108229344A (en) | Image processing method and device, electronic equipment, computer program and storage medium | |
CN108229485A (en) | For testing the method and apparatus of user interface | |
CN108133197B (en) | Method and apparatus for generating information | |
CN109241934A (en) | Method and apparatus for generating information | |
CN108171204A (en) | Detection method and device | |
CN107729928A (en) | Information acquisition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180615 ||