CN107123027A - Deep-learning-based cosmetics recommendation method and system - Google Patents
Deep-learning-based cosmetics recommendation method and system - Download PDF / Info
- Publication number
- CN107123027A CN107123027A CN201710294695.2A CN201710294695A CN107123027A CN 107123027 A CN107123027 A CN 107123027A CN 201710294695 A CN201710294695 A CN 201710294695A CN 107123027 A CN107123027 A CN 107123027A
- Authority
- CN
- China
- Prior art keywords
- detected
- image
- feature
- information
- cosmetics
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a deep-learning-based cosmetics recommendation method and system. The method includes: capturing a face image of a user to be detected as an image to be detected; extracting image features from the image to be detected and using them as the input of a pre-trained first neural network to obtain a feature description corresponding to the image features; and using information to be detected, comprising the feature description and pre-acquired cosmetics information, as the input of a second neural network pre-trained with a deep-learning method, to obtain matching cosmetics information as recommended cosmetics information. Because the recommended cosmetics information obtained in the technical scheme disclosed in the present application is derived from the feature description of the face image of the user to be detected, the characteristics of the recommended cosmetics can match the facial characteristics of that user; in other words, the recommended cosmetics are suitable for the user to be detected, enabling users to effectively find cosmetics that suit them.
Description
Technical field
The present invention relates to the technical field of pattern recognition, and more specifically to a deep-learning-based cosmetics recommendation method and system.
Background art
Nowadays cosmetics are increasingly diverse, and product quality on the market is mixed, which leaves people struggling to choose suitable products.
Popular cosmetics applications currently on the market typically select suitable products for a user as follows: evaluations are collected from different users after they have tried cosmetics; a new user then looks for other users whose circumstances (skin type, age, and so on) are close to their own, reads those users' evaluations of specific products and, on that basis, chooses cosmetics that seem suitable, or discovers products they may need from what those similar users have bought. However, every individual's circumstances differ to some degree, so cosmetics found in this way may still not suit the user.
In summary, how to enable users to effectively find cosmetics that suit them is an urgent problem for those skilled in the art.
Summary of the invention
An object of the present invention is to provide a deep-learning-based cosmetics recommendation method and system that enable users to effectively find cosmetics that suit them.
To achieve this object, the present invention provides the following technical scheme:
A deep-learning-based cosmetics recommendation method, comprising:
capturing a face image of a user to be detected as an image to be detected;
extracting image features from the image to be detected, and using the image features as the input of a pre-trained first neural network to obtain a feature description corresponding to the image features;
using information to be detected, comprising the feature description and pre-acquired cosmetics information, as the input of a second neural network pre-trained with a deep-learning method, to obtain matching cosmetics information as recommended cosmetics information.
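For orientation, the three claimed steps can be sketched end to end. Everything below is an illustrative assumption rather than part of the claim: the function names, the threshold-based stand-in for the first network, and the word-overlap stand-in for the second network.

```python
import numpy as np

def extract_image_features(image):
    """Stands in for AAM + triangle-area + Gabor feature extraction (hypothetical)."""
    return image.astype(float).ravel()

def first_network(features):
    """Stands in for the pre-trained first network (features -> text description)."""
    return "smooth skin round face" if features.mean() > 0.5 else "dry skin thin face"

def second_network(description, cosmetics_info):
    """Stands in for the second network: return the best-matching product entry."""
    words = set(description.split())
    return max(cosmetics_info, key=lambda c: len(words & set(c.split())))

image = np.ones((4, 4))                       # step 1: the captured face image
desc = first_network(extract_image_features(image))
products = ["hydrating cream for dry skin",
            "light lotion for smooth skin and round face"]
print(second_network(desc, products))
```

In the patent both mappings are learned networks; here they are replaced by trivial rules purely to show the data flow: image, features, description, recommendation.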
Preferably, extracting the image features of the image to be detected comprises:
detecting face feature points in the image to be detected based on an AAM, and extracting corresponding geometric features from the image to be detected based on the face feature points using feature-triangle areas;
extracting corresponding texture features from the image to be detected based on the geometric features using a triangle-centre-point-sampling Gabor texture feature extraction method;
performing feature selection on the geometric features and the texture features using a Wrapper method, to obtain optimal features as the image features.
Preferably, before performing feature selection on the geometric features and the texture features using the Wrapper method, the method further comprises:
performing dimensionality reduction on the extracted geometric features and texture features using PCA.
Preferably, before extracting the image features of the image to be detected, the method further comprises:
pre-processing the image to be detected, including a face-image enhancement operation and a normalisation operation.
Preferably, capturing the face image of the user to be detected as the image to be detected comprises:
photographing the face of the user to be detected from different angles to obtain corresponding images to be detected.
Preferably, before using the information to be detected as the input of the second neural network, the method further comprises:
acquiring environment information of the location of the user to be detected, and adding the environment information to the information to be detected.
Preferably, before using the information to be detected as the input of the second neural network, the method further comprises:
acquiring personal information entered by the user to be detected, and adding the personal information to the information to be detected.
Preferably, before using the information to be detected as the input of the second neural network, the method further comprises:
acquiring skin-care advisory information and adding it to the information to be detected;
correspondingly, after using the information to be detected as the input of the second neural network, the method further comprises:
obtaining skin-care advisory information corresponding to the feature description, the user's personal information and the environment information as a recommended skin-care suggestion.
A deep-learning-based cosmetics recommendation system, comprising:
an acquisition module for capturing a face image of a user to be detected as an image to be detected;
an extraction module for extracting image features from the image to be detected and using the image features as the input of a pre-trained first neural network, to obtain a feature description corresponding to the image features;
a recommendation module for using information to be detected, comprising the feature description and pre-acquired cosmetics information, as the input of a second neural network pre-trained with a deep-learning method, to obtain cosmetics information matching the feature description as recommended cosmetics information.
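A minimal sketch of how the three claimed modules could be wired together. The class names, the averaging rule in the extraction module and the word-overlap rule in the recommendation module are all assumptions standing in for the trained networks.

```python
class AcquisitionModule:
    """Captures the face image of the user to be detected (here: a stub)."""
    def acquire(self):
        return [[0.6, 0.8], [0.7, 0.9]]   # stand-in for a captured face image

class ExtractionModule:
    """Extracts image features and runs the first network to get a description."""
    def describe(self, image):
        mean = sum(sum(row) for row in image) / 4.0
        return "smooth skin" if mean > 0.5 else "dry skin"

class RecommendationModule:
    """Runs the second network: match the description against product info."""
    def recommend(self, description, products):
        words = set(description.split())
        return max(products, key=lambda p: len(words & set(p.split())))

system = (AcquisitionModule(), ExtractionModule(), RecommendationModule())
image = system[0].acquire()
desc = system[1].describe(image)
print(system[2].recommend(desc, ["cream for dry skin", "lotion for smooth skin"]))
```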
Preferably, the extraction module comprises:
an extraction unit for detecting face feature points in the image to be detected based on an AAM; extracting corresponding geometric features from the image to be detected based on the face feature points using feature-triangle areas; extracting corresponding texture features from the image to be detected based on the geometric features using the triangle-centre-point-sampling Gabor texture feature extraction method; and performing feature selection on the geometric features and the texture features using a Wrapper method, to obtain optimal features as the image features.
The invention thus provides a deep-learning-based cosmetics recommendation method and system, the method including: capturing a face image of the user to be detected as an image to be detected; extracting image features from the image to be detected and using them as the input of a pre-trained first neural network to obtain a corresponding feature description; and using information to be detected, comprising the feature description and pre-acquired cosmetics information, as the input of a second neural network pre-trained with a deep-learning method, to obtain matching cosmetics information as recommended cosmetics information. In the technical scheme disclosed in the present application, the face image of the user for whom cosmetics are to be recommended is acquired as the image to be detected; the feature description corresponding to the acquired image and the cosmetics information are then used as the input of the second neural network, and the cosmetics information matching the feature description is obtained as the recommended cosmetics information. Because this recommendation is derived from the feature description of the face image of the user to be detected, the characteristics of the recommended cosmetics can match the user's facial characteristics; in other words, the recommended cosmetics are suitable for the user to be detected, enabling users to effectively find cosmetics that suit them.
Brief description of the drawings
To explain the embodiments of the present invention or the technical schemes in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the invention; those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flow chart of a deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention;
Fig. 2 is an overall block diagram of the AAM system in the method;
Fig. 3 is a schematic diagram of the face feature points detected by the AAM;
Fig. 4 is a schematic diagram of the geometric features extracted using feature-triangle areas;
Fig. 5 is a schematic diagram of the texture features extracted using the triangle-centre-point-sampling Gabor texture feature extraction method;
Fig. 6 is a schematic diagram of the Wrapper feature selection;
Fig. 7 is a structural schematic diagram of a deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention.
Embodiments
The technical schemes in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the invention.
Referring to Fig. 1, which shows a flow chart of a deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, the method may comprise:
S11: capture a face image of the user to be detected as the image to be detected.
The user to be detected is the user for whom cosmetics are to be recommended; the cosmetics may include skin-care products and colour cosmetics, and the application can handle either or both. Capturing the face image of the user to be detected means obtaining an image of the face of the user to be detected, on the basis of which suitable cosmetics can be recommended for that user.
S12: extract image features from the image to be detected, and use the image features as the input of the pre-trained first neural network to obtain a feature description corresponding to the image features.
The image features of the image to be detected are features that characterise that image. The first neural network can be trained in advance; specifically, its training process may include: obtaining the image features of many different users, annotating the different image features with corresponding feature descriptions, and then carrying out supervised training of a feedforward network on the annotated image features, mainly using the BP algorithm, to obtain the first neural network. It should be noted that the feature description is generally text, usually at phrase level. Alternatively, the function of the first neural network in the embodiment of the invention can be realised by a Multimodal CNN (multi-modal convolutional neural network) model, which contains an image CNN to describe the image information and a matching CNN that, on the one hand, captures the word-level semantic structure of the text and, more importantly, learns the matching relationship between image and text. The input of the Multimodal CNN model can be the captured face picture after simple pre-processing (consistent in principle with the normalisation operations described later in this application), and the output is the text corresponding to the picture, namely its feature description. The training process of the Multimodal CNN model is, in principle, the same as that of the first neural network: in short, supervised training with pictures annotated with text, which is not repeated here. The feature description can be configured according to actual needs: the first neural network can be set to output a textual description, or to output an image signal; both fall within the scope of protection of the invention.
S13: use the information to be detected, comprising the feature description and the pre-acquired cosmetics information, as the input of the second neural network pre-trained with the deep-learning method, to obtain matching cosmetics information as the recommended cosmetics information.
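The supervised BP (back-propagation) training the description mentions can be sketched with a tiny numpy feedforward network on toy data. The dataset, the labelling rule and the 4-8-2 architecture are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-dim "image features" annotated with one of 2 "feature descriptions".
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical annotation rule
T = np.eye(2)[y]                          # one-hot targets

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)              # hidden layer
    Z = H @ W2 + b2
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return H, E / E.sum(axis=1, keepdims=True)   # softmax class probabilities

for _ in range(1000):                     # plain batch back-propagation
    H, P = forward(X)
    dZ = (P - T) / len(X)                 # softmax + cross-entropy gradient
    dW2 = H.T @ dZ; db2 = dZ.sum(axis=0)
    dH = (dZ @ W2.T) * (1 - H ** 2)       # tanh derivative
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1

acc = float((forward(X)[1].argmax(axis=1) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

The patent's first network maps features to phrase-level text rather than to two classes; the sketch only shows the mechanics of BP training on annotated features.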
With the information to be detected as the input of the second neural network, cosmetics information matching the feature description can be output as the recommended cosmetics information. This information matches the feature description, and hence the face image of the user to be detected, so the corresponding cosmetics are suitable for that user. The second neural network can be trained in advance; specifically, its training process may include: obtaining the feature descriptions corresponding to the face images of many different users, determining the cosmetics information matching each feature description, and then training the second neural network with a deep-learning method based on the feature descriptions, the cosmetics information and the corresponding matching relationships. The second neural network can be realised with a parallel matching architecture, a kind of DCNN-based semantic matching architecture. In this architecture, two sentences are first input to two CNN sentence models respectively to obtain their semantic representations (real-valued vectors); the two representations are then input to a multi-layer neural network that judges their degree of semantic match, and hence whether the two given sentences can form a matching pair (such as a question-answer pair). This is the basic idea of the DCNN-based parallel semantic matching model; given a large amount of paired data, such a model can be trained. Applied in the present invention, the model derives the recommendation result from the degree of match between the feature description and the cosmetics information, namely the cosmetics information with the highest degree of match to the input feature description.
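The parallel matching idea can be caricatured with bag-of-words vectors standing in for the CNN sentence models and cosine similarity standing in for the learned matching network; the vocabulary and the product catalogue below are invented for the example.

```python
import math

VOCAB = ["dry", "oily", "smooth", "wrinkle", "spot", "skin"]

def embed(text):
    """Stand-in for a CNN sentence model: bag-of-words over a tiny vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def match_score(a, b):
    """Stand-in for the learned matching network: cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

description = "dry skin with wrinkle"          # output of the first network
catalogue = {"moisturiser": "for dry skin and wrinkle",
             "oil control gel": "for oily skin",
             "whitening cream": "for spot on skin"}
best = max(catalogue,
           key=lambda k: match_score(embed(description), embed(catalogue[k])))
print(best)
```

In the trained model both the sentence representations and the matching function are learned jointly from paired data; the fixed embedding and cosine score are only placeholders showing where each learned component sits.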
The cosmetics information can include, for all online and offline cosmetics of different types that meet national quality standards, all ingredients, the content of each ingredient and its corresponding effect, usage instructions and so on. A database can be built from this information, and each product can then be tagged with labels according to the usage requirements and effects for different face parts, as shown in Tables 1 to 4.
Table 1
Table 2
Table 3
Table 4
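The contents of Tables 1 to 4 did not survive extraction; the structure they describe, a per-face-part label database, can nevertheless be illustrated with purely hypothetical entries.

```python
# Hypothetical label database; product names, parts and labels are invented.
cosmetics_db = [
    {"name": "product A", "part": "cheek", "labels": {"moisturising", "sensitive"}},
    {"name": "product B", "part": "eye",   "labels": {"anti-wrinkle"}},
    {"name": "product C", "part": "cheek", "labels": {"oil-control"}},
]

def lookup(part, wanted_labels):
    """Return products for a face part whose labels overlap the wanted set."""
    return [p["name"] for p in cosmetics_db
            if p["part"] == part and p["labels"] & wanted_labels]

print(lookup("cheek", {"moisturising"}))
```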
In the technical scheme disclosed in the present application, the face image of the user to be detected, for whom cosmetics are to be recommended, is acquired as the image to be detected; the feature description corresponding to the acquired image and the cosmetics information are then used as the input of the second neural network, and the cosmetics information matching the feature description is obtained as the recommended cosmetics information. Because this recommendation is derived from the feature description of the user's own face image, the characteristics of the recommended cosmetics can match the user's facial characteristics; in other words, the recommended cosmetics are suitable for the user to be detected, enabling users to effectively find cosmetics that suit them. In addition, the above technical scheme disclosed in the present application is easy to operate and well targeted, and can provide suitable cosmetics for each user.
In the deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, extracting the image features of the image to be detected may include:
S121: detect face feature points in the image to be detected based on the AAM, and extract corresponding geometric features from the image to be detected based on the face feature points using feature-triangle areas.
In this application, face feature points are located and detected using an active appearance model (AAM), whose general framework is shown in Fig. 2. Its basic idea is to combine the shape and texture information of the face into a dynamic appearance model, use the PCA method to describe the movement of the shape control points at characteristic feature-point positions, define an energy function from the sum of squared differences between the AAM model instance and the input image, and use that energy function as the evaluation function of the goodness of fit. During feature-point localisation, an efficient fitting algorithm varies the model parameters, thereby controlling the positions of the shape control points, minimising the energy function and finally locating the face feature points of the object. The feature points detected by the AAM method may be as shown in Fig. 3.
When extracting geometric features of the face using feature-triangle areas, the areas of the 68 triangles shown in Fig. 4, normalised by the total face area, can form a 68-dimensional area vector as the feature component. These 68 triangles correspond to the parts of the face that influence facial appearance, such as eye size, nose size, chin size and mouth size. The vertices of the 68 triangles come from 58 feature points; this geometric feature is called the feature-triangle-area feature, or triangle-area feature for short.
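The triangle-area feature can be sketched directly: triangle areas from landmark coordinates via the shoelace formula, normalised by the total face area. A real AAM fit would supply 58 landmarks and 68 triangles as described; the square of five landmarks and four triangles below is a toy stand-in.

```python
import numpy as np

def triangle_area(p, q, r):
    """Shoelace formula for the area of a triangle given 2-D vertices."""
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1])
                     - (r[0] - p[0]) * (q[1] - p[1]))

# Hypothetical landmark coordinates and triangle index triples.
landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
triangles = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]

areas = np.array([triangle_area(*landmarks[list(t)]) for t in triangles])
features = areas / areas.sum()   # normalise by the total face area
print(features)
```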
S122: extract corresponding texture features from the image to be detected based on the geometric features, using the triangle-centre-point-sampling Gabor texture feature extraction method.
As shown in Fig. 5, in the triangle-centre-point-sampling Gabor texture feature extraction method, the centre points of the 68 triangles formed by the different feature points are chosen as sampling points to extract the Gabor texture features of the face. This feature extraction method is called the triangle-centre-point-sampling Gabor texture feature extraction method, and the resulting feature is called the triangle-centre-point-sampling Gabor feature, abbreviated as Triangle Center Gabor.
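The sampling-point construction is simply the centroid of each feature triangle; a Gabor response map is then read off at those points. Landmarks, triangles and the response map below are toy stand-ins.

```python
import numpy as np

# Hypothetical landmarks and triangles (the text uses 58 landmarks, 68 triangles).
landmarks = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
triangles = [(0, 1, 2), (1, 2, 3)]

# Sampling points for the Gabor responses: the centroid of each triangle.
centres = np.array([landmarks[list(t)].mean(axis=0) for t in triangles])

response = np.arange(16).reshape(4, 4)   # stand-in for a |Gabor| response map
samples = [response[int(round(y)), int(round(x))] for x, y in centres]
print(centres, samples)
```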
S123: perform feature selection on the geometric features and the texture features using the Wrapper method, to obtain optimal features as the image features.
Some features in the feature set are irrelevant and some are redundant, and such irrelevant and redundant information can affect the performance of machine-learning algorithms, so these features need to be removed from the feature set; this is the feature-subset selection problem. The present invention uses a supervised Wrapper method, in which feature-subset selection is carried out with an induction algorithm treated as a black box, focusing on the interaction between the training set and the induction algorithm, as shown in Fig. 6. Because the criterion for eliminating features in the Wrapper method is optimising the evaluation metric of the classification algorithm, it generally performs better than methods such as Filter methods, and it can select the optimal feature subset from the feature set while taking the learning algorithm (induction algorithm) into account. Specifically, while searching for the optimal feature subset, the Wrapper method evaluates whether a candidate subset is optimal with the learning algorithm in the loop. The Wrapper feature selection algorithm mainly comprises the following three aspects.
(1) Feature-selection search: the search needs a state space (states space), an initial state (initial status), a termination condition (termination condition) and a search engine (search engine). The search space can be organised so that each state represents a feature subset: for n features, each state has n bits, each bit indicating whether the corresponding feature is retained (present, recorded as 1) or discarded (absent, recorded as 0). Adding or deleting one feature from a state counts as one operator. For n features the size of the search space is O(2^n); searching such a large space exhaustively is impractical unless n is very small. Therefore a best-first search mechanism can be chosen, searching forward from the empty feature set; depending on the search mechanism, the termination condition is five backtrackings.
(2) Feature evaluation: because the true discrimination after training is unknown, accuracy estimation is used as the heuristic function and the evaluation function. The accuracy can be estimated with cross-validation, repeated many times for each feature; the number of repetitions is determined by the standard deviation of the accuracy estimate.
(3) Learning algorithm (induction algorithm): the learning algorithm used in the feature-selection setting of the present invention is a regression support vector machine (SVM). The data the learning algorithm operates on is usually divided into two parts, one for training and one for testing. During training, different features are discarded from the feature set each time and the prediction accuracy of the regression SVM is computed; the feature subset with the highest prediction accuracy is selected as the optimal feature subset, which is then evaluated on independent test data.
Correspondingly, in this application the Wrapper method is used to perform feature selection on the obtained geometric features and texture features, and finally an optimal feature subset containing the optimal features is obtained as the image features, making the recommendation result more accurate.
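The wrapper idea can be sketched with a greedy forward search (a simplification of the best-first-with-backtracking search described above) and leave-one-out accuracy of a 1-nearest-neighbour learner standing in for the cross-validated regression SVM. The toy data, in which only features 0 and 1 carry the class signal, is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: features 0 and 1 carry the class signal, 2 and 3 are noise.
X = rng.normal(size=(40, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def loo_accuracy(cols):
    """Leave-one-out accuracy of a 1-NN learner (stand-in for the SVM)."""
    Z = X[:, cols]
    hits = 0
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                      # exclude the held-out sample itself
        hits += int(y[d.argmin()] == y[i])
    return hits / len(Z)

# Forward wrapper search: greedily add the feature that helps the learner most.
selected, best = [], 0.0
while len(selected) < X.shape[1]:
    scored = [(loo_accuracy(selected + [j]), j)
              for j in range(X.shape[1]) if j not in selected]
    score, j = max(scored)
    if score <= best:                      # stop when no candidate improves
        break
    selected.append(j)
    best = score
print(selected, round(best, 2))
```

The learner's accuracy, not any filter statistic, decides which features survive; that is the defining property of wrapper selection.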
In the above technical scheme disclosed in the present application, the AAM is used to locate and detect the face feature points, exploiting the energy-function optimisation idea of deformable models; it considers not only the face shape but also introduces the facial appearance to extend the representational capability of the model, achieving accurate face-feature-point localisation. The image features are obtained with the Wrapper method, which takes high classification accuracy as its criterion and searches for the feature set that achieves the highest classification accuracy; it is particularly suitable for finding the optimal feature subset for a specific algorithm, thereby ensuring the accuracy of the recommendation.
It should be noted that the feature used to characterise the face texture in this application is the Gabor feature. Viewed in the spatial domain, a Gabor filter can be considered a sinusoidal plane wave modulated by a Gaussian function. A simplified two-dimensional Gabor filter can be defined as

$$\psi(x, y, \omega, \theta) = \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{x'^{2} + y'^{2}}{2\sigma^{2}}\right) \exp(i\omega x')$$

$$x' = x\cos\theta + y\sin\theta, \qquad y' = -x\sin\theta + y\cos\theta$$

where (x, y) are the pixel coordinates in the spatial domain, ω (ω = 2πf) is the radial centre frequency (scale), θ is the direction of the Gabor filter (i.e. of the sinusoidal plane wave), and σ is the standard deviation of the Gaussian function along the x- and y-axes. The relation between the variance σ and the frequency ω can be expressed as

$$\sigma = \frac{\sqrt{2\ln 2}}{\omega} \cdot \frac{2^{\varphi} + 1}{2^{\varphi} - 1}$$

where φ is the bandwidth in octaves, typically taken as 1 to 1.5.

The Gabor feature of an image I(x, y) is the convolution of I(x, y) with the Gabor filter ψ(x, y, ω, θ):

$$O_{m,n}(x, y) = I(x, y) * \psi(x, y, \omega, \theta)$$

where * is the convolution operator. The convolution output of the image is complex-valued; its real part, imaginary part or modulus can generally be taken as the feature vector, and here, in this application, the modulus of the complex value can be taken to extract the Gabor texture features.
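The simplified filter can be built directly from the formulas above. The σ-ω relation uses the standard octave-bandwidth form, which is an assumption where the original equation was garbled; kernel size and parameter values are also illustrative.

```python
import numpy as np

def gabor_kernel(omega, theta, phi=1.0, size=15):
    """Simplified 2-D Gabor filter: Gaussian envelope times a complex plane wave.

    sigma follows sigma * omega = sqrt(2 ln 2) * (2**phi + 1) / (2**phi - 1),
    with phi the bandwidth in octaves (the text suggests 1 to 1.5).
    """
    sigma = np.sqrt(2 * np.log(2)) * (2 ** phi + 1) / ((2 ** phi - 1) * omega)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates x', y'
    yp = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xp ** 2 + yp ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return gauss * np.exp(1j * omega * xp)        # complex-valued kernel

k = gabor_kernel(omega=np.pi / 2, theta=0.0)
# The Gabor feature of an image is |I * psi|; the modulus of the kernel itself
# peaks at the centre, as the Gaussian envelope dictates.
print(k.shape, np.abs(k).max() == np.abs(k[7, 7]))
```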
The extracted geometric features are the representative features of the face, such as the three-dimensional structure of the facial skin and features, whether the skin is smooth and its colour, and whether wrinkles, scars, moles or pigmented spots are present on the face; all of these are important characteristics of a person's facial skin. A face does not exist as points, lines or planes but in three-dimensional form: the depth of the eyes, the length, width and depth of the nose, and the fullness of the cheeks all affect the three-dimensional appearance of the face. Therefore, features are needed that can express both the three-dimensional shape information and the skin information. The appearance features are exactly such features: they can capture the three-dimensional information reflected by the facial skin, shadow changes and light-dark contrast, thereby ensuring that the selected cosmetics are suitable for the user to be detected.
In addition, the collected image features can be used to build a face database belonging to the user, stored on the terminal and uploaded to the cloud, so that they can be obtained at any time.
In the deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, before feature selection is performed on the geometric features and texture features using the Wrapper method, the method may further include:
performing dimensionality reduction on the extracted geometric features and texture features using principal component analysis (PCA).
Because the obtained geometric and texture features may be of very high dimensionality, applying PCA dimensionality reduction to them makes subsequent processing easier.
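The PCA step described above can be sketched as follows (pure Python, power iteration with deflation on the covariance matrix; the sample feature vectors and the number of components are illustrative assumptions, not the patent's data):

```python
import math

def pca_reduce(data, k, iters=200):
    """Project rows of `data` onto the top-k principal components."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - mean[j] for j in range(d)] for row in data]        # center the data
    # sample covariance matrix (d x d)
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    comps = []
    for _ in range(k):
        v = [1.0] * d
        for _ in range(iters):                                        # power iteration
            w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / norm for x in w]
        lam = sum(v[a] * sum(C[a][b] * v[b] for b in range(d)) for a in range(d))
        comps.append(v)
        # deflation: subtract the found component so the next iteration finds the next one
        C = [[C[a][b] - lam * v[a] * v[b] for b in range(d)] for a in range(d)]
    # project each centered sample onto the retained components
    return [[sum(X[i][j] * comp[j] for j in range(d)) for comp in comps]
            for i in range(n)]

# four 3-dimensional feature vectors reduced to 1 dimension
features = [[2.0, 0.1, 1.9], [4.0, 0.2, 4.1], [6.0, 0.15, 5.8], [8.0, 0.1, 8.2]]
reduced = pca_reduce(features, k=1)
```

The dominant direction here lies along the first and third coordinates, so the one-dimensional projection preserves the ordering of the samples along that axis.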
In the deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, before the image features of the image to be detected are extracted, the method may further include:
preprocessing the image to be detected, including a facial-image enhancement operation and a normalization operation.
After the image to be detected is acquired, it is often noisy because of differences in the acquisition environment, such as the brightness of the illumination and the performance of the equipment. To ensure the accuracy of face detection, preprocessing is needed to remove irrelevant information from the image, filter out interference and noise, recover useful information, and enhance the detectability of the relevant information, while simplifying the data as much as possible, thereby improving the reliability of the extracted features. Image preprocessing in the present invention may mainly include a facial-image enhancement operation (i.e., denoising) and normalization operations (such as illumination normalization, size normalization, and rotation normalization). The facial-image enhancement operation improves the quality of the face image, making it visually clearer and more suitable for subsequent processing and recognition. Image enhancement techniques mainly include spatial-domain methods and frequency-domain methods: spatial-domain methods operate directly on image pixels in the spatial domain to remove or reduce noise, with mean filtering and median filtering as representative algorithms; frequency-domain methods first transform the image into the frequency domain, apply the appropriate operation to each spectral component, and finally obtain the desired result by an inverse frequency-domain transform. The goal of normalization is to obtain standardized face images of consistent size and identical gray-value range, to facilitate subsequent processing.
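The spatial-domain denoising and gray-value normalization mentioned above can be sketched as follows (a minimal illustration assuming a median filter for enhancement and linear min-max rescaling for gray normalization; not the patent's implementation):

```python
def median_filter(img, ksize=3):
    """Spatial-domain denoising: replace each pixel with the median of its neighborhood."""
    h, w, half = len(img), len(img[0]), ksize // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # neighborhood window, clipped at the image borders
            win = [img[y][x]
                   for y in range(max(0, i - half), min(h, i + half + 1))
                   for x in range(max(0, j - half), min(w, j + half + 1))]
            win.sort()
            out[i][j] = win[len(win) // 2]
    return out

def normalize_gray(img, lo=0.0, hi=1.0):
    """Gray-value normalization: linearly rescale pixel values into [lo, hi]."""
    flat = [v for row in img for v in row]
    mn, mx = min(flat), max(flat)
    scale = (hi - lo) / (mx - mn) if mx > mn else 0.0
    return [[lo + (v - mn) * scale for v in row] for row in img]

# an impulse-noise pixel (255) surrounded by uniform background (10)
noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
denoised = median_filter(noisy)
normalized = normalize_gray(denoised)
```

The median filter removes the isolated bright pixel while leaving the uniform background untouched, which is exactly the salt-and-pepper robustness that makes it a representative spatial-domain algorithm.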
In the deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, collecting the face image of the user to be detected as the image to be detected may include:
photographing the face of the user to be detected from different angles to obtain the corresponding images to be detected.
The obtained images to be detected may include a frontal photograph, a side photograph, and a downward-facing photograph of the user to be detected. To make the images accurate and reliable, the user to be detected should expose the forehead and face when being photographed, without any covering or makeup, thereby ensuring the accuracy of the cosmetics recommendation.
In the deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, before the information to be detected is used as the input of the second neural network, the method may further include:
obtaining environmental information about the location of the user to be detected, and adding the environmental information to the information to be detected.
External environmental information corresponding to the location of the user to be detected, such as the season and the weather (including temperature, humidity, wind direction, wind force, solar radiation intensity, ultraviolet intensity, etc.), can be obtained automatically from the cloud or from the current meteorological service; a database can also be built from it. When this information is included in the information to be detected, the recommended cosmetics information that is output also corresponds to the environmental information, which further ensures the applicability of the recommended cosmetics. Correspondingly, when training the second neural network, different environmental information and the correspondence between environmental information and cosmetics information also need to be added to the training data of the second neural network.
In the deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, before the information to be detected is used as the input of the second neural network, the method may further include:
obtaining personal information entered by the user to be detected, and adding the personal information to the information to be detected.
The user's personal information may include age, sex, personal preferences, allergies (to avoid allergy to certain cosmetic ingredients), self-assessed skin condition (oily/dry), the occasion to be attended, personal income, and price requirements for skin-care cosmetics; a database can also be built from it. When this information is included in the information to be detected, the recommended cosmetics information that is output also corresponds to the user's personal information, so that the recommended cosmetics better meet the user's requirements and are more personalized. Correspondingly, when training the second neural network, different personal information and the correspondence between personal information and cosmetics information also need to be added to the training data of the second neural network.
In the deep-learning-based cosmetics recommendation method provided by an embodiment of the present invention, before the information to be detected is used as the input of the second neural network, the method may further include:
obtaining skin-care advisory information, and adding the skin-care advisory information to the information to be detected;
correspondingly, after the information to be detected is used as the input of the second neural network, the method further includes:
obtaining the skin-care advisory information corresponding to the feature description, the user's personal information, and the environmental information as the recommended skin-care advice.
The skin-care advisory information may cover which skin-care measures should be taken for different skin types, in different periods, and at different times of day, including sun protection, eating habits, massage, exercise, and so on; a database can also be built from it. When this information is included in the information to be detected, the second neural network also outputs recommended skin-care advice, which benefits the user's daily skin care. Correspondingly, when training the second neural network, different skin-care advisory information and its correspondence with the feature description, the user's personal information, and the environmental information also need to be added to the training data of the second neural network.
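Assembling the information to be detected from the feature description and the optional environmental, personal, and skin-care fields described above might look like the following sketch (all field names are illustrative assumptions, not the patent's schema):

```python
def build_detection_info(feature_description, cosmetics_catalog,
                         environment=None, profile=None, skincare_advice=None):
    """Assemble the 'information to be detected' record that is fed to the
    second neural network. Optional fields are only attached when supplied."""
    info = {
        "feature_description": feature_description,
        "cosmetics_catalog": cosmetics_catalog,
    }
    if environment is not None:       # season, weather, humidity, UV intensity, ...
        info["environment"] = environment
    if profile is not None:           # age, sex, allergies, self-assessed skin, budget, ...
        info["user_profile"] = profile
    if skincare_advice is not None:   # candidate measures per skin type / time of day
        info["skincare_advice"] = skincare_advice
    return info

record = build_detection_info(
    feature_description={"skin": "dry", "wrinkles": "few"},
    cosmetics_catalog=["moisturizer-a", "sunscreen-b"],
    environment={"season": "winter", "uv_index": 2},
    profile={"age": 30, "allergies": ["alcohol"]},
)
```

During training, the same record shape would carry the correspondence between each optional field and the cosmetics information, as the embodiments above require.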
It can be seen that the technical solution disclosed in the present application can perform the duties of a beautician: the user can obtain professional skin-care advice without leaving home. Moreover, the cosmetics and skin-care advice recommended by the present application are tailor-made for the user, highly targeted, and highly applicable. In addition, the various cosmetics information, environmental information, and skin-care advisory information of the present invention can be loaded automatically from the cloud. The user enters personal information on first use; on each subsequent use, a live photograph of the user can be taken and the user can be asked to enter the occasion to be attended and similar requirements, so that the deep learning method can be trained on the large amount of collected data and provide the user with comprehensive, objective, efficient, and scientific decision references.
Parts of the technical solution provided by the embodiments of the present invention that are consistent with the corresponding principles in the prior art are not described in detail, to avoid excessive repetition.
An embodiment of the present invention further provides a deep-learning-based cosmetics recommendation system, which, as shown in FIG. 7, may include:
an acquisition module 11, configured to collect a face image of a user to be detected as an image to be detected;
an extraction module 12, configured to extract image features of the image to be detected and use the image features as the input of a pre-trained first neural network to obtain a feature description corresponding to the image features;
a recommendation module 13, configured to use the information to be detected, which includes the feature description and pre-obtained cosmetics information, as the input of a second neural network pre-trained by a deep learning method, to obtain the cosmetics information matching the feature description as the recommended cosmetics information.
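The module structure above can be sketched as a minimal pipeline (the two networks are stand-in callables and the feature extraction is a placeholder; everything here is an illustrative assumption, not the patent's implementation):

```python
class CosmeticsRecommender:
    """Acquisition -> extraction -> recommendation pipeline sketch."""

    def __init__(self, first_network, second_network):
        self.first_network = first_network      # image features -> feature description
        self.second_network = second_network    # detection info -> recommended cosmetics

    def acquire(self, camera):
        """Acquisition module: capture the face image of the user to be detected."""
        return camera()

    def extract(self, image):
        """Extraction module: placeholder feature vector fed to the first network."""
        features = [sum(row) / len(row) for row in image]   # stand-in for geometric/texture features
        return self.first_network(features)

    def recommend(self, description, cosmetics_catalog):
        """Recommendation module: feed the detection info to the second network."""
        return self.second_network({"feature_description": description,
                                    "cosmetics": cosmetics_catalog})

rec = CosmeticsRecommender(
    first_network=lambda f: "dry" if sum(f) / len(f) < 0.5 else "oily",
    second_network=lambda info: [c for c in info["cosmetics"]
                                 if info["feature_description"] in c],
)
image = rec.acquire(lambda: [[0.2, 0.3], [0.1, 0.4]])
desc = rec.extract(image)
picks = rec.recommend(desc, ["dry-skin-cream", "oily-skin-gel"])
```

Injecting the two networks as callables mirrors the module boundaries of the described system: each module can be swapped (e.g., for a real CNN) without touching the others.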
In the deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention, the extraction module may include:
an extraction unit, configured to detect the image to be detected based on AAM to obtain the facial feature points therein, and extract the corresponding geometric features from the image to be detected based on the facial feature points using feature triangle areas; extract the corresponding texture features from the image to be detected based on the geometric features using a triangle-center-point-sampling Gabor texture feature extraction method; and perform feature selection on the geometric features and texture features using the Wrapper method to obtain the corresponding optimal features as the image features.
The deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention may further include:
a dimensionality-reduction module, configured to perform dimensionality reduction on the extracted geometric features and texture features using PCA before the Wrapper method is used to perform feature selection on them.
The deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention may further include:
a preprocessing module, configured to preprocess the image to be detected, including a facial-image enhancement operation and a normalization operation, before the image features of the image to be detected are extracted.
In the deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention, the acquisition module may include:
a collection unit, configured to photograph the face of the user to be detected from different angles to obtain the corresponding images to be detected.
The deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention may further include:
a first obtaining module, configured to obtain environmental information about the location of the user to be detected before the information to be detected is used as the input of the second neural network, and add the environmental information to the information to be detected.
The deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention may further include:
a second obtaining module, configured to obtain personal information entered by the user to be detected before the information to be detected is used as the input of the second neural network, and add the personal information to the information to be detected.
The deep-learning-based cosmetics recommendation system provided by an embodiment of the present invention may further include:
a third obtaining module, configured to obtain skin-care advisory information before the information to be detected is used as the input of the second neural network, and add the skin-care advisory information to the information to be detected;
correspondingly, the system may further include:
an advice module, configured to obtain, after the information to be detected is used as the input of the second neural network, the skin-care advisory information corresponding to the feature description, the user's personal information, and the environmental information as the recommended skin-care advice.
For explanations of the relevant parts of the deep-learning-based cosmetics recommendation system provided by the embodiments of the present invention, please refer to the detailed description of the corresponding parts in the deep-learning-based cosmetics recommendation method provided by the embodiments of the present invention; they are not repeated here.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the generic principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A deep-learning-based cosmetics recommendation method, characterized by comprising:
collecting a face image of a user to be detected as an image to be detected;
extracting image features of the image to be detected, and using the image features as the input of a pre-trained first neural network to obtain a feature description corresponding to the image features;
using information to be detected, which comprises the feature description and pre-obtained cosmetics information, as the input of a second neural network pre-trained by a deep learning method, to obtain the corresponding cosmetics information as recommended cosmetics information.
2. The method according to claim 1, characterized in that extracting the image features of the image to be detected comprises:
detecting the image to be detected based on AAM to obtain the facial feature points therein, and extracting the corresponding geometric features from the image to be detected based on the facial feature points using feature triangle areas;
extracting the corresponding texture features from the image to be detected based on the geometric features using a triangle-center-point-sampling Gabor texture feature extraction method;
performing feature selection on the geometric features and the texture features using the Wrapper method to obtain the corresponding optimal features as the image features.
3. The method according to claim 2, characterized in that, before feature selection is performed on the geometric features and the texture features using the Wrapper method, the method further comprises:
performing dimensionality reduction on the extracted geometric features and texture features using PCA.
4. The method according to claim 2, characterized in that, before the image features of the image to be detected are extracted, the method further comprises:
preprocessing the image to be detected, including a facial-image enhancement operation and a normalization operation.
5. The method according to claim 4, characterized in that collecting the face image of the user to be detected as the image to be detected comprises:
photographing the face of the user to be detected from different angles to obtain the corresponding images to be detected.
6. The method according to claim 1, characterized in that, before the information to be detected is used as the input of the second neural network, the method further comprises:
obtaining environmental information about the location of the user to be detected, and adding the environmental information to the information to be detected.
7. The method according to claim 6, characterized in that, before the information to be detected is used as the input of the second neural network, the method further comprises:
obtaining personal information entered by the user to be detected, and adding the personal information to the information to be detected.
8. The method according to claim 7, characterized in that, before the information to be detected is used as the input of the second neural network, the method further comprises:
obtaining skin-care advisory information, and adding the skin-care advisory information to the information to be detected;
correspondingly, after the information to be detected is used as the input of the second neural network, the method further comprises:
obtaining the skin-care advisory information corresponding to the feature description, the personal information, and the environmental information as recommended skin-care advice.
9. A deep-learning-based cosmetics recommendation system, characterized by comprising:
an acquisition module, configured to collect a face image of a user to be detected as an image to be detected;
an extraction module, configured to extract image features of the image to be detected and use the image features as the input of a pre-trained first neural network to obtain a feature description corresponding to the image features;
a recommendation module, configured to use information to be detected, which comprises the feature description and pre-obtained cosmetics information, as the input of a second neural network pre-trained by a deep learning method, to obtain the cosmetics information matching the feature description as recommended cosmetics information.
10. The system according to claim 9, characterized in that the extraction module comprises:
an extraction unit, configured to detect the image to be detected based on AAM to obtain the facial feature points therein, and extract the corresponding geometric features from the image to be detected based on the facial feature points using feature triangle areas; extract the corresponding texture features from the image to be detected based on the geometric features using a triangle-center-point-sampling Gabor texture feature extraction method; and perform feature selection on the geometric features and the texture features using the Wrapper method to obtain the corresponding optimal features as the image features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710294695.2A CN107123027B (en) | 2017-04-28 | 2017-04-28 | Deep learning-based cosmetic recommendation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107123027A true CN107123027A (en) | 2017-09-01 |
CN107123027B CN107123027B (en) | 2021-06-01 |
Family
ID=59725947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710294695.2A Expired - Fee Related CN107123027B (en) | 2017-04-28 | 2017-04-28 | Deep learning-based cosmetic recommendation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107123027B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120011041A1 (en) * | 2010-07-06 | 2012-01-12 | Beydler Michael L | Post bankruptcy pattern and transaction detection and recovery apparatus and method |
CN103544506A (en) * | 2013-10-12 | 2014-01-29 | Tcl集团股份有限公司 | Method and device for classifying images on basis of convolutional neural network |
CN104217225A (en) * | 2014-09-02 | 2014-12-17 | 中国科学院自动化研究所 | A visual target detection and labeling method |
US20150325025A1 (en) * | 2011-08-11 | 2015-11-12 | Samsung Electronics Co., Ltd. | Image processing apparatus, method of processing image, and computer-readable storage medium |
CN105160312A (en) * | 2015-08-27 | 2015-12-16 | 南京信息工程大学 | Recommendation method for star face make up based on facial similarity match |
CN105183841A (en) * | 2015-09-06 | 2015-12-23 | 南京游族信息技术有限公司 | Recommendation method in combination with frequent item set and deep learning under big data environment |
CN105455522A (en) * | 2015-11-30 | 2016-04-06 | 深圳市欧蒙设计有限公司 | Intelligent cosmetic mirror |
CN106446782A (en) * | 2016-08-29 | 2017-02-22 | 北京小米移动软件有限公司 | Image identification method and device |
CN106529394A (en) * | 2016-09-19 | 2017-03-22 | 广东工业大学 | Indoor scene and object simultaneous recognition and modeling method |
CN106568783A (en) * | 2016-11-08 | 2017-04-19 | 广东工业大学 | Hardware part defect detecting system and method |
Non-Patent Citations (4)
Title |
---|
MICHAEL R. SMITH et al.: "A hybrid latent variable neural network model for item recommendation", 2015 International Joint Conference on Neural Networks (IJCNN) *
LIU Qihua: "Information Aggregation and Recommendation in the Ubiquitous Commerce Environment", Fudan University Press, 30 November 2014 *
LIU Yangtao et al.: "User behavior prediction method based on embedded vectors and recurrent neural networks", Modern Electronics Technique *
MAO Huiyun: "Feature Analysis and Machine Learning of Facial Beauty Attractiveness", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107944093A (en) * | 2017-11-02 | 2018-04-20 | 广东数相智能科技有限公司 | A kind of lipstick color matching system of selection, electronic equipment and storage medium |
CN109359317A (en) * | 2017-11-02 | 2019-02-19 | 广东数相智能科技有限公司 | A kind of lipstick is matched colors the model building method and lipstick color matching selection method of selection |
US11157985B2 (en) | 2017-11-29 | 2021-10-26 | Ditto Technologies, Inc. | Recommendation system, method and computer program product based on a user's physical features |
US12118602B2 (en) | 2017-11-29 | 2024-10-15 | Ditto Technologies, Inc. | Recommendation system, method and computer program product based on a user's physical features |
CN109840825A (en) * | 2017-11-29 | 2019-06-04 | 迪特技术公司 | The recommender system of physical features based on user |
CN108399619B (en) * | 2017-12-22 | 2021-12-24 | 联想(北京)有限公司 | System and device for medical diagnosis |
CN108399619A (en) * | 2017-12-22 | 2018-08-14 | 联想(北京)有限公司 | The system and device of medical diagnosis |
CN111868742A (en) * | 2018-01-05 | 2020-10-30 | 莱雅公司 | Machine implemented facial health and beauty aid |
CN108229415B (en) * | 2018-01-17 | 2020-12-22 | Oppo广东移动通信有限公司 | Information recommendation method and device, electronic equipment and computer-readable storage medium |
CN108229415A (en) * | 2018-01-17 | 2018-06-29 | 广东欧珀移动通信有限公司 | Information recommendation method, device, electronic equipment and computer readable storage medium |
CN109410313B (en) * | 2018-02-28 | 2023-03-24 | 南京恩瑞特实业有限公司 | Meteorological three-dimensional information 3D simulation inversion method |
CN109410313A (en) * | 2018-02-28 | 2019-03-01 | 南京恩瑞特实业有限公司 | A kind of meteorology three-dimensional information 3D simulation inversion method |
CN112889065A (en) * | 2018-10-25 | 2021-06-01 | 莱雅公司 | System and method for providing personalized product recommendations using deep learning |
CN114502061B (en) * | 2018-12-04 | 2024-05-28 | 巴黎欧莱雅 | Image-based automatic skin diagnosis using deep learning |
CN114502061A (en) * | 2018-12-04 | 2022-05-13 | 巴黎欧莱雅 | Image-based automatic skin diagnosis using deep learning |
CN109784281A (en) * | 2019-01-18 | 2019-05-21 | 深圳壹账通智能科技有限公司 | Products Show method, apparatus and computer equipment based on face characteristic |
CN110033344A (en) * | 2019-03-06 | 2019-07-19 | 百度在线网络技术(北京)有限公司 | Skin care item recommended method, device and storage medium neural network based |
CN110245590A (en) * | 2019-05-29 | 2019-09-17 | 广东技术师范大学 | A kind of Products Show method and system based on skin image detection |
CN110245590B (en) * | 2019-05-29 | 2023-04-28 | 广东技术师范大学 | Product recommendation method and system based on skin image detection |
CN110399560A (en) * | 2019-07-30 | 2019-11-01 | 厦门美图之家科技有限公司 | Skin care information recommendation method, device, equipment and storage medium |
CN112396573A (en) * | 2019-07-30 | 2021-02-23 | 纵横在线(广州)网络科技有限公司 | Facial skin analysis method and system based on image recognition |
CN111064766A (en) * | 2019-10-24 | 2020-04-24 | 青岛海尔科技有限公司 | Information pushing method and device based on Internet of things operating system and storage medium |
CN111414554A (en) * | 2020-03-26 | 2020-07-14 | 透明生活(武汉)信息科技有限公司 | Commodity recommendation method, system, server and storage medium |
CN111414554B (en) * | 2020-03-26 | 2023-08-22 | 透明生活(武汉)信息科技有限公司 | Commodity recommendation method, commodity recommendation system, server and storage medium |
CN113222712A (en) * | 2021-05-31 | 2021-08-06 | 中国银行股份有限公司 | Product recommendation method and device |
CN113538114B (en) * | 2021-09-13 | 2022-03-04 | 东莞市疾病预防控制中心 | Mask recommendation platform and method based on small programs |
CN113538114A (en) * | 2021-09-13 | 2021-10-22 | 东莞市疾病预防控制中心 | Mask recommendation platform and method based on small programs |
US20230401320A1 (en) * | 2022-06-10 | 2023-12-14 | Microsoft Technology Licensing, Llc | Generic feature extraction for identifying malicious packages |
CN117197541B (en) * | 2023-08-17 | 2024-04-30 | 广州兴趣岛信息科技有限公司 | User classification method and system based on convolutional neural network |
CN117197541A (en) * | 2023-08-17 | 2023-12-08 | 广州兴趣岛信息科技有限公司 | User classification method and system based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN107123027B (en) | 2021-06-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20210601 |