CN112800884A - Intelligent auxiliary method based on cosmetic mirror - Google Patents

Intelligent auxiliary method based on cosmetic mirror

Info

Publication number
CN112800884A
CN112800884A (application CN202110056906.5A; granted publication CN112800884B)
Authority
CN
China
Prior art keywords
image
user
feature information
facial
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110056906.5A
Other languages
Chinese (zh)
Other versions
CN112800884B (en)
Inventor
王星 (Wang Xing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinhai Chuangda Technology Co ltd
Original Assignee
Shenzhen Xinhai Chuangda Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xinhai Chuangda Technology Co ltd filed Critical Shenzhen Xinhai Chuangda Technology Co ltd
Priority to CN202110056906.5A priority Critical patent/CN112800884B/en
Priority claimed from CN202110056906.5A external-priority patent/CN112800884B/en
Publication of CN112800884A publication Critical patent/CN112800884A/en
Application granted granted Critical
Publication of CN112800884B publication Critical patent/CN112800884B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47G: HOUSEHOLD OR TABLE EQUIPMENT
    • A47G1/00: Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
    • A47G1/02: Mirrors used as equipment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image

Abstract

The invention provides an intelligent auxiliary method based on a cosmetic mirror, comprising the following steps: receiving a selection instruction from a user, analysing the instruction, and obtaining an analysis result; determining a working mode of the cosmetic mirror based on the analysis result; acquiring a facial image of the user and extracting feature information from the facial image; and, based on the working mode of the cosmetic mirror and the feature information of the user's facial image, recommending dressing and makeup images for the user to select from. Because the working mode of the cosmetic mirror is determined from the user's selection instruction, the mirror can present dressing and makeup images tailored to the feature information of the user's facial image, saving the user's makeup and dressing time and improving the efficiency of using the cosmetic mirror.

Description

Intelligent auxiliary method based on cosmetic mirror
Technical Field
The invention relates to the technical field of cosmetic mirrors, in particular to an intelligent auxiliary method based on a cosmetic mirror.
Background
At present, as living standards rise, more and more people pay attention to their personal appearance, and the cosmetic mirror has become a household necessity for improving one's appearance and presentation;
however, an ordinary cosmetic mirror cannot meet these needs: many cosmetic mirrors offer only an ordinary backlight, have a single function, and cannot accurately recommend dressing and makeup to the user. As a result, users waste a great deal of time on dressing and makeup, and the mirror is used inefficiently. The intelligent auxiliary method based on a cosmetic mirror is proposed to address this problem.
Disclosure of Invention
The invention provides an intelligent auxiliary method based on a cosmetic mirror, which accurately provides dressing and makeup images to the user according to the selected functional mode of the cosmetic mirror and the user's facial feature information.
An intelligent assistance method based on a cosmetic mirror, comprising:
receiving a selection instruction of a user, carrying out instruction analysis on the selection instruction, and acquiring an analysis result;
determining a working mode of the cosmetic mirror based on the analysis result;
acquiring a face image of the user, and extracting feature information of the face image;
and, based on the working mode of the cosmetic mirror and the feature information of the user's facial image, recommending dressing and makeup images for the user to select from.
Preferably, after the facial image of the user is acquired and before the feature information of the facial image is extracted, the method further includes:
determining the illumination intensity currently reflected from the cosmetic mirror surface based on the facial image, and storing the illumination intensity in a database;
substituting the illumination intensity into a regression equation to obtain a comprehensive illumination intensity value;
comparing the comprehensive illumination intensity value with an illumination threshold value prestored in the database;
when the illumination intensity is smaller than the preset illumination threshold value, controlling the illumination intensity of the cosmetic mirror surface to increase until the illumination intensity of the cosmetic mirror surface meets the preset illumination threshold value;
and when the illumination intensity is greater than the preset illumination threshold value, controlling the illumination intensity of the makeup mirror surface to be reduced until the illumination intensity of the makeup mirror surface meets the preset illumination threshold value.
Preferably, the intelligent auxiliary method based on a cosmetic mirror performs a specific work process of instruction analysis on the selection instruction, and includes:
locating feature data of the selection instruction, and classifying the feature data based on the type of the selection instruction;
establishing a data linked list based on the classification result of the characteristic data, and storing the selection instruction in a data linked list node;
acquiring node coordinates of the selection instruction in the data linked list nodes;
matching the byte suffixes corresponding to the node coordinates against a preset suffix dictionary linked list, and acquiring the maximum repeated suffix of the byte suffixes in the preset suffix dictionary linked list;
determining a prefix of the select instruction based on the suffix of the select instruction using the maximal repeat suffix as a suffix of the select instruction;
determining a key statement of the selection instruction based on a suffix and a prefix of the selection instruction;
and carrying out binary processing on the key statement of the selection instruction to obtain a final analysis result.
Preferably, the intelligent auxiliary method based on the cosmetic mirror determines a specific working process of a working mode of the cosmetic mirror based on the analysis result, and comprises the following steps:
acquiring target data corresponding to the analysis result and determining a mode data set corresponding to a working mode of the cosmetic mirror;
obtaining mode data keywords of all data in the mode data set;
numbering the data of the pattern data set based on the data keywords and a specific numbering rule to obtain a pattern data sequence;
acquiring a target data keyword based on the target data;
acquiring a corresponding relation between the target data keyword and the pattern data sequence based on the pattern data sequence;
and determining the working mode of the cosmetic mirror according to the corresponding relation.
Preferably, the method for intelligently assisting based on a cosmetic mirror, based on the pattern data sequence, acquiring a correspondence between the target data keyword and the pattern data sequence, includes:
acquiring target field information of the target data keywords and mode field information of the keywords of the mode data corresponding to the target data, wherein the target field information and the mode field information have a mapping relation;
and matching the target data and the pattern data based on the mapping relation and a preset matching rule, and if the matching is successful, binding the target data and the pattern data to acquire a corresponding relation.
Preferably, the work process of extracting the feature information of the facial image based on the intelligent auxiliary method of the cosmetic mirror comprises the following steps:
acquiring a facial image of a user, and constructing a training model;
wherein, the training model comprises a deep convolutional neural network;
inputting the facial image of the user into the training model, detecting the obtained facial image of the user, and judging whether the facial image of the user is a complete facial image;
if the facial image of the user is an incomplete image, the facial image of the user is obtained again;
otherwise, segmenting the facial image of the user according to a preset elliptical region to obtain N sub-images of the facial image of the user;
wherein N represents the number of the sub-images;
extracting sub-facial feature information in N sub-images of the facial image of the user to obtain a sub-facial feature information set of the facial image of the user;
acquiring a reference sub-facial feature information set of a preset target facial image, and extracting reference sub-facial feature information corresponding to the sub-facial feature information set of the facial image of the user from the reference sub-facial feature information set of the preset target facial image to serve as a reference sub-facial feature information set;
verifying the reference sub-facial feature information in the reference sub-facial feature information set and the sub-facial feature information in the sub-facial feature information set one by one;
if the reference sub-facial feature information in the reference sub-facial feature information set corresponds to the corresponding sub-facial feature information in the sub-facial feature information set one by one, performing fusion processing on the sub-facial feature information in the sub-facial feature information set to obtain the facial feature information of the facial image of the user;
otherwise, the facial image of the user is segmented again, and the sub-facial feature information in the segmented sub-images is extracted until the reference sub-facial feature information in the reference sub-facial feature information set corresponds to the corresponding sub-facial feature information in the sub-facial feature information set one by one.
Preferably, the intelligent auxiliary method based on the cosmetic mirror comprises the following working modes: a night mode and a day mode;
wherein the first working parameters, corresponding to the night working mode, include: displaying the weather and temperature for the next day, together with reference dressing and reference makeup images pushed by the cosmetic mirror according to that weather and temperature;
and the second working parameters, corresponding to the daytime working mode, include: displaying the weather and temperature for the current day, together with dressing and makeup images pushed by the cosmetic mirror according to that weather and temperature.
Preferably, the intelligent assisting method based on a cosmetic mirror further includes, after providing a wearing and makeup image for the user based on the working mode of the cosmetic mirror and the feature information of the facial image of the user:
the method comprises the steps of obtaining a face image of a user, carrying out filtering processing on the face image of the user, matching the face image subjected to filtering processing with a makeup image, calculating the matching degree, checking the calculated matching degree to determine the accuracy degree of a working mode of the makeup mirror, wherein the specific working process comprises the following steps:
performing Gaussian filtering processing on the face image according to a following filtering function formula to obtain a standard face image;
(Gaussian filtering function formula; given in the original only as image BDA0002901120740000051)
wherein x represents a pixel value of the facial image; σ_1 represents the high-frequency gain of the facial image during filtering; σ_2 represents the low-frequency gain of the facial image during Gaussian filtering; k represents the pixel reflection component of the facial image; f represents the filtering frequency during Gaussian filtering; μ represents the filtering coefficient, with value range (1.2×10^-2, 0.8×10^2); t represents the filtering time required during Gaussian filtering; and m represents the Gaussian variance;
graying the standard face image and the makeup image, matching the standard face image and the makeup image based on graying processing, and calculating the matching degree of the standard face image and the makeup image;
(Matching-degree formula; given in the original only as image BDA0002901120740000052)
wherein d represents the matching degree between the standard facial image and the makeup image; δ represents a matching factor, with value range (0.2×10^-3, 0.6×10^-2); i indexes the pixel points (the standard facial image and the makeup image have equal numbers of pixel points); x_i represents the pixel value of the i-th pixel point of the standard facial image; y_i represents the pixel value of the i-th pixel point of the makeup image; h_1 represents the gradient value of the standard facial image; h_2 represents the gradient value of the makeup image; g_1 represents the gray value of the standard facial image; g_2 represents the gray value of the makeup image; and ζ represents a matching factor, with value range (1.2×10^-6, 0.6×10^-6);
Verifying the matching degree of the standard face image and the makeup image, and determining the accuracy of the working mode of the makeup mirror according to a preset verification standard;
if the matching degree is between 10% and 40%, it does not meet the verification rule: the accuracy of the working mode of the cosmetic mirror is low, and a makeup image is provided to the user again;
if the matching degree is between 40% and 70%, it meets the verification rule: the accuracy of the working mode of the cosmetic mirror is medium, and the user either makes up according to the makeup image or is provided with a makeup image again;
if the matching degree is between 70% and 99%, it meets the verification rule: the accuracy of the working mode of the cosmetic mirror is high, and the makeup image is the final makeup image to use.
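The verification standard above can be sketched as a simple classifier. The patent's ranges share the 40% and 70% boundaries, so this sketch assumes half-open intervals; the function name `verify_matching_degree` is an illustrative choice, not from the patent:

```python
def verify_matching_degree(d):
    """Classify a computed matching degree d (a fraction in [0, 1])
    against the preset verification standard (half-open intervals assumed)."""
    if 0.10 <= d < 0.40:
        return "low accuracy: provide a new makeup image"
    if 0.40 <= d < 0.70:
        return "medium accuracy: use the image or request another"
    if 0.70 <= d <= 0.99:
        return "high accuracy: final makeup image"
    return "out of verification range"
```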
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an intelligent assistance method based on a cosmetic mirror according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the invention provides an intelligent auxiliary method based on a cosmetic mirror, which comprises the following steps of:
step 1: receiving a selection instruction of a user, carrying out instruction analysis on the selection instruction, and acquiring an analysis result;
step 2: determining a working mode of the cosmetic mirror based on the analysis result;
and step 3: acquiring a face image of the user, and extracting feature information of the face image;
and 4, step 4: and based on the working mode of the cosmetic mirror and the feature information of the facial image of the user, assisting in recommending wearing and making up images for the user to select.
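The four steps above can be sketched end to end as follows. All helper names (`parse_instruction`, `select_mode`, `extract_features`, `recommend`) and the toy feature dictionary are illustrative stand-ins, not part of the patent:

```python
def parse_instruction(raw):
    """Step 1: reduce a raw selection instruction to an analysis result (toy)."""
    return raw.strip().lower()

def select_mode(parsed):
    """Step 2: map the analysis result onto a working mode (night or day)."""
    return "night" if "night" in parsed else "day"

def extract_features(face_image):
    """Step 3: stand-in feature extraction (the patent uses a deep CNN)."""
    return {"eye_size": face_image.get("eye_size"),
            "forehead_width": face_image.get("forehead_width")}

def recommend(mode, features):
    """Step 4: recommend dressing/makeup images for the chosen mode.
    Night mode shows tomorrow's forecast, day mode shows today's."""
    return {"mode": mode,
            "forecast_for": "tomorrow" if mode == "night" else "today",
            "features": features}

recommendation = recommend(
    select_mode(parse_instruction("Night mode")),
    extract_features({"eye_size": 3, "forehead_width": 8}),
)
```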
In this embodiment, the operation modes of the cosmetic mirror include: a night mode and a day mode;
wherein the first working parameters, corresponding to the night working mode, include: displaying the weather and temperature for the next day, together with reference dressing and reference makeup images pushed by the cosmetic mirror according to that weather and temperature;
and the second working parameters, corresponding to the daytime working mode, include: displaying the weather and temperature for the current day, together with dressing and makeup images pushed by the cosmetic mirror according to that weather and temperature.
In this embodiment, the feature information may describe the user's face, such as the size of the eyes, the distance between the eyes, the width of the forehead, and the like.
The beneficial effects of the above technical scheme are:
the working mode of the cosmetic mirror is determined through the selection instruction of the user, so that the wearing and makeup images are provided for the user through the cosmetic mirror according to the feature information of the facial image of the user, the makeup and wearing time of the user is saved, and the using efficiency of the cosmetic mirror is improved.
Example 2:
on the basis of embodiment 1, the invention provides an intelligent assisting method based on a cosmetic mirror, which, after acquiring a facial image of a user and before extracting feature information of the facial image, further comprises:
determining the illumination intensity currently reflected from the cosmetic mirror surface based on the facial image, and storing the illumination intensity in a database;
substituting the illumination intensity into a regression equation to obtain a comprehensive illumination intensity value;
comparing the comprehensive illumination intensity value with an illumination threshold value prestored in the database;
when the illumination intensity is smaller than the preset illumination threshold value, controlling the illumination intensity of the cosmetic mirror surface to increase until the illumination intensity of the cosmetic mirror surface meets the preset illumination threshold value;
and when the illumination intensity is greater than the preset illumination threshold value, controlling the illumination intensity of the cosmetic mirror surface to decrease until the illumination intensity of the cosmetic mirror surface meets the preset illumination threshold value.
In this embodiment, the regression equation may be a linear regression equation, and the relationship between the face information and the illumination intensity is determined by using a regression analysis method, so as to obtain the comprehensive illumination intensity value.
In this embodiment, the integrated illumination intensity value is determined based on the external illumination intensity, the illumination intensity of the specular reflection, and the intensity of light received by the concave-convex part of the face, where the external illumination intensity may be the sun illumination or the indoor illumination intensity, or may be the combination of the sun illumination and the indoor illumination intensity.
In this embodiment, the illumination threshold may be the user's optimal illumination intensity obtained through machine learning.
The beneficial effects of the above technical scheme are:
the light intensity reflected by the cosmetic mirror surface is beneficial to accurately determining the comprehensive light intensity value, and the comprehensive light intensity value is adjusted to the light threshold value, so that the user can keep constant light intensity in using the cosmetic mirror, and good cosmetic experience is brought to the user.
Example 3:
on the basis of embodiment 1, the invention provides an intelligent auxiliary method based on a cosmetic mirror, which is used for carrying out a specific working process of instruction analysis on the selection instruction, and comprises the following steps:
locating feature data of the selection instruction, and classifying the feature data based on the type of the selection instruction;
establishing a data linked list based on the classification result of the characteristic data, and storing the selection instruction in a data linked list node;
acquiring node coordinates of the selection instruction in the data linked list nodes;
matching the byte suffixes corresponding to the node coordinates against a preset suffix dictionary linked list, and acquiring the maximum repeated suffix of the byte suffixes in the preset suffix dictionary linked list;
determining a prefix of the select instruction based on the suffix of the select instruction using the maximal repeat suffix as a suffix of the select instruction;
determining a key statement of the selection instruction based on a suffix and a prefix of the selection instruction;
and carrying out binary processing on the key sentence of the selection instruction to obtain a final analysis result which can be read by the cosmetic mirror.
In this embodiment, the feature data of the selection instruction includes: the instruction operation code field data and the address code field data of the instruction are selected.
In this embodiment, the instruction types include: data transfer instructions, fixed-point arithmetic operation instructions, control/branch instructions, etc.
In this embodiment, the data linked list may be non-contiguous and non-sequential in its physical storage structure; the logical order of the stored feature data is realized through the order of the links in the list.
In this embodiment, the maximum repeated suffix in the suffix dictionary linked list is obtained to determine the suffix of the selection instruction, so that the selection instruction key statement is more unique.
In this embodiment, the key sentence is binary processed to digitize and unify the key sentence, which is more favorable for accurate reading of the cosmetic mirror.
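A minimal sketch of the suffix-dictionary matching and binary processing described above. The suffix dictionary contents and function names are assumptions for illustration, and plain Python strings stand in for the patent's linked-list storage:

```python
# Hypothetical preset suffix dictionary (the patent stores this as a linked list).
SUFFIX_DICT = ["mode", "night mode", "day mode"]

def longest_matching_suffix(instruction, dictionary):
    """Return the longest dictionary entry that the instruction ends with
    (a sketch of the patent's 'maximum repeated suffix')."""
    matches = [s for s in dictionary if instruction.endswith(s)]
    return max(matches, key=len) if matches else ""

def key_statement(instruction, dictionary):
    """Split the instruction into prefix and suffix; together they
    determine the key statement of the selection instruction."""
    suffix = longest_matching_suffix(instruction, dictionary)
    prefix = instruction[: len(instruction) - len(suffix)].strip()
    return prefix, suffix

def to_binary(text):
    """Binary-encode the key statement so the mirror can read it uniformly."""
    return " ".join(format(b, "08b") for b in text.encode("utf-8"))
```

For example, `key_statement("switch to night mode", SUFFIX_DICT)` splits off the suffix `"night mode"` and leaves the prefix `"switch to"`.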
The beneficial effects of the above technical scheme are:
the method comprises the steps of positioning and classifying feature data of a selection instruction to accurately establish a data linked list, obtaining a prefix and a suffix of the selection instruction through the data linked list to lock a key sentence of the selection instruction, and performing binary processing on the key sentence to enable the cosmetic mirror to read more accurately to determine the function of the cosmetic mirror.
Example 4:
on the basis of embodiment 1, the invention provides an intelligent auxiliary method based on a cosmetic mirror, which determines a specific working process of a working mode of the cosmetic mirror based on an analysis result, and comprises the following steps:
acquiring target data corresponding to the analysis result and determining a mode data set corresponding to a working mode of the cosmetic mirror;
obtaining mode data keywords of all data in the mode data set;
numbering the data of the pattern data set based on the data keywords and a specific numbering rule to obtain a pattern data sequence;
acquiring a target data keyword based on the target data;
acquiring a corresponding relation between the target data keyword and the pattern data sequence based on the pattern data sequence;
and determining the working mode of the cosmetic mirror according to the corresponding relation.
In this embodiment, the mode data set may include, for example, the weather, the air temperature, the color and warmth of the clothing worn, and the lipstick number of the makeup; the weather may be encoded such that sunny is defined as 00, cloudy as 01, rain as 10, and snow as 11.
In this embodiment, the keywords of the pattern data may be data keywords only for climate, data keywords only for wearing, or data keywords only for makeup.
In this embodiment, the specific numbering rule may be determined according to the UTF8 encoding rule.
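A minimal sketch of numbering the mode data set and looking up the target keyword. The weather codes follow this embodiment, while the numbering rule (alphabetical here, UTF-8-based in the patent) and function names are illustrative assumptions:

```python
# 2-bit weather codes from the embodiment: sunny 00, cloudy 01, rain 10, snow 11.
WEATHER_CODES = {"sunny": "00", "cloudy": "01", "rain": "10", "snow": "11"}

def number_mode_data(mode_data):
    """Number each keyword in the mode data set to form a mode data sequence
    (alphabetical numbering assumed; the patent uses a UTF-8-based rule)."""
    return {kw: idx for idx, kw in enumerate(sorted(mode_data))}

def working_mode(target_keyword, sequence):
    """Determine the working mode from the correspondence between the
    target-data keyword and the mode data sequence."""
    if target_keyword in sequence:
        return sequence[target_keyword]
    raise KeyError(f"no mode entry for {target_keyword!r}")

seq = number_mode_data(WEATHER_CODES)
```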
The beneficial effects of the above technical scheme are:
by acquiring the mode data keywords in the mode data set, the mode data in the mode data set can be numbered, and according to the numbered mode data sequence, the corresponding relation between the mode data sequence and the keywords in the target data can be established, so that the working mode of the cosmetic mirror can be determined according to the corresponding relation, and the working accuracy of the cosmetic mirror is improved.
Example 5:
on the basis of embodiment 4, the invention provides an intelligent auxiliary method based on a cosmetic mirror,
based on the pattern data sequence, acquiring a corresponding relation between the target data keyword and the pattern data sequence, including:
acquiring target field information of the target data keywords and mode field information of the keywords of the mode data corresponding to the target data, wherein the target field information and the mode field information have a mapping relation;
and matching the target data and the pattern data based on the mapping relation and a preset matching rule, and if the matching is successful, binding the target data and the pattern data to acquire a corresponding relation.
In this embodiment, the preset matching rule may be a regular expression-based matching rule.
In this embodiment, binding the target data and the mode data establishes a connection between the selection instruction and the working mode of the cosmetic mirror; when the data change, the bound data elements automatically reflect the change.
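A minimal sketch of regex-based field matching and binding; the field patterns and mode names are invented for illustration and are not from the patent:

```python
import re

# Hypothetical mode field patterns; the patent only states that target
# fields and mode fields are matched by a preset (regex-based) rule.
MODE_FIELDS = {"night": r"^(night|evening)\b", "day": r"^(day|morning)\b"}

def bind(target_field, mode_fields):
    """Match the target field against each mode field pattern; on success,
    bind target and mode so the correspondence can be read off later."""
    for mode, pattern in mode_fields.items():
        if re.match(pattern, target_field):
            return {"target": target_field, "mode": mode}  # bound pair
    return None  # no match: no correspondence established
```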
The beneficial effects of the above technical scheme are:
the target data and the pattern data are matched through the preset matching rule, so that the target data and the pattern data are bound, the corresponding relation is accurately obtained, and the data matching accuracy is improved.
Example 6:
on the basis of embodiment 1, the invention provides an intelligent auxiliary method based on a cosmetic mirror, which is a working process for extracting feature information of a face image, and comprises the following steps:
acquiring a facial image of a user, and constructing a training model;
wherein, the training model comprises a deep convolutional neural network;
inputting the facial image of the user into the training model, detecting the obtained facial image of the user, and judging whether the facial image of the user is a complete facial image;
if the facial image of the user is an incomplete image, the facial image of the user is obtained again;
otherwise, segmenting the facial image of the user according to a preset elliptical region to obtain N sub-images of the facial image of the user;
wherein N represents the number of the sub-images;
extracting sub-facial feature information in N sub-images of the facial image of the user to obtain a sub-facial feature information set of the facial image of the user;
acquiring a reference sub-facial feature information set of a preset target facial image, and extracting reference sub-facial feature information corresponding to the sub-facial feature information set of the facial image of the user from the reference sub-facial feature information set of the preset target facial image to serve as a reference sub-facial feature information set;
verifying the reference sub-facial feature information in the reference sub-facial feature information set and the sub-facial feature information in the sub-facial feature information set one by one;
if the reference sub-facial feature information in the reference sub-facial feature information set corresponds to the corresponding sub-facial feature information in the sub-facial feature information set one by one, performing fusion processing on the sub-facial feature information in the sub-facial feature information set to obtain the facial feature information of the facial image of the user;
otherwise, the facial image of the user is segmented again, and the sub-facial feature information in the segmented sub-images is extracted until the reference sub-facial feature information in the reference sub-facial feature information set corresponds to the corresponding sub-facial feature information in the sub-facial feature information set one by one.
In this embodiment, the training model may be constructed from parameters appropriate for the facial image and a convolutional neural network.
In this embodiment, the preset elliptical area is for fitting to the face image of the user, and the elliptical area is determined according to the size of the face of the user.
In this embodiment, the sub-facial feature information may be specific to each of the five sense organs, such as feature information of the eyes, feature information of the nose, feature information of the mouth, feature information of the face, and the like.
In this embodiment, the preset target face image is set according to the user's facial features and is changeable. By acquiring different face data, and accurately calculating and storing the best makeup data for each, the preset target face image for a user can be retrieved from big data when the user inputs a face image.
In this embodiment, the fusion processing may fuse the sub-facial feature information by a resampling method used for hyperspectral images and single-band images of high spatial resolution.
The beneficial effects of the above technical scheme are:
the facial image of the user is trained and detected so that a complete facial image is acquired accurately; the preset elliptical region is used to fit the user's facial contour, which makes the facial image easier to segment; the sub-facial feature sets are then acquired and verified one by one against the reference sub-facial feature sets, and the one-to-one corresponding sub-facial feature sets are fused, so that the user's facial image information can be acquired accurately.
Example 7:
on the basis of embodiment 1, the invention provides an intelligent makeup assisting method based on a cosmetic mirror, which provides wearing and makeup images for the user based on the working mode of the cosmetic mirror and the feature information of the user's facial image, and which further comprises the following steps:
acquiring a facial image of the user and performing filtering processing on it, matching the Gaussian-filtered facial image against a makeup image and calculating the matching degree, and verifying the calculated matching degree to determine the accuracy of the working mode of the cosmetic mirror; the specific working process comprises the following steps:
performing Gaussian filtering processing on the face image according to the following filter function formula to obtain a standard face image:

[Filter function formula — published as an image (Figure BDA0002901120740000131) in the original document and not reproduced here]

wherein x represents a pixel value of the face image; σ₁ represents the high-frequency gain of the face image during filtering; σ₂ represents the low-frequency gain of the face image during Gaussian filtering; k represents the pixel reflection component of the face image; f represents the filtering frequency during Gaussian filtering; μ represents the filtering coefficient, with a value range of (1.2×10⁻², 0.8×10⁻²); t represents the filtering time required by the Gaussian filtering process; m represents the Gaussian variance;
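The filter function itself is published only as a figure, so a generic Gaussian smoothing pass can serve as a rough stand-in; the kernel size and σ below are arbitrary illustrative choices, not the patent's μ, σ₁, σ₂ parameters:

```python
# Generic Gaussian smoothing sketch in plain NumPy — a stand-in for the
# patent's filter function, which is not reproduced in the text.
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """Normalised 2-D Gaussian kernel built as an outer product."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def standardize_face(image, size=5, sigma=1.5):
    """Convolve with the Gaussian kernel ('same' output size, edge-padded)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = (padded[y:y + size, x:x + size] * k).sum()
    return out
```

Because the kernel is normalised and the borders are edge-padded, a constant image passes through unchanged, which is a convenient sanity check for the filter.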
graying the standard face image and the makeup image, matching the standard face image and the makeup image based on graying processing, and calculating the matching degree of the standard face image and the makeup image;
[Matching-degree formula — published as an image (Figure BDA0002901120740000132) in the original document and not reproduced here]

wherein d represents the matching degree between the standard face image and the makeup image; δ represents a matching factor, with a value range of (0.2×10⁻³, 0.6×10⁻²); i indexes the pixel points of the standard face image and of the makeup image, the two images having equal numbers of pixel points; xᵢ represents the pixel value of the i-th pixel point in the standard face image; yᵢ represents the pixel value of the i-th pixel point in the makeup image; h₁ represents the gradient value of the standard face image; h₂ represents the gradient value of the makeup image; g₁ represents the gray value of the standard face image; g₂ represents the gray value of the makeup image; ζ represents a further matching factor, with a value range of (1.2×10⁻⁶, 0.6×10⁻⁶);
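Since the exact matching-degree formula exists only as a figure, the sketch below is an illustrative stand-in that, like the description, combines per-pixel differences, gradient values, and gray-level statistics into a single score in [0, 1]; it is not the patented formula:

```python
# Illustrative matching-degree stand-in: combines pixel, gradient, and
# gray-level differences of two equally sized 8-bit images into one score.
import numpy as np

def matching_degree(standard, makeup):
    """Return a similarity score in [0, 1]; 1.0 means identical images."""
    s = standard.astype(float)
    m = makeup.astype(float)
    pixel_term = np.abs(s - m).mean() / 255.0
    # np.gradient on a 2-D array returns [d/dy, d/dx]; combine into magnitude.
    gs = np.hypot(*np.gradient(s))
    gm = np.hypot(*np.gradient(m))
    grad_term = np.abs(gs - gm).mean() / 255.0
    gray_term = abs(s.mean() - m.mean()) / 255.0
    score = 1.0 - (pixel_term + grad_term + gray_term) / 3.0
    return float(np.clip(score, 0.0, 1.0))
```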
Verifying the matching degree of the standard face image and the makeup image, and determining the accuracy of the working mode of the makeup mirror according to a preset verification standard;
if the matching degree is 10% to 40%, it does not meet the verification rule, the accuracy of the working mode of the cosmetic mirror is low, and a makeup image is provided to the user again;
if the matching degree is 40% to 70%, it meets the verification rule, the accuracy of the working mode of the cosmetic mirror is medium, and the user either makes up according to the makeup image or is provided with a makeup image again;
and if the matching degree is 70% to 99%, it meets the verification rule, the accuracy of the working mode of the cosmetic mirror is high, and the makeup image is used as the final makeup image.
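The three verification bands above can be sketched as a small dispatch function; the accuracy labels and actions paraphrase the text, and the function itself is illustrative:

```python
# Map a matching degree in [0, 1] to the accuracy of the mirror's working
# mode and the action to take, per the 10/40/70% bands described above.
def verify_matching(degree):
    """Return (accuracy, action) for a matching degree in [0, 1]."""
    pct = degree * 100
    if 10 <= pct < 40:
        return "low", "provide a new makeup image"
    if 40 <= pct < 70:
        return "medium", "use the image or request a new one"
    if 70 <= pct <= 99:
        return "high", "use as final makeup image"
    # The patent leaves degrees outside 10-99% unspecified.
    return "out of range", "re-acquire the face image"
```

Note the source bands share their endpoints (40% and 70% each appear in two bands); the sketch resolves this by treating each band as half-open, which is one reasonable reading, not the patent's stated rule.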
In this embodiment, the standard face image may be the face image that, after Gaussian filtering, no longer contains interference factors such as noise; such an image is referred to as a standard face image.
In this embodiment, the pixel reflection component may be the value of light from the light source that is reflected by the user's facial surface directly into the eye.
In this embodiment, the Gaussian variance may be a statistic computed between pixels of the user's face image; when its value is positive the pixels exhibit a positive correlation, and when negative, a negative correlation.
In this embodiment, the check rule may be to judge whether the matching degree reaches 40%; if it does, the result is qualified.
The beneficial effects of the above technical scheme are:
by acquiring the user's facial image and filtering it, the matching degree between the standard facial image and the makeup image can be calculated accurately; judging that matching degree against the preset matching rule yields the accuracy of the cosmetic mirror's working mode, so the most reasonable facial makeup recommendation can be provided to the user and the effectiveness of use improved.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. An intelligent assisting method based on a cosmetic mirror is characterized by comprising the following steps:
receiving a selection instruction of a user, carrying out instruction analysis on the selection instruction, and acquiring an analysis result;
determining a working mode of the cosmetic mirror based on the analysis result;
acquiring a face image of the user, and extracting feature information of the face image;
and based on the working mode of the cosmetic mirror and the feature information of the facial image of the user, assisting in recommending wearing and making up images for the user to select.
2. The intelligent cosmetic mirror-based assisting method according to claim 1, wherein after acquiring the facial image of the user and before extracting feature information of the facial image, the method further comprises:
determining the illumination intensity of the current mirror reflection of the makeup based on the facial image, and storing the illumination intensity into a database;
substituting the illumination intensity into a regression equation to obtain a comprehensive illumination intensity value;
comparing the comprehensive illumination intensity value with an illumination threshold value prestored in the database;
when the illumination intensity is smaller than the preset illumination threshold value, controlling the illumination intensity of the cosmetic mirror surface to increase until the illumination intensity of the cosmetic mirror surface meets the preset illumination threshold value;
and when the illumination intensity is greater than the preset illumination threshold value, controlling the illumination intensity of the makeup mirror surface to be reduced until the illumination intensity of the makeup mirror surface meets the preset illumination threshold value.
3. The intelligent cosmetic mirror-based auxiliary method according to claim 1, wherein the specific work process of performing instruction analysis on the selection instruction comprises:
locating feature data of the selection instruction, and classifying the feature data based on the type of the selection instruction;
establishing a data linked list based on the classification result of the characteristic data, and storing the selection instruction in a data linked list node;
acquiring node coordinates of the selection instruction in the data linked list nodes;
matching the byte suffixes corresponding to the node coordinates with a preset suffix dictionary chain table, and acquiring the maximum repeated suffix of the byte suffixes in the preset suffix dictionary chain table;
determining a prefix of the select instruction based on the suffix of the select instruction using the maximal repeat suffix as a suffix of the select instruction;
determining a key statement of the selection instruction based on a suffix and a prefix of the selection instruction;
and carrying out binary processing on the key statement of the selection instruction to obtain a final analysis result.
4. The intelligent auxiliary method based on the cosmetic mirror, wherein the specific working process of determining the working mode of the cosmetic mirror based on the analysis result comprises the following steps:
acquiring target data corresponding to the analysis result and determining a mode data set corresponding to a working mode of the cosmetic mirror;
obtaining mode data keywords of all data in the mode data set;
numbering the data of the pattern data set based on the data keywords and a specific numbering rule to obtain a pattern data sequence;
acquiring a target data keyword based on the target data;
acquiring a corresponding relation between the target data keyword and the pattern data sequence based on the pattern data sequence;
and determining the working mode of the cosmetic mirror according to the corresponding relation.
5. The intelligent cosmetic mirror-based assistant method according to claim 4, wherein acquiring the corresponding relation between the target data keyword and the pattern data sequence based on the pattern data sequence comprises:
acquiring target field information of the target data keywords and mode field information of the keywords of the mode data corresponding to the target data, wherein the target field information and the mode field information have a mapping relation;
and matching the target data and the pattern data based on the mapping relation and a preset matching rule, and if the matching is successful, binding the target data and the pattern data to acquire a corresponding relation.
6. The intelligent cosmetic mirror-based assistant method according to claim 1, wherein the work process of extracting the feature information of the facial image comprises:
acquiring a facial image of a user, and constructing a training model;
wherein, the training model comprises a deep convolutional neural network;
inputting the facial image of the user into the training model, detecting the obtained facial image of the user, and judging whether the facial image of the user is a complete facial image;
if the facial image of the user is an incomplete image, the facial image of the user is obtained again;
otherwise, segmenting the facial image of the user according to a preset elliptical region to obtain N sub-images of the facial image of the user;
wherein N represents the number of the sub-images;
extracting sub-facial feature information in N sub-images of the facial image of the user to obtain a sub-facial feature information set of the facial image of the user;
acquiring a reference sub-facial feature information set of a preset target facial image, and extracting reference sub-facial feature information corresponding to the sub-facial feature information set of the facial image of the user from the reference sub-facial feature information set of the preset target facial image to serve as a reference sub-facial feature information set;
verifying the reference sub-facial feature information in the reference sub-facial feature information set and the sub-facial feature information in the sub-facial feature information set one by one;
if the reference sub-facial feature information in the reference sub-facial feature information set corresponds to the corresponding sub-facial feature information in the sub-facial feature information set one by one, performing fusion processing on the sub-facial feature information in the sub-facial feature information set to obtain the facial feature information of the facial image of the user;
otherwise, the facial image of the user is segmented again, and the sub-facial feature information in the segmented sub-images is extracted until the reference sub-facial feature information in the reference sub-facial feature information set corresponds to the corresponding sub-facial feature information in the sub-facial feature information set one by one.
7. The intelligent makeup assisting method based on a cosmetic mirror according to claim 1, characterized in that:
the working modes of the cosmetic mirror comprise: a night mode and a day mode;
wherein the first working parameters, corresponding to the night mode, comprise: displaying the weather and temperature of the next day, and reference clothing and reference makeup images pushed by the cosmetic mirror according to that weather and temperature;
and the second working parameters, corresponding to the day mode, comprise: displaying the weather and temperature of the current day, and clothing and makeup images pushed by the cosmetic mirror according to that weather and temperature.
8. The intelligent assisting method based on a cosmetic mirror as claimed in claim 1, wherein after providing the wearing and makeup image for the user based on the working mode of the cosmetic mirror and the feature information of the facial image of the user, the method further comprises:
acquiring a facial image of the user and performing filtering processing on it, matching the filtered facial image against a makeup image and calculating the matching degree, and checking the calculated matching degree to determine the accuracy of the working mode of the cosmetic mirror; the specific working process comprises the following steps:
performing Gaussian filtering processing on the face image according to the following filter function formula to obtain a standard face image:

[Filter function formula — published as an image (Figure FDA0002901120730000041) in the original document and not reproduced here]

wherein x represents a pixel value of the face image; σ₁ represents the high-frequency gain of the face image during filtering; σ₂ represents the low-frequency gain of the face image during Gaussian filtering; k represents the pixel reflection component of the face image; f represents the filtering frequency during Gaussian filtering; μ represents the filtering coefficient, with a value range of (1.2×10⁻², 0.8×10⁻²); t represents the filtering time required by the Gaussian filtering process; m represents the Gaussian variance;
graying the standard face image and the makeup image, matching the standard face image and the makeup image based on graying processing, and calculating the matching degree of the standard face image and the makeup image;
[Matching-degree formula — published as an image (Figure FDA0002901120730000042) in the original document and not reproduced here]

wherein d represents the matching degree between the standard face image and the makeup image; δ represents a matching factor, with a value range of (0.2×10⁻³, 0.6×10⁻²); i indexes the pixel points of the standard face image and of the makeup image, the two images having equal numbers of pixel points; xᵢ represents the pixel value of the i-th pixel point in the standard face image; yᵢ represents the pixel value of the i-th pixel point in the makeup image; h₁ represents the gradient value of the standard face image; h₂ represents the gradient value of the makeup image; g₁ represents the gray value of the standard face image; g₂ represents the gray value of the makeup image; ζ represents a further matching factor, with a value range of (1.2×10⁻⁶, 0.6×10⁻⁶);
Verifying the matching degree of the standard face image and the makeup image, and determining the accuracy of the working mode of the makeup mirror according to a preset verification standard;
if the matching degree is 10% to 40%, it does not meet the verification rule, the accuracy of the working mode of the cosmetic mirror is low, and a makeup image is provided to the user again;
if the matching degree is 40% to 70%, it meets the verification rule, the accuracy of the working mode of the cosmetic mirror is medium, and the user either makes up according to the makeup image or is provided with a makeup image again;
and if the matching degree is 70% to 99%, it meets the verification rule, the accuracy of the working mode of the cosmetic mirror is high, and the makeup image is used as the final makeup image.
CN202110056906.5A 2021-01-15 Intelligent auxiliary method based on cosmetic mirror Active CN112800884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110056906.5A CN112800884B (en) 2021-01-15 Intelligent auxiliary method based on cosmetic mirror


Publications (2)

Publication Number Publication Date
CN112800884A true CN112800884A (en) 2021-05-14
CN112800884B CN112800884B (en) 2024-04-30


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114340079A (en) * 2021-12-29 2022-04-12 苏州鑫凯威科创发展有限公司 Cosmetic mirror with lamp and self-adaptive light projection method thereof
CN117596741A (en) * 2023-12-08 2024-02-23 东莞莱姆森科技建材有限公司 Intelligent mirror control method and system capable of automatically adjusting light rays

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106108523A (en) * 2016-08-22 2016-11-16 陈云 A kind of portable intelligent cosmetic mirror, cosmetic aid system and method
CN108154121A (en) * 2017-12-25 2018-06-12 深圳市美丽控电子商务有限公司 Cosmetic auxiliary method, smart mirror and storage medium based on smart mirror
CN109671142A (en) * 2018-11-23 2019-04-23 南京图玩智能科技有限公司 A kind of intelligence makeups method and intelligent makeups mirror
CN109784281A (en) * 2019-01-18 2019-05-21 深圳壹账通智能科技有限公司 Products Show method, apparatus and computer equipment based on face characteristic
CN111353097A (en) * 2020-02-19 2020-06-30 珠海格力电器股份有限公司 Intelligent cosmetic mirror, control method and system thereof, electronic device and storage medium



Similar Documents

Publication Publication Date Title
Lu et al. Matching 2.5 D face scans to 3D models
CN110363183B (en) Service robot visual image privacy protection method based on generating type countermeasure network
Li et al. Illumination invariant face recognition using near-infrared images
CN106980852B (en) Based on Corner Detection and the medicine identifying system matched and its recognition methods
US20110141258A1 (en) Emotion recognition method and system thereof
US20060280343A1 (en) Bilinear illumination model for robust face recognition
Berretti et al. 3d mesh decomposition using reeb graphs
CN111401145B (en) Visible light iris recognition method based on deep learning and DS evidence theory
CN110263768A (en) A kind of face identification method based on depth residual error network
Martinikorena et al. Fast and robust ellipse detection algorithm for head-mounted eye tracking systems
WO2020114135A1 (en) Feature recognition method and apparatus
CN110929570B (en) Iris rapid positioning device and positioning method thereof
CN113947807B (en) Method and system for identifying fundus image abnormity based on unsupervised
CN108573219A (en) A kind of eyelid key point accurate positioning method based on depth convolutional neural networks
CN114445879A (en) High-precision face recognition method and face recognition equipment
CN112800884A (en) Intelligent auxiliary method based on cosmetic mirror
CN112800884B (en) Intelligent auxiliary method based on cosmetic mirror
CN113283466A (en) Instrument reading identification method and device and readable storage medium
CN111738062A (en) Automatic re-identification method and system based on embedded platform
CN206363347U (en) Based on Corner Detection and the medicine identifying system that matches
CN112949385B (en) Water surface target detection and identification method based on optical vision
CN115797987A (en) Finger vein identification method based on joint loss and convolutional neural network
CN114550247A (en) Facial expression recognition method and system with expression intensity change and storage medium
CN111652014A (en) Eye spirit identification method
Divya et al. Review on the proportional study of segmentation techniques for iris acknowledgment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant