CN111553219A - Data processing method and intelligent equipment - Google Patents

Data processing method and intelligent equipment

Info

Publication number
CN111553219A
Authority
CN
China
Prior art keywords
lip
straight line
type
information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010315152.6A
Other languages
Chinese (zh)
Inventor
李广琴
黄利
朱琳清
孙锦
刘晓潇
杨斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Co Ltd filed Critical Hisense Co Ltd
Priority to CN202010315152.6A priority Critical patent/CN111553219A/en
Publication of CN111553219A publication Critical patent/CN111553219A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The invention discloses a data processing method and an intelligent device. By analyzing a reference image, the position of the lips in a face can be accurately located and the lip type determined; auxiliary information and guidance are then provided according to the user's lip type, so that the user can apply makeup quickly and accurately. In addition, because the display has a mirror function, the user can apply lip makeup according to the auxiliary information shown on the display while looking into the mirror, so that the lip makeup better matches the user's features, meets the user's needs, and improves the user experience.

Description

Data processing method and intelligent equipment
Technical Field
The present invention relates to the field of intelligent device technologies, and in particular, to a data processing method and an intelligent device.
Background
In daily life, when a user applies makeup in front of a mirror, a user with limited makeup skill may be unable to apply makeup, especially lip makeup, that suits his or her own facial features, so the user's needs cannot be met.
Therefore, how to help users apply makeup that suits their own features is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
Embodiments of the invention provide a data processing method and an intelligent device, which are used to help a user apply makeup that suits the user's own features.
In a first aspect, an embodiment of the present invention provides an intelligent device, including:
a display configured to display a screen, the display having a mirror function;
an image collector configured to: collecting face information of a user to obtain a reference image and transmitting the reference image to a processor;
the processor configured to:
determining key point information of the face according to the reference image;
determining the type of the lip according to the determined key point information;
determining, according to the determined lip type, auxiliary information for assisting with the lip makeup, and displaying the auxiliary information on the display.
In some embodiments, the processor is configured to:
adjusting the display position of the auxiliary information on the display according to the position of the lips in the reference image acquired by the image collector, so that the position of the auxiliary information displayed on the display corresponds to the area where the lips are located.
In some embodiments, the auxiliary information comprises: auxiliary lines and/or text information.
In certain embodiments, further comprising:
a player configured to play voice information;
the processor configured to:
determining, according to the determined lip type, voice prompt information for assisting with the lip makeup, and playing the voice prompt information through the player.
In some embodiments, the processor is configured to:
determining parameters of the lip according to the key point information;
determining the type of the lip according to the parameters of the lip.
In some embodiments, the types of lips include:
determining a first type according to the thickness of the upper lip and the thickness of the lower lip;
determining a second type according to the position relation between a boundary point and a mouth corner, wherein the boundary point is a point between the upper lip and the lower lip;
a third type is determined according to the sum of the thickness of the upper lip and the thickness of the lower lip and the length of the lip;
a fourth type determined from the lip peaks and the lip valleys.
In some embodiments, the keypoint information comprises: two lip peak points, a plurality of boundary points, and a specific point of the lower lip farthest from the upper lip;
the straight line through the two lip peak points is a first straight line; the straight line parallel to the first straight line and passing through the specific point is a second straight line; and the straight line determined from the first straight line and the boundary points according to a preset rule is a third straight line;
the thickness of the upper lip is: the distance between the first straight line and the third straight line;
the thickness of the lower lip is: the distance between the second straight line and the third straight line.
In some embodiments, the preset rule comprises:
when the first straight line is translated along a first direction, the first straight line at the position where the sum of the distances from all the boundary points to it is minimum is the third straight line;
the first direction is the direction perpendicular to the first straight line and pointing toward the second straight line.
In some embodiments, the keypoint information comprises: two lip peak points, two lip valley points in the upper lip, and two lip corner points;
the two lip peak points are respectively located on two sides of the center line of the lip, the two lip valley points are respectively located on two sides of the center line of the lip, and the center line of the lip is perpendicular to the straight line through the two mouth corner points;
straight lines passing through the lip peak point and the lip valley point on the same side of the center line of the lip part are a fourth straight line and a fifth straight line respectively, and the fourth straight line is intersected with the fifth straight line;
the fourth type is determined according to an included angle between the fourth straight line and the fifth straight line.
In a second aspect, an embodiment of the present invention provides a data processing method, including:
determining key point information of a face in a reference image according to the acquired reference image comprising the face information of the user;
determining the type of the lip according to the determined key point information;
and determining auxiliary information for performing auxiliary operation on the lip makeup according to the determined type of the lip.
The invention has the following beneficial effects:
According to the data processing method and the intelligent device provided by the embodiments of the invention, by analyzing the reference image, the position of the lips in the face can be accurately located and the lip type determined; auxiliary information and guidance are then provided according to the user's lip type, so that the user can apply makeup quickly and accurately. In addition, because the display has a mirror function, the user can apply lip makeup according to the auxiliary information shown on the display while looking into the mirror, so that the lip makeup better matches the user's features, meets the user's needs, and improves the user experience.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent device provided in an embodiment of the present invention;
fig. 2 is a schematic structural diagram of another intelligent device provided in the embodiment of the present invention;
FIG. 3 is an image including keypoint information provided in an embodiment of the invention;
FIG. 4 is a schematic diagram of the relationship between the auxiliary information and the lips provided in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another smart device provided in the embodiment of the present invention;
FIG. 6 is a schematic view of a standard lip shape;
FIG. 7 is a schematic view of a thin upper lip type provided in an embodiment of the present invention;
FIG. 8 is a schematic view of a thin lower lip type provided in an embodiment of the present invention;
FIG. 9 is a schematic view of a mouth-corner droop type provided in an embodiment of the present invention;
FIG. 10 is a schematic view of a thick lip type provided in an embodiment of the present invention;
FIG. 11 is a schematic view of a thin lip type according to an embodiment of the present invention;
FIG. 12 is a schematic view of an elliptical lip shape provided in an embodiment of the present invention;
FIG. 13 is a schematic diagram of keypoint information for determining upper/lower lip thinness provided in an embodiment of the present invention;
fig. 14 is a schematic diagram of the key point information for determining the thin and thick lip shapes provided in the embodiment of the present invention;
FIG. 15 is a diagram illustrating keypoint information for determining elliptical lip shapes, provided in an embodiment of the present invention;
fig. 16 is a flowchart of a data processing method provided in an embodiment of the present invention.
Reference numerals: 10 - display; 20 - image collector; 30 - processor; 40 - player.
Detailed Description
The following describes in detail a specific implementation of a data processing method and an intelligent device according to an embodiment of the present invention with reference to the accompanying drawings. It should be noted that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an intelligent device, as shown in fig. 1, which may include:
a display 10 configured to display a screen, and the display 10 having a mirror function;
an image collector 20 configured to: collect face information of a user to obtain a reference image and transmit the reference image to the processor 30;
a processor 30 configured to:
determining key point information of the face according to the reference image;
determining the type of the lip according to the determined key point information;
according to the determined type of the lips, auxiliary information for assisting the lip makeup is determined and displayed on the display 10.
In this way, by analyzing the reference image, the position of the lips in the face can be accurately located and the lip type determined; auxiliary information and guidance are then provided according to the user's lip type, so that the user can apply makeup quickly and accurately. In addition, because the display has a mirror function, the user can apply lip makeup according to the auxiliary information shown on the display while looking into the mirror, so that the lip makeup better matches the user's features, meets the user's needs, and improves the user experience.
It should be noted that, in the embodiments of the invention, the display needs to have a mirror function, so that the user can use the display as a mirror and finish the lip makeup with its help; at the same time, auxiliary information can be shown on the display to provide auxiliary guidance for the user's lip makeup.
Furthermore, the intelligent device in the embodiment of the present invention may be a mirror with a display function (as shown in fig. 2, and a processor is not shown in the figure), or a display device with a mirror function, and the display device may be a device with a display function, such as a mobile terminal or a television, and is not limited herein.
In addition, the image collector may be an image collecting structure such as a camera, as long as image collection can be achieved, and the specific implementation structure is not specifically limited herein.
Optionally, in the embodiment of the present invention, when determining the key point information of the face according to the reference image, the following manner may be adopted:
the reference image is analyzed and processed using PFLD (Practical Facial Landmark Detector), a face keypoint detection algorithm.
This algorithm maintains high accuracy under complex unconstrained conditions of pose, expression, illumination, and occlusion, and can effectively handle geometric constraints and data imbalance; moreover, it enlarges the receptive field and better captures the global structure of the face, so the keypoints in the image can be accurately located. An image with keypoints is shown in fig. 3.
Of course, in practice, the algorithm for determining the keypoint information of the face from the reference image is not limited to the above PFLD algorithm; other algorithms capable of determining face keypoint information may also be used, and no limitation is imposed here.
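Once a detector returns the face landmarks, the lip measurements below only need a handful of them. The following sketch shows how the lip keypoints used later could be pulled out of a landmark list; the index constants are illustrative assumptions, not the actual PFLD output order.

```python
# Hypothetical landmark layout: the detector returns a list of (x, y) points.
# These index positions are ASSUMED for illustration only.
LIP_PEAK_LEFT, LIP_PEAK_RIGHT = 52, 58  # the two lip peak points
LIP_LOWEST = 66                         # lowest point of the lower lip (point C)
BOUNDARY = range(60, 66)                # boundary points between upper and lower lip

def extract_lip_keypoints(landmarks):
    """Collect the lip keypoints that the later measurements rely on."""
    return {
        "peaks": (landmarks[LIP_PEAK_LEFT], landmarks[LIP_PEAK_RIGHT]),
        "lowest": landmarks[LIP_LOWEST],
        "boundary": [landmarks[i] for i in BOUNDARY],
    }
```

The rest of the processing (thickness, droop, proportion, angle) can then work purely on these points, independent of which detector produced them.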
Optionally, in this embodiment of the present invention, the processor may be further configured to:
adjusting the display position of the auxiliary information on the display according to the position of the lips in the reference image acquired by the image collector, so that the position of the auxiliary information displayed on the display corresponds to the area where the lips are located.
In practice, a user's body may move while applying makeup, for example left and right or up and down, so the position of the lips on the display may change. To still provide accurate and effective auxiliary information when the lip position changes, the display position of the auxiliary information must always correspond to the area where the lips are located, as shown in fig. 4.
Therefore, the face information of the user can be collected in real time (or at a certain collection period) by the image collector; after the reference image is analyzed, the position of the lips in the reference image is obtained, and the position at which the auxiliary information is to be displayed is adjusted accordingly, so that the display position of the auxiliary information always corresponds to the area where the lips are located, providing accurate and effective assistance and guidance for the user.
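A minimal sketch of this tracking step, assuming a simple proportional mapping between camera-image coordinates and display coordinates (the function name and coordinate convention are illustrative, not from the patent):

```python
def overlay_rect(lip_points, camera_size, display_size):
    """Map the lip bounding box from camera-image coordinates to display
    coordinates so the auxiliary overlay always covers the lip area.
    lip_points: (x, y) lip keypoints; camera_size/display_size: (width, height).
    Returns (left, top, width, height) in display coordinates."""
    xs = [p[0] for p in lip_points]
    ys = [p[1] for p in lip_points]
    sx = display_size[0] / camera_size[0]  # horizontal scale factor
    sy = display_size[1] / camera_size[1]  # vertical scale factor
    left, top = min(xs) * sx, min(ys) * sy
    width, height = (max(xs) - min(xs)) * sx, (max(ys) - min(ys)) * sy
    return left, top, width, height
```

Recomputing this rectangle for every captured frame keeps the auxiliary lines anchored to the lips as the user moves.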
Optionally, in this embodiment of the present invention, the auxiliary information may include: auxiliary lines and/or text messages.
For example, (a) to (d) in fig. 4 show auxiliary lines (dotted lines in the figures) and text, and (e) and (f) in fig. 4 show auxiliary lines.
Thus, by providing auxiliary lines and text guidance, the defects of the lips can be effectively compensated for and a better lip makeup can be presented, meeting the needs of different users over a wide range of applications.
Optionally, in this embodiment of the present invention, as shown in fig. 5, the intelligent device may further include:
a player 40 configured to play voice information;
at this time, the processor 30 may be further configured to:
and determining voice prompt information for assisting lip makeup according to the determined type of the lips, and playing the voice prompt information through the player 40.
That is, in addition to displaying auxiliary lines and corresponding text on the display, assistance can also be provided by voice, so that comprehensive, multi-channel assistance and guidance can be offered to the user, improving the user experience.
Optionally, in this embodiment of the present invention, when determining the type of the lip according to the determined key point information, the processor is configured to:
determining parameters of the lip according to the key point information;
the type of the lips is determined from the parameters of the lips.
The determined keypoint information may include, but is not limited to, 240 face keypoint positions. Parameters of the lips, such as the thickness of the upper lip, the thickness of the lower lip, and the length of the lips, are then determined by analysis and calculation from these keypoint positions, and the lip type can be determined from these parameters.
In addition, the following conditions are generally satisfied for the standard lip type:
1. The ratio of the thickness h1 of the upper lip to the thickness h2 of the lower lip is about 1:1.3 to 1:1.5, as shown in FIG. 6;
2. The distance between the left mouth corner and the left lip peak is a first distance P1, the distance between the two lip peaks is a second distance P2, and the distance between the right lip peak and the right mouth corner is a third distance P3, where P1:P2:P3 is 1:0.8:1, as shown in FIG. 6;
3. The lip peaks are located directly below the centers of the nostrils;
4. The line connecting the left lip peak and the left lip valley is a first connecting line, and the line connecting the right lip peak and the right lip valley is a second connecting line, where the included angle between the first connecting line and the second connecting line is less than or equal to 90 degrees;
5. The ratio of the length to the height of the lip is 2.2 to 2.9.
The conditions for the standard lip shape are obtained by counting and analyzing a large number of lips.
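The two purely numeric conditions above (1 and 5) can be checked directly once the lip measurements are available. This is a sketch under the assumption that h1, h2, and the lip length have already been computed from the keypoints; the function name is illustrative.

```python
def meets_standard_ratios(h1, h2, lip_length):
    """Check conditions 1 and 5 of the standard lip type:
    h1:h2 between 1:1.3 and 1:1.5, and length/height between 2.2 and 2.9.
    h1, h2: upper/lower lip thickness; lip height is their sum."""
    height = h1 + h2
    thickness_ok = 1.0 / 1.5 <= h1 / h2 <= 1.0 / 1.3   # condition 1
    proportion_ok = 2.2 <= lip_length / height <= 2.9  # condition 5
    return thickness_ok and proportion_ok
```

Conditions 2 to 4 involve further keypoints (nostrils, lip valleys) and are handled by the type-specific tests described below.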
For a particular lip type, the following may be included, but not limited to:
1. Thin upper lip type.
Wherein the ratio of the thickness of the upper lip to the thickness of the lower lip may be less than 1:1.3, as shown in fig. 7.
2. Thin lower lip type.
Wherein the ratio of the thickness of the upper lip to the thickness of the lower lip may be greater than 1:1.5, as shown in fig. 8.
3. Mouth-corner droop type.
The line connecting the mouth corners on the two sides is a third line S3; there are a plurality of boundary points between the upper lip and the lower lip, and 80% or more of them are located on the side of the third line S3 close to the upper lip, as shown in fig. 9.
4. A thick lip type.
Wherein the ratio of the length of the lip portion to the height of the lip portion (i.e., the sum of the thickness of the upper lip and the thickness of the lower lip) is 1.4 to 2.0, as shown in fig. 10.
5. Thin lip type.
Wherein the ratio of the length of the lip portion to the height of the lip portion (i.e., the sum of the thickness of the upper lip and the thickness of the lower lip) is 3.0 to 4.0, as shown in fig. 11.
6. Elliptical lip type.
Wherein the included angle θ between the first line S1 and the second line S2 may be greater than 90 °, as shown in fig. 12.
Of course, in practical cases, the type of the lip is not limited to the 6 listed above, but may be other specific shapes, and these 6 are only exemplified here.
Based on this, optionally, in the present embodiment, the types of the lip may include:
determining a first type according to the thickness of the upper lip and the thickness of the lower lip;
determining a second type according to the position relation between a boundary point and a mouth corner, wherein the boundary point is a point between an upper lip and a lower lip;
a third type is determined according to the sum of the thickness of the upper lip and the thickness of the lower lip and the length of the lip;
a fourth type determined from the lip peaks and the lip valleys.
That is, the first type may correspond to the thin-upper-lip and thin-lower-lip types mentioned above; the second type may correspond to the mouth-corner droop type; the third type may correspond to the thick lip type and the thin lip type; and the fourth type may correspond to the elliptical lip type.
In this way, the lip type can be accurately analyzed from the determined lip parameters, so that effective and accurate assistance and guidance can subsequently be provided for the user's lip makeup according to the lip type, which improves both the efficiency and the effect of the lip makeup and meets the user's makeup needs.
The four types of determination processes mentioned above are explained below.
1. A first type.
Optionally, in this embodiment of the present invention, the keypoint information includes: two lip peak points, a plurality of boundary points, and a specific point of the lower lip farthest from the upper lip;
the straight line through the two lip peak points is a first straight line; the straight line parallel to the first straight line and passing through the specific point is a second straight line; and the straight line determined from the first straight line and the boundary points according to a preset rule is a third straight line;
the thickness of the upper lip is: the distance between the first straight line and the third straight line;
the thickness of the lower lip is: the distance between the second straight line and the third straight line.
In this way, the thickness of the upper lip and the thickness of the lower lip can be determined from the keypoint information, which facilitates the subsequent determination of the lip type and improves the efficiency of that determination.
And, optionally, in the embodiment of the present invention, the preset rule includes:
when the first straight line is translated along the first direction, the first straight line at the position where the sum of the distances from all the boundary points to it is minimum is the third straight line;
the first direction is a direction perpendicular to the first straight line and pointing to the second straight line.
In this way, the third straight line can be determined quickly and accurately according to the preset rule; the thickness of the upper lip and the thickness of the lower lip, and in turn the lip type, are then determined from the third straight line, which improves the efficiency of lip-type determination.
For example, referring to fig. 13, first, the two highest points of the lip shape, i.e., a lip peak point A and a lip peak point B, are found; the first straight line through the lip peak point A and the lip peak point B is denoted by L1, and its slope is k1.
A straight line parallel to the first straight line L1 and passing through the lowest point C of the lip (i.e., the specific point mentioned above) is the second straight line, denoted by L2.
From the keypoint information, it can be determined that the median line of the lips (i.e., the boundary between the upper lip and the lower lip) includes a total of 6 boundary points (labeled Z in the figure), which can be expressed as m1(x1, y1), m2(x2, y2), ..., m6(x6, y6).
Then, the first straight line L1 is translated downward (i.e., in the first direction); when L1 has been translated to a position M (not shown in the figure) at which the sum of the distances from the 6 boundary points to L1 is minimum, the line at position M is called the third straight line, denoted by L3, whose expression is y = k1·x + b.
The sum of the distances from the 6 boundary points to the third straight line L3 (i.e., Dmin) can be expressed by the following formula:
Dmin = Σ(i = 1..6) |k1·xi − yi + b| / √(k1² + 1)
Therefore, based on the first straight line L1, the second straight line L2, and the third straight line L3 determined as described above, the thickness of the upper lip is the distance between L1 and L3, denoted by h1; the thickness of the lower lip is the distance between L2 and L3, denoted by h2.
If the ratio of the thickness h1 of the upper lip to the thickness h2 of the lower lip is less than 1:1.3, the type of the lip is the thin upper lip;
if the ratio of the thickness h1 of the upper lip to the thickness h2 of the lower lip is greater than 1:1.5, the type of the lip is the thin lower lip.
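The construction above can be sketched as follows. Translating L1 means varying the intercept b of y = k1·x + b, and the sum of absolute point-to-line distances is minimized when b is the median of the residuals yi − k1·xi, which gives L3 directly. Since the patent's two thresholds (1:1.3 and 1:1.5) bracket the standard band of condition 1, this sketch classifies relative to that band; the function name is illustrative.

```python
import math
from statistics import median

def classify_upper_lower(peak_a, peak_b, lowest_c, boundary_pts):
    """First-type classification from the lip keypoints.
    peak_a, peak_b: lip peak points A, B; lowest_c: lowest lower-lip point C;
    boundary_pts: the boundary points between upper and lower lip."""
    k1 = (peak_b[1] - peak_a[1]) / (peak_b[0] - peak_a[0])  # slope of L1
    b1 = peak_a[1] - k1 * peak_a[0]          # intercept of L1 (through the peaks)
    b2 = lowest_c[1] - k1 * lowest_c[0]      # L2: parallel line through C
    # Minimizing the sum of absolute distances over translations of L1
    # amounts to taking the median residual as the intercept of L3.
    b3 = median(y - k1 * x for x, y in boundary_pts)
    norm = math.hypot(k1, 1.0)               # sqrt(k1**2 + 1)
    h1 = abs(b3 - b1) / norm                 # upper-lip thickness
    h2 = abs(b2 - b3) / norm                 # lower-lip thickness
    if h1 / h2 < 1.0 / 1.5:                  # below the standard band
        return "thin upper lip", h1, h2
    if h1 / h2 > 1.0 / 1.3:                  # above the standard band
        return "thin lower lip", h1, h2
    return "standard", h1, h2
```

In image coordinates (y growing downward) the absolute distances are unaffected, so the same code applies to pixel keypoints.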
2. And a second type.
The second type corresponds to the mouth-corner droop type, which is characterized in that the mouth corners on both sides droop, and most of the points on the median line of the lips (i.e., the boundary line between the upper lip and the lower lip; these are the boundary points mentioned above) lie above the line connecting the two mouth corners.
Thus, based on the above features, the following procedure can be used to determine the type of lip:
Referring to fig. 9, the mouth corner points E and F on both sides are determined, and the line through the points E and F is determined as the third line S3; taking the third line S3 as a boundary, the distribution of all the boundary points Z above and below the third line S3 is calculated;
if 80% or more of the boundary points are located above the third line S3, i.e., 80% or more of the boundary points are located on the side of the third line S3 close to the upper lip, the lip portion may be determined to be of the mouth-droop type.
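This test reduces to counting boundary points on the upper-lip side of the corner line. A minimal sketch, assuming image coordinates (y grows downward, so the upper-lip side is the smaller-y side); the function name is illustrative.

```python
def is_corner_droop(corner_e, corner_f, boundary_pts, threshold=0.8):
    """Second-type test: the lip is of the mouth-corner droop type when at
    least 80% of the boundary points lie on the upper-lip side of the
    line S3 through the two mouth corner points E and F."""
    k = (corner_f[1] - corner_e[1]) / (corner_f[0] - corner_e[0])  # slope of S3
    b = corner_e[1] - k * corner_e[0]                              # intercept of S3
    # In image coordinates, "toward the upper lip" means y below the line value.
    above = sum(1 for x, y in boundary_pts if y < k * x + b)
    return above / len(boundary_pts) >= threshold
```

With screen-style coordinates (y growing upward) the comparison would flip to `y > k * x + b`; the 80% threshold itself comes straight from the patent's rule.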
3. And a third type.
Note that the third type corresponds to the thick lip type and the thin lip type. To accurately determine whether the lip is of the thick or thin type, the first type needs to be excluded first; that is, when determining the third type, the following condition needs to be satisfied:
the ratio of the thickness of the upper lip to the thickness of the lower lip is 1:1.3 to 1:1.5.
On the basis of this condition, it is then determined whether the lip is of the thick or thin type.
For example, referring to fig. 14, the lip peak point A and the lip peak point B are determined, along with the first straight line L1 through them; L1 is translated downward to the lowest point C of the lower lip edge (i.e., the specific point mentioned above) to obtain the second straight line L2; that is, L2 is parallel to L1 and passes through the specific point C;
then:
the height of the lip can be understood as: the sum of the thickness of the upper lip and the thickness of the lower lip, i.e. the distance H1 between the first straight line L1 and the second straight line L2;
determining a mouth corner point E and a mouth corner point F on two sides, and determining that a straight line passing through the mouth corner point E and perpendicular to the first straight line L1 is a sixth straight line L6, and a straight line passing through the mouth corner point F and perpendicular to the first straight line L1 is a seventh straight line L7, then:
the length of the lip can be understood as: a distance W1 between the sixth straight line L6 and the seventh straight line L7.
Therefore, if the ratio of the length W1 of the lip to the height H1 of the lip is 3.0 to 4.0, this indicates that the type of the lip is a thin lip type; if the ratio of the length W1 of the lip to the height H1 of the lip is 1.4 to 2.0, the lip is of a thick lip type.
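The height H1 and length W1 constructed above follow directly from the keypoints: H1 is the distance between the parallel lines L1 and L2, and W1, the distance between the perpendiculars L6 and L7, equals the projection of the segment EF onto the direction of L1. A sketch (function name illustrative):

```python
import math

def length_height_classification(peak_a, peak_b, lowest_c, corner_e, corner_f):
    """Third-type classification: compute W1/H1 and apply the patent's bands.
    Assumes the first type (upper/lower thickness imbalance) was excluded."""
    k1 = (peak_b[1] - peak_a[1]) / (peak_b[0] - peak_a[0])  # slope of L1
    norm = math.hypot(k1, 1.0)
    # Height H1: distance between L1 (through the peaks) and L2 (through C).
    h = abs((lowest_c[1] - k1 * lowest_c[0]) - (peak_a[1] - k1 * peak_a[0])) / norm
    # Length W1: projection of EF onto the unit direction vector of L1.
    ux, uy = 1.0 / norm, k1 / norm
    w = abs((corner_f[0] - corner_e[0]) * ux + (corner_f[1] - corner_e[1]) * uy)
    ratio = w / h
    if 3.0 <= ratio <= 4.0:
        return "thin lip type"
    if 1.4 <= ratio <= 2.0:
        return "thick lip type"
    return "neither"
```

Ratios between 2.0 and 3.0 fall into neither band, consistent with the standard range of 2.2 to 2.9 given earlier.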
4. And a fourth type.
Optionally, in this embodiment of the present invention, the key point information includes: two lip peak points, two lip valley points in the upper lip, and two lip corner points;
the two lip peak points are respectively positioned at two sides of the central line of the lip part, the two lip valley points are respectively positioned at two sides of the central line of the lip part, and the central line of the lip part is vertical to the straight line where the two mouth angle points are positioned;
the straight lines passing through the lip peak point and the lip valley point on the same side of the center line of the lip part are respectively a fourth straight line and a fifth straight line, and the fourth straight line is intersected with the fifth straight line;
the fourth type is determined according to an angle between the fourth line and the fifth line.
Note that the fourth type corresponds to the elliptical lip type. To accurately determine whether the lip is of the elliptical type, the first type and the third type need to be excluded first; that is, when determining the fourth type, the following conditions need to be satisfied:
the ratio of the thickness of the upper lip to the thickness of the lower lip is 1:1.3-1: 1.5;
the ratio between the length W1 of the lip and the height H1 of the lip is 2.2 to 2.9.
On the basis that the above conditions are satisfied, it is then determined whether the lip is of the elliptical type.
For example, referring to fig. 15, on the basis that the above conditions are satisfied, the lip peak point A and the lip peak point B, and the lip valley point A1 and the lip valley point B1 are determined; the straight line where the lip peak point A and the lip valley point A1 are located is a fourth straight line, denoted by L4, with slope k4; the straight line where the lip peak point B and the lip valley point B1 are located is a fifth straight line, denoted by L5, with slope k5. At this time:
calculating an included angle θ between the fourth straight line L4 and the fifth straight line L5 according to the following formula:
θ = arccos( (1 + k4·k5) / ( √(1 + k4²) · √(1 + k5²) ) )
If the calculated θ is larger than 90 degrees, the lip is of an elliptical lip shape.
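A minimal sketch of this angle test, assuming θ is taken between the direction vectors (1, k4) and (1, k5) of the two lines so that θ can exceed 90 degrees; the function names are illustrative:

```python
import math

def angle_between_lines(k4, k5):
    """Angle theta (in degrees) between the fourth and fifth straight lines,
    computed between the direction vectors (1, k4) and (1, k5)."""
    cos_theta = (1 + k4 * k5) / (math.hypot(1, k4) * math.hypot(1, k5))
    return math.degrees(math.acos(cos_theta))

def is_elliptical(k4, k5):
    """Elliptical lip shape when the included angle exceeds 90 degrees."""
    return angle_between_lines(k4, k5) > 90.0

# Steep, opposite-sloped peak lines give a wide included angle:
print(round(angle_between_lines(2.0, -2.0), 2))  # -> 126.87
```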
Based on the same inventive concept, an embodiment of the present invention provides a data processing method, as shown in fig. 16, the method may include:
S1601, determining key point information of a face in a reference image according to the acquired reference image comprising the face information of the user;
When acquiring the reference image, the reference image may be acquired by, but not limited to, an image collector.
S1602, determining the type of the lip according to the determined key point information;
S1603, determining auxiliary information for assisting the lip makeup based on the determined type of the lip.
Specifically, if the method is applied to a smart device and the smart device has a display with a mirror function, the auxiliary information may be displayed on the display, so that the auxiliary information is provided to the user while the user applies lip makeup using the mirror function of the display.
Therefore, through analysis of the reference image, the position of the lips in the face can be accurately located and the type of the lips can be determined; auxiliary information and guidance are then provided to the user according to the type of the user's lips, so that the user can make up quickly and accurately, which meets the user's needs and improves the user experience.
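The three steps S1601 to S1603 above can be sketched as the following pipeline (the landmark detector and the guidance strings are hypothetical placeholders, not the patent's actual implementation):

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def detect_keypoints(reference_image) -> Dict[str, Point]:
    """S1601: placeholder for a face-landmark detector returning the lip key
    points named above (mouth corners, lip peaks, lip valleys, ...)."""
    raise NotImplementedError("plug in a real landmark detector here")

def classify_lip_type(keypoints: Dict[str, Point]) -> str:
    """S1602: map geometric parameters derived from the key points to one of
    the four lip types described in the embodiments."""
    raise NotImplementedError("combine the first- to fourth-type tests here")

def auxiliary_info_for(lip_type: str) -> List[str]:
    """S1603: look up guidance (auxiliary lines / text) for the lip type.
    The hint strings below are illustrative assumptions only."""
    hints = {
        "thin": ["draw the contour slightly outside the natural lip line"],
        "thick": ["draw the contour slightly inside the natural lip line"],
    }
    return hints.get(lip_type, ["apply lipstick along the natural contour"])

print(auxiliary_info_for("thin"))
```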
Optionally, in the embodiment of the present invention, determining the type of the lip according to the determined key point information specifically includes:
determining parameters of the lip according to the key point information;
determining the type of the lip according to the parameters of the lip.
Optionally, in an embodiment of the present invention, the types of the lip include:
determining a first type according to the thickness of the upper lip and the thickness of the lower lip;
determining a second type according to the position relation between a boundary point and a mouth corner, wherein the boundary point is a point between the upper lip and the lower lip;
a third type is determined according to the sum of the thickness of the upper lip and the thickness of the lower lip and the length of the lip;
a fourth type determined from the lip peaks and the lip valleys.
It should be noted that, for the implementation of the data processing method, reference may be made to the specific embodiments of the foregoing smart device, and repeated details are not described again.
Based on the same inventive concept, an embodiment of the present invention provides a readable storage medium storing instructions executable by a smart device, where the instructions are used for causing the smart device to execute the data processing method.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A smart device, comprising:
a display configured to display a screen, the display having a mirror function;
an image collector configured to: collecting face information of a user to obtain a reference image and transmitting the reference image to a processor;
the processor configured to:
determining key point information of the face according to the reference image;
determining the type of the lip according to the determined key point information;
and according to the determined type of the lip, determining auxiliary information for performing auxiliary operation on the lip makeup, and displaying the auxiliary information on the display.
2. The smart device of claim 1, wherein the processor is configured to:
and adjusting the display position of the auxiliary information on the display according to the position of the lips in the reference image acquired by the image collector, so that the position at which the auxiliary information is displayed on the display corresponds to the area of the lips.
3. The smart device of claim 1, wherein the auxiliary information comprises: auxiliary lines and/or textual information.
4. The smart device of claim 1, further comprising:
a player configured to play voice information;
the processor configured to:
and according to the determined type of the lip, determining voice prompt information for performing auxiliary operation on the lip makeup, and playing the voice prompt information through the player.
5. The smart device of claim 1, wherein the processor is configured to:
determining parameters of the lip according to the key point information;
determining the type of the lip according to the parameters of the lip.
6. The smart device of claim 5, wherein the types of lips comprise:
determining a first type according to the thickness of the upper lip and the thickness of the lower lip;
determining a second type according to the position relation between a boundary point and a mouth corner, wherein the boundary point is a point between the upper lip and the lower lip;
a third type is determined according to the sum of the thickness of the upper lip and the thickness of the lower lip and the length of the lip;
a fourth type determined from the lip peaks and the lip valleys.
7. The smart device of claim 6, wherein the key point information comprises: two lip peak points, a plurality of said boundary points, and a specific point of said lower lip farthest from said upper lip;
the straight line where the two lip peak points are located is a first straight line; the straight line parallel to the first straight line and passing through the specific point is a second straight line; and a straight line determined according to the first straight line and each boundary point and meeting a preset rule is a third straight line;
the thickness of the upper lip is as follows: a distance between the first line and the third line;
the thickness of the lower lip is as follows: a spacing between the second line and the third line.
8. The smart device of claim 7, wherein the preset rules comprise:
when the first straight line is translated along a first direction, the first straight line at the position where the sum of the distances from all the boundary points to the first straight line is minimum is the third straight line;
the first direction is a direction perpendicular to the first straight line and pointing to the second straight line.
9. The smart device of claim 6, wherein the keypoint information comprises: two lip peak points, two lip valley points in the upper lip, and two lip corner points;
the two lip peak points are respectively located on two sides of a center line of the lip, the two lip valley points are respectively located on two sides of the center line of the lip, and the center line of the lip is perpendicular to the straight line where the two mouth corner points are located;
straight lines passing through the lip peak point and the lip valley point on the same side of the center line of the lip part are a fourth straight line and a fifth straight line respectively, and the fourth straight line is intersected with the fifth straight line;
the fourth type is determined according to an included angle between the fourth straight line and the fifth straight line.
10. A data processing method, comprising:
determining key point information of a face in a reference image according to the acquired reference image comprising the face information of the user;
determining the type of the lip according to the determined key point information;
and determining auxiliary information for performing auxiliary operation on the lip makeup according to the determined type of the lip.
CN202010315152.6A 2020-04-21 2020-04-21 Data processing method and intelligent equipment Pending CN111553219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010315152.6A CN111553219A (en) 2020-04-21 2020-04-21 Data processing method and intelligent equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010315152.6A CN111553219A (en) 2020-04-21 2020-04-21 Data processing method and intelligent equipment

Publications (1)

Publication Number Publication Date
CN111553219A true CN111553219A (en) 2020-08-18

Family

ID=72000333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010315152.6A Pending CN111553219A (en) 2020-04-21 2020-04-21 Data processing method and intelligent equipment

Country Status (1)

Country Link
CN (1) CN111553219A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009064423A (en) * 2007-08-10 2009-03-26 Shiseido Co Ltd Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
US20090139536A1 (en) * 2004-10-22 2009-06-04 Shiseido Co., Ltd. Lip categorizing method, makeup method, categorizing map, and makeup tool
JP2014023127A (en) * 2012-07-23 2014-02-03 Sharp Corp Information display device, information display method, control program, and recording medium
CN105101836A (en) * 2013-02-28 2015-11-25 松下知识产权经营株式会社 Makeup assistance device, makeup assistance method, and makeup assistance program
CN108062400A (en) * 2017-12-25 2018-05-22 深圳市美丽控电子商务有限公司 Examination cosmetic method, smart mirror and storage medium based on smart mirror
US20180315337A1 (en) * 2017-04-27 2018-11-01 Cal-Comp Big Data, Inc. Lip gloss guide device and method thereof
CN108804975A (en) * 2017-04-27 2018-11-13 丽宝大数据股份有限公司 Lip gloss guidance device and method
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN110069716A (en) * 2019-04-29 2019-07-30 清华大学深圳研究生院 A kind of makeups recommended method, system and computer readable storage medium
CN110119968A (en) * 2018-02-06 2019-08-13 英属开曼群岛商玩美股份有限公司 System and method based on face's analysis recommended products


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG Baoxin; JIA Yunlong; KE Ke: "Modification and Correction of the Lips", no. 04 *
JIN Mingji et al.: "Latest Cosmetic Tattooing Techniques", Liaoning Science and Technology Press, pages 115-116 *
WEI Shuang: "A Mouth Shape Classification Algorithm Based on BP Neural Network", no. 08 *

Similar Documents

Publication Publication Date Title
CN101561710B (en) Man-machine interaction method based on estimation of human face posture
CN103365599B (en) Mobile terminal operation optimization method and device based on screen sliding track
US20140064602A1 (en) Method and apparatus for object positioning by using depth images
CN110705478A (en) Face tracking method, device, equipment and storage medium
JP6453488B2 (en) Statistical method and apparatus for passersby based on identification of human head top
WO2012101962A1 (en) State-of-posture estimation device and state-of-posture estimation method
US20090129631A1 (en) Method of Tracking the Position of the Head in Real Time in a Video Image Stream
WO2015149712A1 (en) Pointing interaction method, device and system
WO2013091370A1 (en) Human body part detection method based on parallel statistics learning of 3d depth image information
CN103093498A (en) Three-dimensional human face automatic standardization method
CN112101208A (en) Feature series fusion gesture recognition method and device for elderly people
CN105912126A (en) Method for adaptively adjusting gain, mapped to interface, of gesture movement
JP2008204200A (en) Face analysis system and program
Ren et al. Hand gesture recognition with multiscale weighted histogram of contour direction normalization for wearable applications
CN105975906A (en) PCA static gesture recognition method based on area characteristic
CN113723264A (en) Method and system for intelligently identifying playing errors for assisting piano teaching
CN105488491A (en) Human body sleep posture detection method based on pyramid matching histogram intersection kernel
WO2022267653A1 (en) Image processing method, electronic device, and computer readable storage medium
Xu et al. Robust hand gesture recognition based on RGB-D Data for natural human–computer interaction
CN111209811A (en) Method and system for detecting eyeball attention position in real time
CN114332927A (en) Classroom hand-raising behavior detection method, system, computer equipment and storage medium
CN111553219A (en) Data processing method and intelligent equipment
CN105955473A (en) Computer-based static gesture image recognition interactive system
WO2023246321A1 (en) Method and system for generating thermodynamic diagram of pedestrians
CN104156689A (en) Method and device for positioning feature information of target object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination