CN113781271A - Makeup teaching method and device, electronic equipment and storage medium - Google Patents

Makeup teaching method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN113781271A
Authority
CN
China
Prior art keywords
makeup
target user
teaching
information
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110874705.6A
Other languages
Chinese (zh)
Inventor
刘海敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd, Beijing Megvii Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN202110874705.6A priority Critical patent/CN113781271A/en
Publication of CN113781271A publication Critical patent/CN113781271A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • G06Q50/2057Career enhancement or continuing education service
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying

Abstract

The embodiments of the present application provide a makeup teaching method and apparatus, an electronic device, and a storage medium, wherein the method includes: in response to a makeup teaching request triggered by a target user, acquiring a facial makeup teaching material corresponding to the target user, wherein the facial makeup teaching material includes makeup teaching image information indicating makeup information, and the makeup information includes makeup position information; determining a target makeup position of the target user based on a facial image of the target user and the makeup position information; and instructing the target user to perform a corresponding makeup operation at the target makeup position. This scheme prompts the user where to apply makeup, enables learners to keep up with the teaching rhythm, and improves the success rate of makeup teaching.

Description

Makeup teaching method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a makeup teaching method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Makeup, also called cosmetic application, is the use of cosmetics and tools, following established steps and techniques, to render, draw, and groom the face, facial features, and other parts of the body, so as to enhance dimensionality, adjust shape and color, conceal flaws, and express vitality, thereby beautifying visual perception. Makeup can bring out a person's unique natural beauty; it can also improve a person's original shape and coloring and add aesthetic appeal and charm.
At present, there are many video-based makeup teaching approaches. However, in most existing approaches, a teacher records a video of applying makeup on themselves or on a head model, and users learn from that video. Such a teaching mode lacks interaction with the learner, so the teaching effect is poor.
Disclosure of Invention
The embodiments of the present application provide a makeup teaching method for guiding a user through makeup application and improving the makeup success rate.
The embodiment of the application provides a makeup teaching method, which comprises the following steps:
responding to a makeup teaching request triggered by a target user, and acquiring a face makeup teaching material corresponding to the target user; wherein the face makeup teaching material includes makeup teaching image information indicating makeup information including makeup position information;
determining a target makeup location of the target user based on the facial image of the target user and the makeup location information;
and instructing the target user to execute a corresponding makeup operation at the target makeup position.
In an embodiment, the obtaining, in response to a makeup teaching request triggered by a target user, a facial makeup teaching material corresponding to the target user includes:
responding to the makeup teaching request, and acquiring a face image of the target user;
determining makeup type information matched with the target user according to the face image;
and determining the face makeup teaching material corresponding to the target user according to the makeup type information.
In one embodiment, the makeup teaching request carries makeup type information;
correspondingly, the obtaining of the facial makeup teaching material corresponding to the target user in response to the makeup teaching request triggered by the target user includes:
and determining the face makeup teaching material corresponding to the target user according to the makeup type information.
In an embodiment, the determining, according to the makeup type information, a facial makeup teaching material corresponding to the target user includes:
searching a face makeup teaching material corresponding to the makeup type information from a pre-established face makeup teaching material library, and determining the searched face makeup teaching material as the face makeup teaching material corresponding to the target user;
alternatively,
searching a face makeup teaching material corresponding to the makeup type information from a pre-established face makeup teaching material library; determining currently existing basic makeup information of the target user based on the facial image of the target user; and based on the current basic makeup information of the target user, eliminating teaching materials corresponding to the basic makeup information in the facial makeup teaching materials to obtain the facial makeup teaching materials corresponding to the target user.
In one embodiment, the determining the target makeup location of the target user based on the facial image of the target user and the makeup location information includes:
extracting first position information of face key points of the face image and second position information of the face key points in the makeup teaching image information;
according to the first position information and the second position information, constructing a position mapping relation between the second position information and the first position information;
and determining the target makeup position of the target user according to the makeup position information indicated in the makeup teaching image information and the position mapping relation.
In one embodiment, the instructing the target user to perform the corresponding makeup operation at the target makeup location includes:
displaying a face image of the target user marked with the target makeup location to cause the target user to perform a makeup operation at the target makeup location.
In one embodiment, the instructing the target user to perform the corresponding makeup operation at the target makeup location includes:
and sending the target makeup position of the face image to a client of the target user, and triggering the client of the target user to display the face image marked with the target makeup position.
In an embodiment, after the instructing the target user to perform the corresponding makeup operation at the target makeup location, the method further includes:
acquiring a current face image of the target user;
determining color values and/or makeup position information of a current makeup based on the current face image;
and indicating the target user to carry out makeup correction according to the color value and/or the makeup position information of the current makeup.
In an embodiment, before the obtaining the facial makeup teaching material corresponding to the target user, the method further includes:
obtaining a makeup teaching video;
removing repeated image frames from the makeup teaching video to obtain at least one frame of makeup teaching image information;
determining makeup information in each frame of makeup teaching image information;
and generating a face makeup teaching material based on each frame of makeup teaching image information and corresponding makeup information.
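The frame-deduplication step above can be sketched as follows. The application does not specify how repeated frames are detected, so the mean-absolute-difference heuristic and its threshold below are illustrative assumptions only:

```python
import numpy as np

def dedupe_frames(frames, threshold=5.0):
    """Keep a frame only if its mean absolute pixel difference from the
    previously kept frame exceeds `threshold` (an assumed heuristic)."""
    kept = []
    for frame in frames:
        if not kept or np.abs(frame.astype(float) - kept[-1].astype(float)).mean() > threshold:
            kept.append(frame)
    return kept

# Two near-identical frames followed by a clearly different one.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy(); b[0, 0] = 2                  # negligible change -> treated as a repeat
c = np.full((4, 4), 200, dtype=np.uint8)   # clearly new teaching step
print(len(dedupe_frames([a, b, c])))       # -> 2
```

Each kept frame would then be annotated with its makeup information to form one entry of the facial makeup teaching material.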
The embodiments of the present application further provide a makeup teaching device, including:
the material acquisition module is used for responding to a makeup teaching request triggered by a target user and acquiring a facial makeup teaching material corresponding to the target user; wherein the face makeup teaching material includes makeup teaching image information indicating makeup information including makeup position information;
a position determination module for determining a target makeup position of the target user based on the facial image of the target user and the makeup position information;
and the makeup indicating module is used for indicating the target user to execute corresponding makeup operation at the target makeup position.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above-described makeup teaching method.
The embodiment of the application also provides a computer readable storage medium, wherein the storage medium stores a computer program, and the computer program can be executed by a processor to complete the makeup teaching method.
According to the technical solution provided by the embodiments of the present application, during makeup teaching a facial image of the target user is acquired; the target makeup position in the facial image is located based on the facial image of the target user and the makeup position information in each frame of makeup teaching image information; and the target user is instructed to perform the corresponding makeup operation at the target makeup position. This adds an interactive link between the makeup teaching and the target user's face and prompts the user where to apply makeup, so that learners can fully keep up with the teaching rhythm, master the makeup essentials, and improve the makeup success rate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a cosmetic teaching method provided in an embodiment of the present application;
FIG. 3 is a detailed flowchart of step S210 in the corresponding embodiment of FIG. 2;
FIG. 4 is a detailed flowchart of step S220 in the corresponding embodiment of FIG. 2;
FIG. 5 is a schematic view of a makeup correction process provided on the basis of the corresponding embodiment of FIG. 2;
FIG. 6 is a schematic diagram of the generation of a face makeup teaching material according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of a method for applying makeup teaching according to an embodiment of the present application;
FIG. 8 is a schematic flow chart diagram illustrating a method for applying makeup teaching according to another embodiment of the present application;
fig. 9 is a block diagram of a makeup teaching device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In recent years, technical research based on artificial intelligence, such as computer vision, deep learning, machine learning, image processing, and image recognition, has been actively developed. Artificial Intelligence (AI) is an emerging scientific technology for studying and developing theories, methods, techniques, and application systems that simulate and extend human intelligence. Artificial intelligence is a comprehensive discipline involving various technical categories such as chips, big data, cloud computing, the Internet of Things, distributed storage, deep learning, machine learning, and neural networks. Computer vision, as an important branch of artificial intelligence, specifically studies how machines "see" the world; computer vision technologies generally include face recognition, liveness detection, fingerprint recognition and anti-counterfeiting verification, biometric recognition, face detection, pedestrian detection, object detection, pedestrian re-identification, image processing, image recognition, image semantic understanding, image retrieval, character recognition, video processing, video content recognition, behavior recognition, three-dimensional reconstruction, virtual reality, augmented reality, simultaneous localization and mapping (SLAM), computational photography, and robot navigation and positioning.
With the research and progress of artificial intelligence technology, the technology is applied to various fields, such as security, city management, traffic management, building management, park management, face passage, face attendance, logistics management, warehouse management, robots, intelligent marketing, computational photography, mobile phone images, cloud services, smart homes, wearable equipment, unmanned driving, automatic driving, smart medical treatment, face payment, face unlocking, fingerprint unlocking, testimony verification, smart screens, smart televisions, cameras, mobile internet, live webcasts, beauty treatment, medical beauty treatment, intelligent temperature measurement and the like.
Fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device 100 may be used to execute the makeup teaching method provided by the embodiment of the present application. As shown in fig. 1, the electronic device 100 includes: one or more processors 102, and one or more memories 104 storing processor-executable instructions. Wherein the processor 102 is configured to execute the makeup teaching method provided by the following embodiments of the present application.
The processor 102 may be a gateway, or may be an intelligent terminal, or may be a device including a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or other form of processing unit having data processing capability and/or instruction execution capability, and may process data of other components in the electronic device 100, and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, or flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the makeup teaching method described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
In one embodiment, the electronic device 100 shown in FIG. 1 may also include an input device 106, an output device 108, and a data acquisition device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device 100 may have other components and structures as desired.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like. The data acquisition device 110 may acquire an image of a subject and store the acquired image in the memory 104 for use by other components. Illustratively, the data acquisition device 110 may be a camera.
In one embodiment, the devices in the example electronic device 100 for implementing the makeup teaching method of the embodiment of the present application may be integrated or distributed, such as the processor 102, the memory 104, the input device 106, and the output device 108 are integrated, and the data acquisition device 110 is separately provided.
In an embodiment, the example electronic device 100 for implementing the makeup teaching method of the embodiment of the present application may be implemented as an intelligent terminal such as a smart phone, a tablet computer, a vehicle-mounted device, a desktop computer, a server, and the like.
Fig. 2 is a schematic flow chart of a makeup teaching method provided in an embodiment of the present application. As shown in fig. 2, the method may include the following steps S210 to S230.
Step S210: and responding to a makeup teaching request triggered by a target user, and acquiring a face makeup teaching material corresponding to the target user.
The facial makeup teaching material includes makeup teaching image information indicating makeup information. The makeup teaching image information is a facial image labeled with makeup information. In one embodiment, the makeup information may include makeup position information. In some other embodiments, in addition to the makeup position information, the makeup information may include any one or more of: the makeup to be drawn at the position indicated by the makeup position information, the makeup products to be used, the makeup tools, and explanation content. The makeup position information indicates the makeup position in each frame of the makeup teaching image; the makeup product information indicates the brand, color number, and type of cosmetics; the makeup tool information indicates auxiliary tools for makeup; and the explanation content, which may be in text or voice form, explains the makeup technique.
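As a rough illustration only, one frame of makeup teaching image information together with its makeup information might be represented as a record like the following; all field names are hypothetical and not taken from the application:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical sketch of one frame of makeup teaching image information.
@dataclass
class TeachingFrame:
    image_id: int
    # Makeup position information: pixel coordinates in the teaching image.
    makeup_positions: List[Tuple[int, int]]
    # Optional auxiliary makeup information described in this embodiment.
    product_info: Optional[str] = None   # e.g. brand / color number / type
    tool_info: Optional[str] = None      # e.g. eyebrow pencil, brush
    narration: Optional[str] = None      # text or voice explanation content

frame = TeachingFrame(
    image_id=0,
    makeup_positions=[(120, 85), (130, 85)],
    product_info="xx-brand lipstick",
)
```

A facial makeup teaching material would then be an ordered sequence of such frames, one per makeup step.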
The target user refers to a user who learns makeup. In one embodiment, the target user may click on the makeup teaching software so that the makeup teaching software client receives the makeup teaching request. The target user can also click and select one type of face makeup teaching material, so that the client receives a makeup teaching request. The client terminal responds to the received makeup teaching request, and can obtain the facial makeup teaching materials from the local or the server terminal.
In another embodiment, the makeup teaching request may also be sent by the client to the server, the client sends the makeup teaching request to the server in response to a trigger instruction for the target user to click the makeup teaching software or click to select one type of facial makeup teaching material, and the server can obtain the locally stored facial makeup teaching material corresponding to the target user in response to the makeup teaching request.
It should be noted that there may be only one facial makeup teaching material; alternatively, there may be facial makeup teaching materials suited to different face shapes (for example, round face, melon-seed (oval) face, rectangular face, square face), or facial makeup teaching materials for different kinds of makeup (everyday makeup, party makeup, sheer makeup, smoky makeup, stage makeup).
Step S220: determining a target makeup location of the target user based on the facial image of the target user and the makeup location information.
The target makeup position refers to a makeup position in the face image, and the makeup position corresponds to makeup position information in the makeup teaching image. Determining the target makeup location for the target user may be performed by the client or the server.
Step S230: and instructing the target user to execute a corresponding makeup operation at the target makeup position.
The indication mode can be directly displaying the face image marked with the target makeup position, or sending the face image marked with the target makeup position to a client side for displaying.
It should be noted that the execution subject of the method provided in the embodiments of the present application may be the client, or the server, or the method may be implemented by cooperation between the client and the server; for example, steps S210 and S220 may be executed by the server while step S230 is executed by the client. Of course, other combinations are possible and are not enumerated here.
In an embodiment, assuming the target makeup position is calculated by the client, the client may directly display the target user's facial image marked with the target makeup position, thereby teaching the target user to perform a makeup operation at that position. The target makeup position may be marked by displaying a specific mark at it, such as a red dot, a circle, or a cross.
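A minimal sketch of marking the target makeup position on the facial image follows; the square color patch here merely stands in for the red dot, circle, or cross mentioned above, and a real client would render a proper marker on its display interface:

```python
import numpy as np

def mark_position(image, x, y, radius=2):
    """Paint a small red square around the target makeup position (x, y)
    as a simple stand-in for a red-dot / circle / cross marker."""
    marked = image.copy()
    marked[max(0, y - radius):y + radius + 1,
           max(0, x - radius):x + radius + 1] = (255, 0, 0)
    return marked

face = np.zeros((20, 20, 3), dtype=np.uint8)   # placeholder facial image
out = mark_position(face, 10, 10)
print(out[10, 10].tolist())   # -> [255, 0, 0]
```

The original image is left untouched, so the marker can be redrawn per frame as the user's face moves.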
In an embodiment, assuming that the target makeup location is calculated by the server, the server may issue the target makeup location of the face image to the client where the target user is located. The client side where the target user is located receives the target makeup position, and the client side where the target user is located can be triggered to display the face image marked with the target makeup position, so that the effect of teaching the target user to perform makeup operation at the target makeup position is achieved.
In an embodiment, in the process of displaying the face image of the target user marked with the target makeup position at the client, the information of the makeup product configured correspondingly to each frame of makeup teaching image can be prompted in the form of characters, patterns or voice.
The cosmetic product information may include, among other things, the product type and brand. The same or different makeup product information may be configured for different frames of the makeup teaching image. The information of the cosmetic products corresponding to each frame of the makeup teaching image can be configured in advance.
The prompt is given in the form of text, a pattern, or voice; for example, a pattern corresponding to an xx-brand lipstick may be displayed on the display interface of the target user's facial image, the text "xx-brand lipstick" may be shown directly, or the voice prompt "xx-brand lipstick" may be played, thereby reminding the learner which product to use for makeup.
In an embodiment, in the process of displaying the face image of the target user marked with the target makeup position at the client, a voice teaching segment configured corresponding to the makeup teaching image can be played.
The voice teaching fragments can be recorded in advance and stored in the client or the server. The client can also be downloaded from the server through the network. The voice teaching segment can be regarded as the explanation content of the cosmetic procedure recorded in advance by the cosmetic maker. For example, eyebrow drawing, a section of voice teaching segment for eyebrow drawing can be recorded and bound with the makeup teaching image for eyebrow drawing. When the makeup teaching image for painting eyebrows is obtained, the voice teaching fragment for painting eyebrows can be played.
According to the technical solution provided by the embodiments of the present application, the target makeup position in the facial image can be located based on the makeup position information in each frame of makeup teaching image information, and the target user is instructed to perform the corresponding makeup operation at that position. This prompts the user where to apply makeup, enables learners to keep up with the teaching rhythm and master the makeup essentials, and improves the makeup success rate.
Optionally, in a specific embodiment, as shown in fig. 3, the step S210 may include steps S211 to S213.
Step S211: and responding to the makeup teaching request, and acquiring a face image of the target user.
The facial image of the target user can be acquired by the intelligent equipment where the makeup teaching software client with the camera is located. In one embodiment, the facial image of the target user may be obtained by the client in response to the makeup teaching request. In another embodiment, the server may also obtain the facial image of the target user from the client in response to the makeup teaching request.
Step S212: and determining makeup type information matched with the target user according to the face image.
The client or the server can determine the face contour from the facial image and compare it with the contours of different face shapes to determine the target user's face shape, so the makeup type information matched to the target user can be the makeup for that face shape. For example, the makeup type information may be one of round-face makeup, melon-seed (oval) face makeup, rectangular-face makeup, and square-face makeup.
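A toy heuristic can stand in for the contour comparison described above; the application does not describe how contours are matched, so the width/height-ratio rule and thresholds below are illustrative assumptions only:

```python
def classify_face_shape(face_width, face_height):
    """Classify the face shape from its width/height ratio — a toy
    stand-in for comparing the extracted contour against reference
    face-shape contours. Thresholds are illustrative assumptions."""
    ratio = face_width / face_height
    if ratio > 0.95:
        return "round"
    if ratio < 0.75:
        return "rectangular"
    return "oval"

print(classify_face_shape(150, 150))  # -> round
```

The returned label would then select the corresponding face-shape makeup type for the material lookup in step S213.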
Step S213: and determining the face makeup teaching material corresponding to the target user according to the makeup type information.
In an embodiment, the client or the server may create a facial makeup teaching material library in advance, wherein the facial makeup teaching material library includes facial makeup teaching materials corresponding to each facial form, and according to the information (e.g., round makeup) of the makeup type determined in step S212, the facial makeup teaching material library corresponding to the information (e.g., round makeup) of the makeup type may be searched for from the facial makeup teaching material library. And taking the searched facial makeup teaching material as a facial makeup teaching material corresponding to the target user.
In another embodiment, the target user may have already finished part of the makeup, referred to herein as base makeup, and consults the teaching material to complete the remaining makeup, so teaching does not need to start from the first frame of the makeup teaching image. Specifically, the client or the server can search a pre-established facial makeup teaching material library for the facial makeup teaching material corresponding to the makeup type information, and then determine the target user's currently existing base makeup information based on the target user's facial image. The base makeup information indicates which positions of the facial image have already been made up. For example, an un-made-up facial image may be compared with the target user's facial image in terms of the color at each makeup position, and a makeup position with a large color difference may be regarded as a position where makeup is complete.
And then, based on the current basic makeup information of the target user, eliminating the teaching material corresponding to the basic makeup information in the facial makeup teaching material to obtain the facial makeup teaching material corresponding to the target user.
The facial makeup teaching material indicates the makeup position information of each frame of makeup teaching image information. Since positions where makeup is already finished no longer need to be taught, the teaching material corresponding to those positions is deleted from the facial makeup teaching material, and the remaining material is regarded as the teaching material for the makeup still to be completed. Subsequent makeup teaching can then be performed based only on this remaining material.
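The color-difference check for detecting finished base makeup can be sketched as follows; the per-pixel comparison and the threshold are simplifying assumptions, since the application only says that positions with a large color difference are treated as already made up:

```python
import numpy as np

def finished_positions(bare_face, current_face, positions, threshold=30.0):
    """Treat a makeup position as 'finished' when the color at that
    position differs strongly between the bare (un-made-up) face image
    and the current face image. Threshold is an assumed heuristic."""
    done = []
    for (x, y) in positions:
        diff = np.abs(bare_face[y, x].astype(float)
                      - current_face[y, x].astype(float)).mean()
        if diff > threshold:
            done.append((x, y))
    return done

bare = np.full((10, 10, 3), 180, dtype=np.uint8)   # uniform skin tone
cur = bare.copy()
cur[2, 3] = (200, 60, 60)                           # lipstick-like color applied
print(finished_positions(bare, cur, [(3, 2), (5, 5)]))  # -> [(3, 2)]
```

Teaching frames whose makeup positions all fall in the returned set would then be removed from the material before teaching begins.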
In another embodiment, the makeup type may also be selected by the target user. The target user can select a makeup type (such as stage makeup, round makeup and the like) according to the requirement of the target user. Based on the selection triggered by the target user, the client or the server may receive a makeup teaching request. The makeup teaching request may carry makeup type information. One makeup type can correspond to a face makeup teaching material, so that the client or the server can determine the face makeup teaching material corresponding to the target user according to the makeup type information.
In one embodiment, the face makeup teaching materials corresponding to each makeup type can be stored in a database, and the client or the server can search the face makeup teaching materials corresponding to the makeup type information from a pre-established face makeup teaching material library, and further use the searched face makeup teaching materials as the face makeup teaching materials corresponding to the target user;
in another embodiment, assuming that the target user has finished some basic makeup, after finding the facial makeup teaching material corresponding to the makeup type information from the pre-established facial makeup teaching material library, determining the currently existing basic makeup information of the target user based on the facial image of the target user; and based on the current basic makeup information of the target user, eliminating teaching materials corresponding to the basic makeup information in the facial makeup teaching materials to obtain the facial makeup teaching materials corresponding to the target user for subsequent makeup teaching.
Optionally, in a specific embodiment, as shown in fig. 4, the step S220 specifically includes the following steps S221 to S223.
Step S221: extracting first position information of face key points of the face image and second position information of the face key points in the makeup teaching image information;
the face key points may include the positions of the canthus, the corner of the mouth, the nose and the like. For example, the face key points in the face image and the face key points in the makeup teaching image information may be extracted by a feature point detection algorithm. For discrimination, the coordinates of each face keypoint in the face image may be referred to as first position information. The value of each face key point in the makeup teaching image information may be referred to as second position information.
Step S222: and constructing a position mapping relation between the second position information and the first position information according to the first position information and the second position information.
The position mapping relationship may be a transformation function f (x) for transforming the second position information x into the first position information y.
Step S223: and determining the target makeup position of the target user according to the makeup position information indicated in the makeup teaching image information and the position mapping relation.
On the basis of the obtained position mapping relation, the makeup position information indicated in the makeup teaching image information may be substituted, as x1, into the transformation function f(x), and the calculated result taken as the target makeup position of the target user.
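The mapping of steps S221 to S223 could, for instance, be realized as a least-squares affine transform fitted between the two sets of key points. The sketch below is purely illustrative: the function names, the NumPy-based fitting, and the simulated key-point values are assumptions, not part of this specification.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2-D affine transform mapping src_pts onto dst_pts.
    src_pts, dst_pts: (N, 2) arrays of corresponding face key points."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])          # homogeneous (N, 3)
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)    # (3, 2) affine matrix
    return M

def map_points(M, pts):
    """Apply the fitted transform to a list of (x, y) points."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Second position information: key points in the teaching image
teach_kp = np.array([[100, 120], [160, 118], [130, 170],
                     [110, 200], [150, 202]], dtype=float)
# First position information: the same key points in the user's face image
# (simulated here as a scaled and shifted face)
user_kp = teach_kp * 1.2 + np.array([15.0, -10.0])

M = fit_affine(teach_kp, user_kp)
# Map a makeup position indicated in the teaching image onto the user's face
target = map_points(M, [[130, 170]])                   # -> [[171.0, 194.0]]
```

In practice a robust estimator over many detected key points would be used, but the principle of transforming the second position information into the coordinate frame of the first position information is the same.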
In one embodiment, in order to make the makeup of the target user closer to the makeup in the makeup teaching image and improve the success rate of the makeup teaching, the makeup of the target user can be corrected. Therefore, as shown in fig. 5, the method provided in the embodiment of the present application may further include the following steps S501 to S503.
Step S501: and acquiring a current face image of the target user.
The steps S501 to S503 may be executed by the client or the server. The current face image of the target user can be captured in real time by the camera of the intelligent terminal where the client runs; the server can then obtain the current face image of the target user from the client.
Step S502: color values and/or makeup position information of the current makeup are determined based on the current face image.
The color values may be represented by RGB (red, green, blue) values or CMYK (a print color model) values. As the target user applies makeup, the color value at the target makeup position changes, so the color value of the target makeup position in the current face image can be acquired in real time.
The makeup position information refers to actual positions of makeup such as lip makeup, eyebrow makeup, and the like in the current face image. For example, the actual location of lip makeup in the current face image may be determined based on the known color values of the lip makeup. And determining the actual position of the eyebrow makeup in the current face image according to the known color value of the eyebrow makeup.
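As an illustrative sketch only (the Euclidean-distance thresholding, the tolerance value, and the function name below are assumptions, not prescribed by this specification), the actual position of a makeup such as lip makeup could be located by matching pixels against the known makeup color:

```python
import numpy as np

def locate_makeup(image, ref_color, tol=30.0):
    """Return a boolean mask and the (row, col) coordinates of pixels whose
    color lies within Euclidean distance `tol` of the known makeup color."""
    diff = image.astype(int) - np.asarray(ref_color, dtype=int)
    mask = np.linalg.norm(diff, axis=-1) < tol
    ys, xs = np.nonzero(mask)
    return mask, list(zip(ys.tolist(), xs.tolist()))

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (200, 40, 60)                  # a small "lip makeup" patch
mask, coords = locate_makeup(img, ref_color=(205, 45, 55))
```

The returned coordinates would then serve as the makeup position information of the current face image.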
Step S503: and indicating the target user to carry out makeup correction according to the color value and/or the makeup position information of the current makeup.
It should be noted that the process of correcting the makeup of the target user at least includes the following three cases:
determining the color value of the current makeup, and correcting the makeup of the target user based on the color value (such as correcting the shade, tone and the like of the makeup); determining makeup position information of the current makeup, and correcting the makeup of the target user based on the makeup position information (such as correcting the position of the makeup); or, the color value and the makeup position information of the current makeup are determined, and the makeup of the target user is corrected based on the color value and the makeup position information.
In an embodiment, the target user may be instructed to correct the makeup color of the target makeup location according to a difference between the color value of the target makeup location and the color value of the makeup location information.
For example, when the color value of the target makeup position in the current face image is substantially consistent with the color value of the makeup position information in the makeup teaching image, the target makeup position in the currently acquired face image may be considered finished. Otherwise, the difference between the color value of the makeup position information and the color value of the target makeup position can be calculated. If the color value of the makeup position information is greater than that of the target makeup position and the difference exceeds a threshold (i.e., the difference is a positive number greater than the threshold), the trainee may be considered to have drawn the makeup too lightly; conversely, if the color value of the makeup position information is less than that of the target makeup position and the absolute value of the (negative) difference is greater than or equal to the threshold, the trainee may be considered to have drawn the makeup too darkly. The target user can thereby be guided to correct the makeup at the target makeup position.
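As a hedged illustration of the comparison above (the averaging of channels into a single scalar, the threshold value, and the hint strings are demonstration assumptions only):

```python
def makeup_feedback(current_color, taught_color, threshold=20):
    """Compare mean channel intensity of the user's current makeup color with
    the taught color; a positive difference past the threshold is read as
    "drawn too lightly", a negative one as "drawn too darkly", following the
    comparison described in the text."""
    diff = sum(taught_color) / 3 - sum(current_color) / 3
    if abs(diff) <= threshold:
        return "ok"
    return "too light" if diff > 0 else "too dark"

hint = makeup_feedback((100, 100, 100), (150, 150, 150))   # -> "too light"
```

A real system would likely compare in a perceptual color space and per region, but this shows how a correction prompt can be derived from the signed difference.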
In an embodiment, when the color value of the target makeup position in the current face image is substantially consistent with the color value of the makeup position information in the makeup teaching image, the target makeup position in the currently acquired face image may be considered finished. A next frame of makeup teaching image can then be acquired from the face makeup teaching material, and the above steps S220 to S230 are executed to perform makeup teaching based on that next frame, and so on, until all the makeup teaching images have been used, at which point the makeup may be considered finished. In this way, the trainee can keep up with the makeup artist's pace at each step and accurately grasp the essentials of each step.
In another embodiment, the target user may be instructed to correct the makeup position according to the target makeup position and the makeup position information. For example, the user can be prompted that the eyebrow is drawn too high or too low, thereby correcting the position of the makeup.
In other embodiments, the target user may be instructed to correct both the makeup position and the makeup color according to the makeup position information, the color value of the target makeup position, and the color value of the makeup position information, so that the makeup position and the makeup color can be corrected at the same time.
In an embodiment, when both the makeup position information of the current face image and its corresponding color value are substantially consistent with the makeup position information and color value in the makeup teaching image, the target makeup position in the currently acquired face image may be considered finished. A next frame of makeup teaching image can then be acquired from the face makeup teaching material, and the above steps S220 to S230 are executed to perform makeup teaching based on that next frame, and so on, until all the makeup teaching images have been used, at which point the makeup may be considered finished. In this way, the trainee can keep up with the makeup artist's pace at each step and accurately grasp the makeup requirement of each step.
Fig. 6 is a schematic diagram of a process for constructing a facial makeup teaching material according to an embodiment of the present application. The process may be performed by a client or a server. As shown in fig. 6, the following steps S601 to S603 are included.
Step S601: and acquiring a makeup teaching video.
The makeup teaching video refers to facial makeup images shot at a preset frame rate while a professional makeup artist applies makeup. Makeup teaching videos of different makeup types can be obtained for different face types of the makeup artists and different types of makeup; accordingly, makeup teaching materials of different makeup types can be obtained.
Step S602: removing repeated image frames in the makeup teaching video to obtain at least one frame of information of the makeup teaching image;
for example, assuming that there are 5000 frames of facial makeup images in the makeup teaching video, the first frame may be compared with the second frame; if the color value of every pixel point is substantially the same, the two frames are considered duplicates and the first or second frame may be removed. If the second frame is removed, the first frame is then compared with the third frame; if no pixel color value has changed, the third frame is also a duplicate and can be removed, and so on. Only one frame is retained for each run of duplicate facial makeup images, and the retained frames are arranged in chronological order to obtain a plurality of facial makeup teaching images.
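The deduplication of step S602 can be sketched as follows. This is an illustrative simplification (the tolerance parameter and function name are assumptions); a practical implementation would likely tolerate small per-pixel differences caused by camera noise.

```python
import numpy as np

def dedupe_frames(frames, tol=0):
    """Keep, in order, only frames whose maximum per-pixel difference from
    the last kept frame exceeds `tol`; duplicate frames are dropped."""
    kept = []
    for frame in frames:
        if not kept or np.abs(frame.astype(int) - kept[-1].astype(int)).max() > tol:
            kept.append(frame)
    return kept

a = np.zeros((2, 2, 3), dtype=np.uint8)
b = a.copy()                                  # duplicate of frame a
c = np.full((2, 2, 3), 10, dtype=np.uint8)    # visibly different frame
kept = dedupe_frames([a, b, c, c])            # only a and c survive
```

Because each frame is compared against the last retained frame rather than its immediate predecessor, long runs of identical frames collapse to a single representative, matching the first-vs-second, first-vs-third comparison described above.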
Step S603: determining makeup information in each frame of makeup teaching image information;
step S604: and generating a face makeup teaching material based on each frame of makeup teaching image information and corresponding makeup information.
In one embodiment, the makeup position information corresponding to each frame of makeup teaching image information can be obtained by comparing the facial color difference between adjacent frames in a plurality of frames of makeup teaching image information and marking the facial pixel points with the facial color difference as the makeup positions corresponding to the next frame of image.
In the remaining images obtained after removing the repeated image frames, the facial color differences (for example, RGB color values are different) of corresponding pixel points between adjacent frames may be compared, and the facial pixel points having facial color differences are marked as makeup positions corresponding to the next frame of image. For example, if the first frame and the second frame in the remaining images are compared and the color value at a certain position a is changed, the position a with the changed color value can be regarded as the makeup position corresponding to the second frame of makeup teaching image. Similarly, the second frame is compared with the third frame, and if the color value of a certain position B changes, the position B with the changed color value can be regarded as the makeup position corresponding to the third frame of makeup teaching image. By analogy, the makeup position information corresponding to each frame of makeup teaching image information can be obtained, and the face makeup teaching material can be obtained. And subsequently, the target user can be guided to carry out the makeup operation based on the facial makeup teaching material.
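A minimal sketch of this adjacent-frame comparison (the function name and the representation of positions as coordinate sets are illustrative assumptions):

```python
import numpy as np

def makeup_positions(frames):
    """For each frame after the first, collect the (row, col) coordinates of
    pixels whose color differs from the previous frame; those pixels are
    treated as that frame's makeup position."""
    positions = []
    for prev, curr in zip(frames, frames[1:]):
        changed = np.any(prev.astype(int) != curr.astype(int), axis=-1)
        ys, xs = np.nonzero(changed)
        positions.append(set(zip(ys.tolist(), xs.tolist())))
    return positions

f1 = np.zeros((2, 2, 3), dtype=np.uint8)
f2 = f1.copy()
f2[0, 1] = (120, 30, 40)          # "makeup" applied at pixel (0, 1)
positions = makeup_positions([f1, f2])
```

Each entry of `positions` would then be stored as the makeup position information of the corresponding frame of makeup teaching image information.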
In order to facilitate understanding of the makeup teaching method provided in the embodiments of the present application, the method provided in the embodiments of the present application will be described below with reference to specific application scenarios.
Optionally, in a specific implementation manner, the method provided in the embodiment of the present application may be applied to a server, and for the application scenario, a possible implementation manner of the embodiment of the present application is shown in fig. 7, and specifically includes the following steps:
step S701, receiving a makeup teaching request sent by a client;
step S702: searching face makeup teaching materials corresponding to the makeup type information from a local face makeup teaching material library according to the makeup type information carried in the makeup teaching request; the face makeup teaching material includes makeup teaching image information indicating makeup position information;
in a specific implementation, after the server acquires the face makeup teaching material, the server instructs the client to collect the face image of the target user.
Step S703, acquiring a face image of a target user from a client;
step S704: extracting first position information of face key points of the face image and second position information of the face key points in the makeup teaching image information;
step S705: according to the first position information and the second position information, constructing a position mapping relation between the second position information and the first position information;
step S706: and calculating the target makeup position of the target user according to the makeup position information indicated in the makeup teaching image information and the position mapping relation.
Step S707: and sending the target makeup position of the face image to a client of the target user, and triggering the client of the target user to display the face image marked with the target makeup position.
Step S708: acquiring a current face image of a target user from a client;
step S709: determining a color value of a current makeup based on the current face image;
step S710: and sending prompt information to the client according to the color value of the current makeup and the color value of the makeup position information, and outputting the prompt information by the client to instruct a target user to make up corrections.
Optionally, in another specific implementation, the method provided in the embodiment of the present application may be applied to a client, and for the application scenario, a possible implementation manner of the embodiment of the present application is shown in fig. 8, and specifically includes the following steps:
step S801, responding to a makeup teaching request triggered by the target user selecting the target makeup type;
step S802: sending the makeup type information carried in the makeup teaching request to a server;
step S803: receiving face makeup teaching materials corresponding to the makeup type information returned by the server; the face makeup teaching material includes makeup teaching image information indicating makeup position information;
of course, in some embodiments, the above-mentioned facial makeup teaching materials may also be stored locally at the client; in that case, after receiving a makeup teaching request triggered by the target user, the client obtains the corresponding facial makeup teaching material locally.
Step S804, collecting a face image of a target user;
step S805: extracting first position information of face key points of the face image and second position information of the face key points in the makeup teaching image information;
step S806: according to the first position information and the second position information, constructing a position mapping relation between the second position information and the first position information;
step S807: and calculating the target makeup position of the target user according to the makeup position information indicated in the makeup teaching image information and the position mapping relation.
Step S808: displaying the face image marked with the target makeup position;
step S809: acquiring a current face image of a target user;
step S810: determining a color value of a current makeup based on the current face image;
step S811: and outputting prompt information according to the color value of the current makeup and the color value of the makeup position information, and indicating the target user to make up correction.
It should be noted that, in the embodiment shown in fig. 8, the step of acquiring the face image of the target user may be performed before step S802, after step S803, or simultaneously with the operations of steps S802 and S803; fig. 8 is only an example of one possible implementation and does not limit the embodiments of the present application.
According to the method provided by the embodiment of the application, the user can select the favorite makeup type to carry out makeup teaching by the client or the server, and the target makeup position of the user can be determined based on the makeup position information indicated by the makeup teaching image information, so that the user is prompted to carry out makeup at the target makeup position, and the accuracy of the makeup position of the user can be improved. When the position or color of the actual makeup is not consistent with the position or color in the makeup teaching image, the user can be prompted to correct the makeup, the accuracy of the makeup is further ensured, and the teaching effect is improved.
The following are embodiments of the apparatus of the present application that may be used to implement one of the embodiments of the method of cosmetic teaching of the present application described above. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method for teaching makeup of the present application.
Fig. 9 is a block diagram of a makeup teaching device according to an embodiment of the present application. As shown in fig. 9, the apparatus includes: a material acquisition module 910, a location determination module 920, and a makeup indication module 930.
The material obtaining module 910 is configured to, in response to a makeup teaching request triggered by a target user, obtain a facial makeup teaching material corresponding to the target user; wherein the face makeup teaching material includes makeup teaching image information indicating makeup information including makeup position information;
a position determining module 920, configured to determine a target makeup position of the target user based on the facial image of the target user and the makeup position information;
a makeup instructing module 930 configured to instruct the target user to perform a corresponding makeup operation at the target makeup location.
In one embodiment, the material obtaining module 910 includes:
an image acquisition unit for acquiring a face image of the target user in response to the makeup teaching request;
a type determining unit for determining makeup type information matched with the target user according to the face image;
and the material determining unit is used for determining the facial makeup teaching material corresponding to the target user according to the makeup type information.
In one embodiment, the makeup teaching request carries makeup type information; correspondingly, the material obtaining module 910 is specifically configured to: and determining the face makeup teaching material corresponding to the target user according to the makeup type information.
In an embodiment, the material determining unit is specifically configured to:
searching a face makeup teaching material corresponding to the makeup type information from a pre-established face makeup teaching material library, and determining the searched face makeup teaching material as the face makeup teaching material corresponding to the target user;
alternatively,
searching a face makeup teaching material corresponding to the makeup type information from a pre-established face makeup teaching material library; determining currently existing basic makeup information of the target user based on the facial image of the target user; and based on the current basic makeup information of the target user, eliminating teaching materials corresponding to the basic makeup information in the facial makeup teaching materials to obtain the facial makeup teaching materials corresponding to the target user.
In one embodiment, the position determination module 920 includes:
the position extraction unit is used for extracting first position information of the key points of the face image and second position information of the key points of the face in the makeup teaching image information;
the relationship construction unit is used for constructing a position mapping relationship between the second position information and the first position information according to the first position information and the second position information;
and the position calculating unit is used for determining the target makeup position of the target user according to the makeup position information indicated in the makeup teaching image information and the position mapping relation.
In one embodiment, the makeup indication module 930 is specifically configured to: displaying a face image of the target user marked with the target makeup location to cause the target user to perform a makeup operation at the target makeup location.
In another embodiment, the makeup indication module 930 is specifically configured to: and sending the target makeup position of the face image to a client of the target user, and triggering the client of the target user to display the face image marked with the target makeup position.
In an embodiment, the apparatus further includes:
the current image acquisition module is used for acquiring a current face image of the target user;
a makeup determination module for determining color values and/or makeup position information of the current makeup based on the current face image;
and the makeup correction module is used for indicating the target user to carry out makeup correction according to the color value and/or the makeup position information of the current makeup.
In an embodiment, the apparatus further includes:
the video acquisition module is used for acquiring a makeup teaching video;
the repeated removing module is used for removing repeated image frames in the makeup teaching video to obtain at least one frame of information of the makeup teaching image;
the position marking module is used for determining makeup information in each frame of makeup teaching image information;
and the material generating module is used for generating a face makeup teaching material based on each frame of the makeup teaching image information and the corresponding makeup information.
The implementation processes of the functions and actions of the modules in the device are specifically described in the implementation processes of the corresponding steps in the makeup teaching method, and are not described herein again.
According to the device provided by the application, a user can select the favorite makeup type to carry out makeup teaching by the client or the server, and the target makeup position of the user can be determined based on the makeup position information indicated by the makeup teaching image information, so that the user is prompted to make up at the target makeup position, and the accuracy of the makeup position of the user can be improved. When the position or color of the actual makeup is not consistent with the position or color in the makeup teaching image, the user can be prompted to correct the makeup, the accuracy of the makeup is further ensured, and the teaching effect is improved.
In another embodiment of the present application, there is also provided a computer storage medium having a computer program stored thereon, the computer program, when executed by a computer, performing the steps of the method of the above-described method embodiment.
In another embodiment of the present application, a computer program is also provided, which may be stored on a storage medium in the cloud or in the local. When being executed by a computer or a processor, the computer program is used for executing the corresponding steps of the method of the embodiment of the application and realizing the corresponding modules in the makeup teaching device according to the embodiment of the application.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A cosmetic teaching method, comprising:
responding to a makeup teaching request triggered by a target user, and acquiring a face makeup teaching material corresponding to the target user; wherein the face makeup teaching material includes makeup teaching image information indicating makeup information including makeup position information;
determining a target makeup location of the target user based on the facial image of the target user and the makeup location information;
and instructing the target user to execute a corresponding makeup operation at the target makeup position.
2. The method according to claim 1, wherein the obtaining of the facial makeup teaching material corresponding to the target user in response to the makeup teaching request triggered by the target user comprises:
responding to the makeup teaching request, and acquiring a face image of the target user;
determining makeup type information matched with the target user according to the face image;
and determining the face makeup teaching material corresponding to the target user according to the makeup type information.
3. The method of claim 1, wherein the makeup teaching request carries makeup type information;
correspondingly, the obtaining of the facial makeup teaching material corresponding to the target user in response to the makeup teaching request triggered by the target user includes:
and determining the face makeup teaching material corresponding to the target user according to the makeup type information.
4. The method according to claim 2 or 3, wherein the determining the facial makeup teaching material corresponding to the target user according to the makeup type information comprises:
searching a face makeup teaching material corresponding to the makeup type information from a pre-established face makeup teaching material library, and determining the searched face makeup teaching material as the face makeup teaching material corresponding to the target user;
alternatively,
searching a face makeup teaching material corresponding to the makeup type information from a pre-established face makeup teaching material library; determining currently existing basic makeup information of the target user based on the facial image of the target user; and based on the current basic makeup information of the target user, eliminating teaching materials corresponding to the basic makeup information in the facial makeup teaching materials to obtain the facial makeup teaching materials corresponding to the target user.
5. The method of claim 1, wherein determining the target cosmetic position of the target user based on the facial image of the target user and the cosmetic position information comprises:
extracting first position information of face key points of the face image and second position information of the face key points in the makeup teaching image information;
according to the first position information and the second position information, constructing a position mapping relation between the second position information and the first position information;
and determining the target makeup position of the target user according to the makeup position information indicated in the makeup teaching image information and the position mapping relation.
6. The method of claim 1, wherein the instructing the target user to perform a respective makeup operation at the target makeup location comprises:
displaying a face image of the target user marked with the target makeup location to cause the target user to perform a makeup operation at the target makeup location.
7. The method of claim 1, wherein the instructing the target user to perform a respective makeup operation at the target makeup location comprises:
and sending the target makeup position of the face image to a client of the target user, and triggering the client of the target user to display the face image marked with the target makeup position.
8. The method of any one of claims 1-7, wherein after the instructing the target user to perform the corresponding makeup operation at the target makeup position, the method further comprises:
acquiring a current face image of the target user;
determining color values and/or makeup position information of a current makeup based on the current face image;
and instructing the target user to perform makeup correction according to the color value and/or the makeup position information of the current makeup.
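The correction step in claim 8 reduces to comparing the color actually applied with the tutorial's target color for the same region. A toy sketch, assuming mean RGB values for the region have already been extracted (the function names and the tolerance of 20 are invented for illustration):

```python
def color_deviation(current_rgb, target_rgb):
    # Largest per-channel gap between the worn color and the tutorial color.
    return max(abs(c - t) for c, t in zip(current_rgb, target_rgb))

def correction_hint(current_rgb, target_rgb, tolerance=20):
    # Coarse feedback: a gap within tolerance means no correction is indicated.
    if color_deviation(current_rgb, target_rgb) <= tolerance:
        return "match"
    return "adjust"

print(correction_hint((200, 80, 90), (205, 85, 95)))  # → match
print(correction_hint((120, 40, 50), (205, 85, 95)))  # → adjust
```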
9. The method of claim 1, wherein before the obtaining a facial makeup teaching material corresponding to the target user, the method further comprises:
obtaining a makeup teaching video;
removing repeated image frames from the makeup teaching video to obtain at least one frame of makeup teaching image information;
determining the makeup information in each frame of makeup teaching image information;
and generating the facial makeup teaching material based on each frame of makeup teaching image information and the corresponding makeup information.
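Removing repeated frames, as in claim 9, is commonly done by keeping a frame only when it differs sufficiently from the last kept frame. A grayscale, pure-Python sketch (the threshold and the flat pixel-list representation are simplifying assumptions):

```python
def mean_abs_diff(a, b):
    # Average per-pixel difference between two equally sized frames.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def dedupe_frames(frames, threshold=5):
    """Keep a frame only if it differs enough from the last kept frame.

    frames: list of flat grayscale pixel lists, all the same length.
    """
    kept = []
    for frame in frames:
        if not kept or mean_abs_diff(kept[-1], frame) > threshold:
            kept.append(frame)
    return kept

# Near-identical neighbors collapse; only visibly distinct frames survive.
video = [[0, 0, 0], [0, 0, 1], [50, 50, 50], [50, 50, 52]]
print(dedupe_frames(video))  # → [[0, 0, 0], [50, 50, 50]]
```

A production system would more plausibly compare downsampled frames or perceptual hashes, but the keep-if-different-enough loop is the same shape.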
10. A makeup teaching device, comprising:
a material acquisition module, configured to acquire, in response to a makeup teaching request triggered by a target user, a facial makeup teaching material corresponding to the target user; wherein the facial makeup teaching material includes makeup teaching image information, the makeup teaching image information indicates makeup information, and the makeup information includes makeup position information;
a position determination module, configured to determine a target makeup position of the target user based on the facial image of the target user and the makeup position information;
and a makeup indicating module, configured to instruct the target user to perform the corresponding makeup operation at the target makeup position.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the makeup teaching method of any one of claims 1-9.
12. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the makeup teaching method of any one of claims 1-9.
CN202110874705.6A 2021-07-30 2021-07-30 Makeup teaching method and device, electronic equipment and storage medium Pending CN113781271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110874705.6A CN113781271A (en) 2021-07-30 2021-07-30 Makeup teaching method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113781271A true CN113781271A (en) 2021-12-10

Family

ID=78836619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110874705.6A Pending CN113781271A (en) 2021-07-30 2021-07-30 Makeup teaching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113781271A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114228647A (en) * 2021-12-20 2022-03-25 浙江吉利控股集团有限公司 Vehicle control method, vehicle terminal and vehicle

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104205168A (en) * 2013-02-01 2014-12-10 松下电器产业株式会社 Makeup application assistance device, makeup application assistance method, and makeup application assistance program
JP2014191813A (en) * 2013-03-28 2014-10-06 Ntt Docomo Inc Making-up action assisting device, making-up action assisting system, and making-up action assisting method
KR20160131873A (en) * 2015-05-06 2016-11-16 최명수 Makeup support method of creating and applying makeup guide content for user's face image with realtime
CN108256432A (en) * 2017-12-20 2018-07-06 歌尔股份有限公司 A kind of method and device for instructing makeup
CN108765268A (en) * 2018-05-28 2018-11-06 京东方科技集团股份有限公司 A kind of auxiliary cosmetic method, device and smart mirror
US20200211245A1 (en) * 2018-05-28 2020-07-02 Boe Technology Group Co., Ltd. Make-up assistance method and apparatus and smart mirror
KR20200002406A (en) * 2018-06-29 2020-01-08 뷰티웍스 주식회사 Make-up Teaching Beauty Service System
CN109446365A (en) * 2018-08-30 2019-03-08 新我科技(广州)有限公司 A kind of intelligent cosmetic exchange method and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MARIE-LENA ECKERT et al.: "Facial cosmetics database and impact analysis on automatic face recognition", MMSP 2013, pages 434-439 *
HUANG Yan et al.: "A multi-channel region-wise fast makeup transfer deep network", Journal of Software, vol. 30, no. 11, pages 3549-3566 *

Similar Documents

Publication Publication Date Title
CN108665492B (en) Dance teaching data processing method and system based on virtual human
CN110785767B (en) Compact linguistics-free facial expression embedding and novel triple training scheme
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN111291642B (en) Dressing processing method and device, electronic equipment and storage medium
KR102525181B1 (en) System for correcting image and image correcting method thereof
CN108920490A (en) Assist implementation method, device, electronic equipment and the storage medium of makeup
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN109635752B (en) Method for positioning key points of human face, method for processing human face image and related device
US20210345016A1 (en) Computer vision based extraction and overlay for instructional augmented reality
JP6929322B2 (en) Data expansion system, data expansion method, and program
CN112116684A (en) Image processing method, device, equipment and computer readable storage medium
US20190045270A1 (en) Intelligent Chatting on Digital Communication Network
CN108932654A (en) A kind of virtually examination adornment guidance method and device
CN111126280B (en) Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method
CN112699857A (en) Living body verification method and device based on human face posture and electronic equipment
RU2671990C1 (en) Method of displaying three-dimensional face of the object and device for it
CN113779289A (en) Drawing step reduction system based on artificial intelligence
CN112669422A (en) Simulated 3D digital human generation method and device, electronic equipment and storage medium
Yu et al. A video-based facial motion tracking and expression recognition system
CN114333046A (en) Dance action scoring method, device, equipment and storage medium
CN113781271A (en) Makeup teaching method and device, electronic equipment and storage medium
CN114779942B (en) Virtual reality immersive interaction system, device and method
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
KR101734212B1 (en) Facial expression training system
CN115798033A (en) Piano training method, system, equipment and storage medium based on gesture recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination