WO2018227349A1 - Control method, controller, smart mirror and computer-readable storage medium - Google Patents
- Publication number
- WO2018227349A1 (PCT/CN2017/087979, CN 2017087979 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- current user
- image
- rendering
- smart mirror
- virtual
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47G—HOUSEHOLD OR TABLE EQUIPMENT
- A47G1/00—Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
- A47G1/02—Mirrors used as equipment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/147—Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/164—Detection; Localisation; Normalisation using holistic features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present invention relates to the field of smart mirrors, and more particularly to a control method, a controller, a smart mirror, and a computer readable storage medium.
- the function of the current smart mirror is mainly information display, such as displaying weather, text messages, and other information.
- as a result, the current uses of smart mirrors are rather limited, and the user experience is poor.
- the present invention aims to solve at least one of the technical problems existing in the prior art. To this end, the present invention provides a control method, controller, smart mirror, and computer readable storage medium.
- the control method of an embodiment of the present invention is for controlling a smart mirror.
- the smart mirror includes a camera, and the control method includes: controlling the camera to capture a current user; determining whether the current user is a registered user; controlling the current user to log in to the smart mirror when the current user is a registered user; and controlling the smart mirror to interact with the current user according to the input of the current user and to output interaction information.
- a controller of an embodiment of the invention is used to control a smart mirror.
- the smart mirror includes a camera.
- the controller includes a control device, a determining device, a login device, and an interaction device.
- the control device is configured to control the camera to capture a current user;
- the determining device is configured to determine whether the current user is a registered user;
- the login device is configured to control the current user to log in to the smart mirror when the current user is a registered user;
- the interaction device is configured to control the smart mirror to interact with the current user according to the input of the current user and output interaction information.
- the smart mirror of an embodiment of the present invention includes a camera and the above-described controller, and the controller is electrically connected to the camera.
- a smart mirror of an embodiment of the invention includes one or more processors, a memory, and one or more programs.
- the one or more programs are stored in the memory and configured to be executed by the one or more processors, the program including instructions for executing the control method described above.
- a computer readable storage medium in accordance with an embodiment of the present invention includes a computer program for use in conjunction with an electronic device capable of displaying a picture, the computer program being executable by a processor to perform the control method described above.
- the control method, the controller, the smart mirror, and the computer-readable storage medium of the embodiments of the present invention can provide the user with various interactive functions, including beauty makeup and cartoon image rendering, after the user logs in.
- the uses of the smart mirror are thus further enriched, meeting the needs of the user's smart life and enhancing the user experience.
- FIG. 1 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 2 is a block diagram of a smart mirror in accordance with some embodiments of the present invention.
- FIG. 3 is a schematic structural view of a smart mirror according to some embodiments of the present invention.
- FIG. 4 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 5 is a block diagram of a determination device of some embodiments of the present invention.
- FIG. 6 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 7 is a block diagram of a controller of some embodiments of the present invention.
- FIG. 8 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 9 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 10 is a block diagram of an interactive device in accordance with some embodiments of the present invention.
- FIG. 11 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 12 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 13 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 14 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 15 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 16 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 17 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 18 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 19 is a block diagram of an interactive device in accordance with some embodiments of the present invention.
- FIG. 20 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 21 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 22 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 23 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 24 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 25 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 26 is a block diagram of an interactive device in accordance with some embodiments of the present invention.
- FIG. 27 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
- FIG. 28 is a block diagram of a smart mirror in accordance with some embodiments of the present invention.
- a control method of an embodiment of the present invention is used to control the smart mirror 100.
- the smart mirror 100 includes a camera 20.
- the control method includes the following steps:
- S12 Control the camera to capture the current user;
- S14 Determine whether the current user is a registered user;
- S16 Control the current user to log in to the smart mirror 100 when the current user is a registered user;
- S18 Control the smart mirror 100 to interact with the current user according to the input of the current user and output the interaction information.
- control method of the embodiment of the present invention may be implemented by the controller 10 of the embodiment of the present invention.
- the controller 10 of the embodiment of the present invention includes a control device 12, a determining device 14, a login device 16, and an interaction device 18.
- step S12 can be implemented by the control device 12, step S14 by the determining device 14, step S16 by the login device 16, and step S18 by the interaction device 18.
- control device 12 is configured to control the camera 20 to capture the current user; the determining device 14 is configured to determine whether the current user is a registered user; and the login device 16 is configured to control the current user to log in to the smart mirror 100 when the current user is a registered user.
- the interaction device 18 is configured to control the smart mirror 100 to interact with the current user and output the interaction information according to the input of the current user.
- the controller 10 of the embodiment of the present invention is applied to the smart mirror 100 of the embodiment of the present invention. That is, the smart mirror 100 of the embodiment of the present invention includes the controller 10 of the embodiment of the present invention.
- the smart mirror 100 of an embodiment of the present invention further includes a camera 20. The camera 20 and the controller 10 are electrically connected.
- the control method of the embodiment of the present invention can provide the user with various entertainment interaction and guidance functions, such as beauty makeup and cartoon image rendering, after the user successfully logs in.
- the current user must successfully log in to the smart mirror 100 before using its entertainment interaction and guidance functions. That is, these functions are available if and only if the current user is a registered user. In this way, the personal data and privacy of registered users are protected, and the information security of the smart mirror 100 is improved.
- each registered user can set a different style of use of the smart mirror 100.
- the smart mirror 100 displays the usage style corresponding to the current registered user, further enhancing the user experience.
- the smart mirror 100 displays the interactive interface.
- the interaction with the smart mirror 100 can be achieved by the current user clicking on the content in the interface.
- the control method, the controller 10, and the smart mirror 100 of the embodiments of the present invention can provide various interactive functions for the user after the user successfully logs in.
- the functions of the smart mirror 100 are thus further enriched, meeting the needs of the user's smart life and enhancing the user experience.
- the smart mirror 100 includes a registration library.
- the registration library includes registration feature information of the registered face area of all registered users.
- Step S14, determining whether the current user is a registered user, includes:
- S141 Processing the first image of the current user captured by the camera 20 to obtain the face area to be tested of the current user;
- S142 Processing the face area to be tested to obtain the feature points to be tested of the face area to be tested;
- S143 Processing the feature points to be tested to extract the feature information to be tested of the face area to be tested;
- S144 Comparing the feature information to be tested with the registration feature information to obtain a comparison result;
- S145 Confirming that the current user is a registered user when the comparison result is greater than a predetermined threshold.
- the determining device 14 includes a first processing unit 141, a second processing unit 142, a third processing unit 143, a comparison unit 144, and a first confirmation unit 145.
- step S141 may be implemented by the first processing unit 141, step S142 by the second processing unit 142, step S143 by the third processing unit 143, step S144 by the comparison unit 144, and step S145 by the first confirmation unit 145.
- the first processing unit 141 is configured to process the first image of the current user captured by the camera 20 to acquire the face area to be tested of the current user; the second processing unit 142 is configured to process the face area to be tested to obtain the feature points to be tested of the face area to be tested; the third processing unit 143 is configured to process the feature points to be tested to extract the feature information to be tested of the face area to be tested; the comparison unit 144 is configured to compare the feature information to be tested with the registration feature information to obtain a comparison result; and the first confirmation unit 145 is configured to confirm that the current user is a registered user when the comparison result is greater than a predetermined threshold.
- the feature points to be tested include feature points such as an eye, a nose, a mouth, and a facial contour of the face region to be tested.
- the registration feature information or the feature information to be tested includes feature information of the registered user's or the current user's face, such as the relative positions and distances of the eyes, nose, and mouth, and the sizes of the eyes, nose, and mouth. The feature information of the current user is compared with the registration feature information of the registered users; when the comparison result is greater than the predetermined threshold, the current user's face closely matches a registered user's face, so the current user is confirmed to be that registered user. After confirming that the current user is a registered user, the current user successfully logs in to the smart mirror 100.
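The comparison against the registration library can be sketched as follows. This is an illustrative example rather than the patent's implementation: it assumes each user's registration feature information has been reduced to a numeric feature vector, and uses cosine similarity as the comparison result checked against a predetermined threshold.

```python
import numpy as np

def compare_features(test_vec, registered_vecs, threshold=0.9):
    """Compare a face feature vector to be tested against a registration
    library {user_id: registered feature vector}.

    Returns the best-matching user id when its similarity exceeds
    `threshold`, otherwise None (login rejected).
    """
    best_id, best_sim = None, -1.0
    for user_id, reg_vec in registered_vecs.items():
        # cosine similarity between the vector to be tested and a
        # registered vector; higher means a closer face match
        sim = float(np.dot(test_vec, reg_vec) /
                    (np.linalg.norm(test_vec) * np.linalg.norm(reg_vec)))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim > threshold else None
```

With this shape, login succeeds only when some registered vector is close enough to the probe, mirroring the "greater than a predetermined threshold" condition in step S145.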
- the smart mirror 100 provides rich usage functions only for registered users, and ensures the information security of registered users.
- the registered user can set the usage style of the smart mirror 100, such as the color of the display interface, the background pattern, and the like. In this way, after the current user successfully logs in to the smart mirror 100, the smart mirror 100 can display the usage style that the current user likes, further enhancing the user experience.
- in this embodiment, the current user's login is verified by face recognition.
- the current user's login can also be verified by voice recognition, fingerprint recognition, iris recognition, and the like.
- the control method of the embodiment of the present invention further includes:
- S111 Control the camera 20 to capture a registered user;
- S112 Establish a personal record file of the registered user according to the input of the registered user.
- the controller 10 further includes an establishing device 11.
- step S111 can be implemented by the control device 12, and step S112 can be implemented by the establishing device 11.
- the control device 12 is further configured to control the camera 20 to capture a registered user; the establishing device 11 is configured to establish a personal record file of the registered user according to the input of the registered user.
- the smart mirror 100 processes the captured image of the registered user to acquire the registration feature points of the registered user, and stores them in the registration library for subsequent identification and matching at login.
- Registered users can make edit inputs on the smart mirror 100 to create their own personal record files.
- the personal record file includes the nickname, avatar, and personal signature of the registered user. Registered users can also create their own cartoon characters and store them in personal record files.
- the smart mirror 100 displays all the information in the current user's personal record file or displays part of the information in the current user's personal record file.
- the current user may select to save the output interactive information.
- the saved interactive information is also stored in the personal record file. Users can view their saved interactive information and/or historical interactive content through personal record files. In this way, the user experience can be further improved.
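The personal record file described above can be sketched as a simple data structure. This is an illustrative assumption about its shape; field names such as `nickname` and `saved_interactions` are hypothetical, taken from the fields the text lists.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalRecordFile:
    """Per-registered-user profile stored by the smart mirror (sketch)."""
    nickname: str
    avatar: str = ""
    signature: str = ""
    cartoon_character: str = ""          # user-created cartoon image
    saved_interactions: list = field(default_factory=list)

    def save_interaction(self, info: str) -> None:
        # saved interaction information is appended to the profile so the
        # user can review saved results and history later
        self.saved_interactions.append(info)
```

A profile like this would be created at registration (step S112) and consulted after each successful login to restore the user's preferred style and history.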
- the control method of the embodiment of the present invention includes:
- S171 Control the camera 20 to capture a second image of the current user;
- S172 Control the smart mirror 100 to display the second image.
- step S171 and step S172 can be implemented by the control device 12.
- the control device 12 is also used to control the camera 20 to capture a second image of the current user and to control the smart mirror 100 to display the second image.
- the first image captured by the camera 20 is used for login verification of face recognition.
- the second image taken by the camera 20 is used for the interaction of the current user with the smart mirror 100.
- the interacting includes performing a cute-face process on the second image.
- the smart mirror includes a cute-face material library.
- Step S18, controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information, includes:
- S1811 Processing the second image to obtain the cute-face region of the current user;
- S1812 Processing the cute-face region to obtain the cute-face feature points of the cute-face region;
- S1813 Determining the cute-face material according to the input of the current user;
- S1814 Performing matching fusion processing on the cute-face material and the second image according to the cute-face feature points to obtain a cute-face image.
- the interaction device 18 includes a second validation unit 181 and a fourth processing unit 182.
- Step S1811 may be implemented by the first processing unit 141
- step S1812 may be implemented by the second processing unit 142
- step S1813 may be implemented by the second validation unit 181
- step S1814 may be implemented by the fourth processing unit 182.
- the first processing unit 141 is further configured to process the second image to obtain the cute-face region of the current user; the second processing unit 142 is further configured to process the cute-face region to obtain the cute-face feature points of the cute-face region.
- the second confirmation unit 181 is configured to determine the cute-face material according to the current user's input, and the fourth processing unit 182 is configured to perform matching fusion processing on the cute-face material and the second image according to the cute-face feature points to obtain the cute-face image.
- the cute-face feature points include feature points such as the eyes, nose, mouth, ears, and hair.
- the cute-face process superimposes decorative effects on the current user's face in the second image, for example superimposing a cute expression on the face, a hairpin on the hair, animal ears on the head, an animal nose on the nose, or animal whiskers on the cheeks.
- the cute-face material can be specified by the user.
- the smart mirror 100 displays the cute-face image dynamically, frame by frame, forming an interesting animation effect.
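The superimposition described above, placing decorative material at face feature points and re-rendering each frame, can be sketched as a per-pixel alpha blend. This is an illustrative sketch under assumed array shapes, not the patent's stated implementation.

```python
import numpy as np

def overlay_material(frame, material, alpha, anchor):
    """Alpha-blend a decorative material (e.g. animal ears) onto a frame.

    frame:    H x W x 3 float array, the current camera frame
    material: h x w x 3 float array, the decoration pixels
    alpha:    h x w float array in [0, 1], per-pixel opacity of the material
    anchor:   (row, col) where the material's top-left corner is placed,
              typically derived from a detected face feature point
    """
    out = frame.copy()
    r, c = anchor
    h, w = alpha.shape
    a = alpha[..., None]  # broadcast opacity over the color channels
    # material shows where alpha is 1; the original frame where alpha is 0
    out[r:r + h, c:c + w] = a * material + (1 - a) * out[r:r + h, c:c + w]
    return out
```

Calling this once per captured frame, with the anchor updated from the newest feature points, yields the dynamic frame-by-frame animation effect the text describes.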
- Step S18, controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information, includes:
- S1821 Processing the second image to obtain the facial face region of the current user;
- S1822 Processing the facial face region to obtain the beauty feature points of the facial face region;
- S1823 Performing a beauty treatment on the second image according to the current user's input and the beauty feature points to obtain a beauty image.
- the interaction device 18 includes a fourth processing unit 182.
- step S1821 may be implemented by the first processing unit 141, step S1822 by the second processing unit 142, and step S1823 by the fourth processing unit 182.
- the first processing unit 141 is further configured to process the second image to obtain the facial face region of the current user; the second processing unit 142 is further configured to process the facial face region to obtain the beauty feature points of the facial face region; and the fourth processing unit 182 is configured to perform a beauty treatment on the second image according to the current user's input and the beauty feature points to obtain the beauty image.
- the beauty treatment includes one or more of a whitening filter, a rosy filter, a face-slimming module, and an eye-enlarging module. The beauty feature points include the face, the eyes, and the like.
- the user can apply the beauty treatment to the second image by clicking the corresponding operation option. For example, as shown in FIG. 13, after the user clicks the whitening filter, the fourth processing unit 182 performs whitening processing on the facial face region in the second image. In this way, the user can select a beauty function to apply to the second image, the processed beauty image is displayed on the smart mirror 100, and the user's viewing experience is enhanced.
- the smart mirror 100 displays the beauty image dynamically, frame by frame. That is, the camera 20 captures the current user in real time, the current feature points are obtained, and the beauty treatment is performed on the images obtained in real time. In this way, even if the current user is in motion, for example turning the head by a certain angle, the smart mirror 100 displays the current user's beauty image in real time.
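As a rough illustration of the whitening filter mentioned above (a sketch, not the patented algorithm), whitening can be approximated by blending each pixel of the face region toward white with an adjustable strength:

```python
import numpy as np

def whitening_filter(image, strength=0.4):
    """Simple whitening sketch: lift pixel values toward white.

    image:    uint8 array (e.g. H x W x 3), the region to whiten
    strength: blend factor in [0, 1]; 0 leaves the image unchanged,
              1 produces a fully white result
    """
    img = image.astype(np.float32)
    # linear blend of each pixel toward pure white (255)
    out = (1 - strength) * img + strength * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a real pipeline this would be applied only inside the detected facial face region, and re-run on every frame so the effect tracks head movement.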
- the interacting includes performing a virtual makeup test on the second image.
- the smart mirror includes a makeup material library.
- Step S18, controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information, includes:
- S1831 Processing the second image to obtain the makeup face area of the current user;
- S1832 Processing the makeup face area to obtain the makeup-test feature points of the makeup face area;
- S1833 Determining the makeup material according to the input of the current user;
- S1834 Performing matching fusion processing on the makeup material and the second image according to the makeup-test feature points to obtain a virtual makeup image.
- the interaction device 18 includes a second validation unit 181 and a fourth processing unit 182.
- Step S1831 may be implemented by the first processing unit 141
- step S1832 may be implemented by the second processing unit 142
- step S1833 may be implemented by the second validation unit 181
- step S1834 may be implemented by the fourth processing unit 182.
- the first processing unit 141 is further configured to process the second image to obtain the makeup face area of the current user; the second processing unit 142 is further configured to process the makeup face area to obtain the makeup-test feature points of the makeup face area; the second confirmation unit 181 is configured to determine the makeup material according to the input of the current user; and the fourth processing unit 182 is configured to perform matching fusion processing on the makeup material and the second image according to the makeup-test feature points to obtain a virtual makeup image.
- Make-up materials include one or more of eye shadow material, eyeliner material, blush material, lip gloss material and eyebrow material.
- the makeup-test feature points include feature points such as the eyes, nose, eyebrows, and cheeks.
- the fourth processing unit 182 performs matching fusion processing on the makeup material selected by the current user and the second image according to the determined makeup-test feature points.
- the smart mirror 100 displays the resulting virtual makeup image. For example, as shown in FIG. 15, after the current user clicks the eye shadow material and the lip gloss material, the smart mirror 100 displays the processed virtual makeup image.
- when applying makeup, the current user can refer to the virtual makeup image displayed in the smart mirror 100 to decide on a preferred look. In this way, the interaction between the user and the smart mirror 100 is enhanced, improving the user experience.
- the smart mirror 100 displays the virtual makeup image frame by frame in the form of a dynamic frame. Even if the current user is in motion, the smart mirror 100 can continue to display the virtual makeup image after the virtual makeup treatment.
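The matching and fusion step described above can be illustrated with a minimal sketch: alpha-blending a pre-cut RGBA makeup material (here a hypothetical lip-gloss patch) onto the frame at a position derived from a makeup test feature point. The function name, image sizes, and coordinates are illustrative assumptions, not from the patent.

```python
import numpy as np

def apply_makeup(frame, material_rgba, center):
    """Alpha-blend an RGBA makeup material onto a frame, centred on a
    detected feature point given as (x, y) pixel coordinates."""
    h, w = material_rgba.shape[:2]
    y0 = center[1] - h // 2
    x0 = center[0] - w // 2
    roi = frame[y0:y0 + h, x0:x0 + w].astype(np.float32)
    rgb = material_rgba[..., :3].astype(np.float32)
    alpha = material_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * roi   # per-pixel compositing
    out = frame.copy()
    out[y0:y0 + h, x0:x0 + w] = blended.astype(frame.dtype)
    return out

# A black 100x100 frame and a semi-transparent red 10x10 lip-gloss patch.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
material = np.zeros((10, 10, 4), dtype=np.uint8)
material[..., 0] = 200   # red channel of the patch
material[..., 3] = 128   # ~50% opacity
result = apply_makeup(frame, material, center=(50, 50))
```

In a real pipeline the material would first be warped to the detected lip contour; this sketch only shows the fusion itself.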
- the interacting includes performing a 2D mask rendering process on the second image.
- the smart mirror 100 includes a 2D mask material library.
- Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
- S1841 processing the second image to obtain a 2D mask rendering face region of the current user;
- S1842 processing the 2D mask rendering face region to obtain 2D mask rendering feature points of the 2D mask rendering face region;
- S1843 determining a 2D mask material according to the input of the current user; and
- S1844 performing matching and fusion processing on the 2D mask material and the second image according to the 2D mask rendering feature points to obtain a 2D mask rendering image.
- the interaction device 18 includes a second confirmation unit 181 and a fourth processing unit 182.
- Step S1841 may be implemented by the first processing unit 141
- step S1842 may be implemented by the second processing unit 142
- step S1843 may be implemented by the second validation unit 181
- step S1844 may be implemented by the fourth processing unit 182.
- the first processing unit 141 is further configured to process the second image to obtain the 2D mask rendering face region of the current user; the second processing unit 142 is further configured to process the 2D mask rendering face region to obtain the 2D mask rendering feature points of the 2D mask rendering face region; the second confirmation unit 181 is configured to determine the 2D mask material according to the input of the current user; and the fourth processing unit 182 is configured to perform matching and fusion processing on the 2D mask material and the second image according to the 2D mask rendering feature points to obtain a 2D mask rendering image.
- the 2D mask rendering feature points mainly include the eyes, nose, and mouth.
- the 2D mask materials include a classic white mask, Peking Opera facial masks, animal faces, cartoon character masks, and the like.
- the fourth processing unit 182 performs a matching fusion process on the 2D mask material and the second image.
- the smart mirror 100 displays the 2D mask rendering image obtained after the matching and fusion processing. As shown in FIG. 17, when the user clicks on the white-mask 2D mask material, the smart mirror 100 displays the processed 2D mask rendering image. In this way, the user can intuitively see the effect of wearing the mask, which adds to the fun of using the smart mirror 100.
- the smart mirror 100 displays the 2D mask rendering image frame by frame as a dynamic video. Even if the current user is in motion, the 2D mask still matches the 2D mask rendering face region, and the user can view the rendering effect dynamically. In this way, the smart mirror 100 gives the user the feeling of actually wearing the mask in the mirror.
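One plausible way to match a flat mask to the face region is to fit an affine transform from three template landmarks of the mask image (eyes and mouth) onto the corresponding detected feature points, then map the whole mask through it. The template coordinates below are assumed for illustration.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Solve the 3x2 matrix X so that [x, y, 1] @ X = [x', y'] maps three
    mask template points onto three detected feature points."""
    A = np.hstack([src_pts, np.ones((3, 1))])
    return np.linalg.solve(A, dst_pts)

def apply_affine(X, pts):
    """Transform an array of (x, y) points with the fitted affine."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ X

# Hypothetical landmarks of a 100x100 mask template: two eyes and the mouth.
template = np.array([[30., 40.], [70., 40.], [50., 80.]])
# Feature points detected in the 2D mask rendering face region (assumed
# here to be the template shifted by 100 px in both axes).
detected = template + 100.0
X = fit_affine(template, detected)
corner = apply_affine(X, np.array([[0., 0.]]))   # where the mask corner lands
```

With the transform in hand, every pixel (or the mask's corner quad) can be mapped into frame coordinates before compositing.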
- the interacting includes performing a 3D cartoon rendering process on the second image.
- the smart mirror 100 includes a 3D engine, a general 3D face model, and a 3D cartoon material library.
- Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
- S1851 processing the second image to obtain a 3D cartoon image rendering face area of the current user
- S1852 processing the 3D cartoon image rendering face region to obtain 3D cartoon image rendering feature points of the 3D cartoon image rendering face region;
- S1853 acquiring a first posture parameter of the current user according to the general 3D face model and the 3D cartoon image rendering feature points;
- S1854 determining a 3D cartoon image material according to the input of the current user; and
- S1855 controlling the 3D engine to perform 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon image material.
- the interaction device 18 includes an acquisition unit 183, a second confirmation unit 181, and a fourth processing unit 182.
- Step S1851 may be implemented by the first processing unit 141
- step S1852 may be implemented by the second processing unit 142
- step S1853 may be implemented by the obtaining unit 183
- step S1854 may be implemented by the second confirming unit 181
- step S1855 may be implemented by the fourth processing unit 182.
- the first processing unit 141 is further configured to process the second image to obtain the 3D cartoon image rendering face region of the current user; the second processing unit 142 is further configured to process the 3D cartoon image rendering face region to obtain the 3D cartoon image rendering feature points of the 3D cartoon image rendering face region; the obtaining unit 183 is configured to acquire the first posture parameter of the current user according to the general 3D face model and the 3D cartoon image rendering feature points; the second confirmation unit 181 is configured to determine the 3D cartoon image material according to the input of the current user; and the fourth processing unit 182 is configured to control the 3D engine to perform 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon image material.
- performing 3D cartoon image rendering processing on the second image means capturing the motion of the person in the second image and controlling the 3D cartoon image to imitate that motion.
- the 3D cartoon image material library includes a variety of 3D cartoon image materials, such as SpongeBob SquarePants, Jingle Cat (Doraemon), Kung Fu Panda, and Winnie the Pooh.
- the 3D cartoon image rendering feature points mainly include the eyes, nose, mouth, and head of the 3D cartoon image rendering face region.
- the first posture parameter includes the deflection angle of the head, the opening and closing of the eyes, the movement of the mouth, and the like.
- the matching of the general 3D face model with the 3D cartoon image rendering feature points is used to convert the 2D planar image captured by the camera 20 into 3D stereoscopic posture parameters, that is, the first posture parameter.
- the 3D engine can perform the 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon image material, so that the 3D cartoon image follows the current user's head and facial motion as a 3D stereoscopic effect. As shown in FIG. 20, the 3D cartoon material selected by the current user is the Jingle Cat. When the user widens his or her eyes and laughs, the Jingle Cat simultaneously widens its eyes and laughs, realizing real-time imitation and following. In this way, the interaction between the user and the smart mirror 100 is greatly enhanced.
- in some embodiments, the obtained 3D cartoon image rendering feature points may also be matched with a general 3D face model in a general 3D face model library to obtain the 3D stereoscopic posture parameters.
- the general 3D face model library stores general 3D face models of different shapes. In this way, different general 3D face models can be selected according to differences in users' heads, faces, and facial features, improving the accuracy of the 3D stereoscopic posture parameters and further optimizing the 3D cartoon image rendering effect, so that the 3D cartoon image's imitation and following are more accurate.
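The 2D-to-3D conversion described above can be sketched as a scaled-orthographic least-squares fit of the generic 3D landmarks to the detected 2D feature points, from which a head angle is read out. The model coordinates are invented for illustration, and only the yaw is recovered here; a full pose solver (e.g. a PnP method) would recover all rotation and translation parameters.

```python
import numpy as np

# Hypothetical generic 3D face model landmarks (arbitrary units): nose tip,
# chin, eye corners, mouth corners -- stand-ins for the patent's
# "general 3D face model".
MODEL_3D = np.array([
    [0., 0., 30.], [0., -60., 10.],
    [-30., 30., 10.], [30., 30., 10.],
    [-20., -30., 15.], [20., -30., 15.],
])

def estimate_yaw(points_2d):
    """Fit [X, 1] @ P ~= U by least squares (scaled-orthographic camera)
    and read the head yaw out of the recovered first projection row."""
    A = np.hstack([MODEL_3D, np.ones((len(MODEL_3D), 1))])
    P, *_ = np.linalg.lstsq(A, points_2d, rcond=None)
    r1 = P[:3, 0]                      # ~ s * first row of the rotation
    return np.arctan2(r1[2], r1[0])    # yaw about the vertical axis

# Synthesise 2D landmarks from a known 0.3 rad yaw and recover it.
theta = 0.3
c, s = np.cos(theta), np.sin(theta)
rotated_x = MODEL_3D[:, 0] * c + MODEL_3D[:, 2] * s
points_2d = np.stack([rotated_x, MODEL_3D[:, 1]], axis=1)
yaw = estimate_yaw(points_2d)
```

The recovered angle is what would drive the 3D engine so that the cartoon character turns its head with the user.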
- the interacting includes performing a virtual glasses rendering process on the second image.
- the smart mirror 100 includes a 3D engine, a universal 3D face model, and a virtual glasses material library.
- Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
- S1861 processing the second image to obtain a virtual glasses rendering face region of the current user;
- S1862 processing the virtual glasses rendering face region to obtain virtual glasses rendering feature points of the virtual glasses rendering face region;
- S1863 acquiring a second posture parameter of the current user according to the general 3D face model and the virtual glasses rendering feature points;
- S1864 determining a virtual glasses material according to the input of the current user; and
- S1865 controlling the 3D engine to perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material.
- the interaction device 18 includes an acquisition unit 183, a second confirmation unit 181, and a fourth processing unit 182.
- Step S1861 may be implemented by the first processing unit 141
- step S1862 may be implemented by the second processing unit 142
- step S1863 may be implemented by the obtaining unit 183
- step S1864 may be implemented by the second confirming unit 181
- step S1865 may be implemented by the fourth processing unit 182.
- the first processing unit 141 is further configured to process the second image to obtain the virtual glasses rendering face region of the current user; the second processing unit 142 is further configured to process the virtual glasses rendering face region to obtain the virtual glasses rendering feature points of the virtual glasses rendering face region; the obtaining unit 183 is configured to acquire the second posture parameter of the current user according to the general 3D face model and the virtual glasses rendering feature points; the second confirmation unit 181 is configured to determine the virtual glasses material according to the input of the current user; and the fourth processing unit 182 is configured to control the 3D engine to perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material.
- performing virtual glasses rendering processing on the second image means putting virtual glasses on the person in the second image, where the virtual glasses move with the motion of the person's head to realize imitation and following.
- the virtual glasses material library includes a variety of virtual glasses materials of different colors and shapes.
- the virtual glasses rendering feature points mainly include the head and eye portions of the virtual glasses rendering face region.
- the virtual glasses rendering posture parameter, that is, the second posture parameter, includes the movements of the head and the eyes.
- the matching of the general 3D face model with the virtual glasses rendering feature points is used to convert the 2D planar image captured by the camera 20 into 3D stereoscopic posture parameters, that is, the second posture parameter.
- the 3D engine can perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material. Then the current user sees the 3D stereoscopic display effect after wearing the glasses in the smart mirror 100.
- the virtual glasses can also move in real time with the head, thereby achieving an exact match between the virtual glasses and the eye portion.
- the virtual glasses material selected by the user is thick frame black glasses, and the smart mirror 100 displays an image after the user wears the thick frame black glasses.
- the thick frame black glasses also accurately match the user's eyes when the user's head is rotated.
- the user can refer to the virtual glasses rendering effect to select a style of glasses that suits him or her, which further increases the functionality and practicability of the smart mirror 100 and adds to its fun.
- in some embodiments, the obtained virtual glasses rendering feature points may also be matched with a general 3D face model in a general 3D face model library to obtain the 3D stereoscopic posture parameters.
- the general 3D face model library stores general 3D face models of different shapes. In this way, different general 3D face models can be selected according to differences in users' heads, faces, and facial features, improving the accuracy of the 3D stereoscopic posture parameters and further optimizing the virtual glasses rendering effect, so that the virtual glasses match the user's eyes more precisely.
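A minimal sketch of pinning the glasses to the eye region: derive the sprite's scale, in-plane rotation (head roll), and centre from the two detected eye feature points. The template width and coordinates are assumptions for illustration; the full 3D posture parameter would additionally rotate the glasses model in depth.

```python
import numpy as np

def glasses_placement(left_eye, right_eye, template_width=100.0):
    """Compute the scale, roll angle, and centre needed to pin a
    virtual-glasses sprite to the two detected eye feature points."""
    left_eye = np.asarray(left_eye, dtype=float)
    right_eye = np.asarray(right_eye, dtype=float)
    d = right_eye - left_eye
    scale = np.hypot(*d) / template_width   # inter-eye distance sets size
    roll = np.arctan2(d[1], d[0])           # head roll tilts the glasses
    centre = (left_eye + right_eye) / 2.0   # bridge sits between the eyes
    return scale, roll, centre

# Eyes detected 100 px apart on a horizontal line (assumed coordinates).
scale, roll, centre = glasses_placement((100, 100), (200, 100))
```

Recomputing this per frame is what keeps the glasses matched to the eyes as the head moves.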
- the interaction includes performing a virtual hairstyle rendering process on the second image.
- the smart mirror 100 includes a 3D engine, a general 3D face model, and a virtual hairstyle material library.
- Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
- S1871 processing the second image to obtain a virtual hairstyle rendering face region of the current user;
- S1872 processing the virtual hairstyle rendering face region to obtain virtual hairstyle rendering feature points of the virtual hairstyle rendering face region;
- S1873 acquiring a third posture parameter of the current user according to the general 3D face model and the virtual hairstyle rendering feature points;
- S1874 determining a virtual hairstyle material according to the input of the current user; and
- S1875 controlling the 3D engine to perform virtual hairstyle rendering processing on the second image according to the third posture parameter and the virtual hairstyle material.
- the interaction device 18 includes an acquisition unit 183, a second confirmation unit 181, and a fourth processing unit 182.
- Step S1871 can be implemented by the first processing unit 141
- step S1872 can be implemented by the second processing unit 142
- step S1873 can be implemented by the obtaining unit 183
- step S1874 can be implemented by the second confirming unit 181
- step S1875 can be implemented by the fourth processing unit 182.
- the first processing unit 141 is further configured to process the second image to obtain the virtual hairstyle rendering face region of the current user; the second processing unit 142 is further configured to process the virtual hairstyle rendering face region to obtain the virtual hairstyle rendering feature points of the virtual hairstyle rendering face region; the obtaining unit 183 is configured to acquire the third posture parameter of the current user according to the general 3D face model and the virtual hairstyle rendering feature points; the second confirmation unit 181 is configured to determine the virtual hairstyle material according to the input of the current user; and the fourth processing unit 182 is configured to control the 3D engine to perform virtual hairstyle rendering processing on the second image according to the third posture parameter and the virtual hairstyle material.
- performing virtual hairstyle rendering processing on the second image means putting a virtual hairstyle on the person in the second image, where the virtual hairstyle moves with the motion of the person's head to realize imitation and following.
- the virtual hairstyle material library includes a variety of virtual hairstyle materials of different shapes and colors.
- the virtual hairstyle rendering feature point mainly includes the head portion of the current user.
- the virtual hairstyle rendering pose parameter includes the motion of the head.
- the matching of the universal 3D face model and the virtual hairstyle rendering feature points is used to convert the 2D plane image captured by the camera 20 into a 3D stereo attitude parameter, that is, a third posture parameter.
- the 3D engine can perform the virtual hairstyle rendering process on the second image according to the third posture parameter and the virtual hairstyle material.
- the current user can then see the 3D stereoscopic effect of the virtual hairstyle being tried in the smart mirror 100.
- the virtual hairstyle can also move in real time with the head, thereby achieving an exact match between the virtual hairstyle and the head.
- for example, the virtual hairstyle material selected by the current user is a short hairstyle, and the smart mirror 100 displays the image of the user wearing the short hairstyle. When the user's head rotates, the short hairstyle still exactly matches the user's head. In this way, the user can refer to the effect of the virtual hairstyle rendering processing to select a suitable hairstyle, which also increases the practicability and fun of the smart mirror 100.
- in some embodiments, the obtained virtual hairstyle rendering feature points may also be matched with a general 3D face model in a general 3D face model library to obtain the 3D stereoscopic posture parameters.
- the general 3D face model library stores general 3D face models of different shapes. In this way, different general 3D face models can be selected according to differences in users' heads, faces, and facial features, improving the accuracy of the 3D stereoscopic posture parameters and further optimizing the virtual hairstyle rendering effect, so that the virtual hairstyle matches the user's head more precisely.
- the interaction further includes providing daily life care guidance to the current user.
- Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
- S188 Provide daily life care guidance for the current user according to the input of the user.
- the interaction device 18 further includes a coaching unit 185.
- Step S188 can be implemented by the coaching unit 185.
- the guiding unit 185 is configured to provide daily life care guidance for the current user according to the user's input.
- daily life care guidance includes teaching the user how to properly brush their teeth, wash their face properly, perform facial massage, and the like.
- for example, the smart mirror 100 displays daily care guidance content for tooth brushing in the form of a video or pictures. In this way, the practicability of the smart mirror 100 is increased.
- the second confirmation unit 181 is the same in the cute-face (萌颜) processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and the virtual hairstyle rendering processing. That is, the second confirmation unit 181 may perform the contents of step S1813, step S1833, step S1843, step S1854, step S1864, and/or step S1874.
- the fourth processing unit 182 is the same in the cute-face (萌颜) processing, the beauty processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and the virtual hairstyle rendering processing. That is, the fourth processing unit 182 may perform the contents of step S1814, step S1823, step S1834, step S1844, step S1855, step S1865, and/or step S1875.
- the control method, the controller 10, and the smart mirror 100 of the embodiments of the present invention can perform the cute-face (萌颜) processing, the beauty processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and/or the virtual hairstyle rendering processing on the second image simultaneously or sequentially. For example, the controller 10 can perform the cute-face processing, the beauty processing, and the 3D cartoon image rendering processing on the second image at the same time, or perform the beauty processing, the virtual makeup test, the virtual glasses rendering processing, and the virtual hairstyle rendering processing on the second image sequentially. In some embodiments, the processing order of the image processing modes can be changed at will.
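One way to organize this "simultaneously or sequentially" combination is as a composable pipeline in which each processing mode is a function from frame to frame, applied in the user-chosen order. The stage names and the list-of-labels "frame" below are placeholders for illustration, not the patent's implementation.

```python
from functools import reduce

# Placeholder stages standing in for the image-processing modes; each
# takes a frame (here just a list of applied labels) and returns a new one.
def beauty(frame):
    return frame + ["beauty"]

def makeup(frame):
    return frame + ["virtual_makeup"]

def cartoon_3d(frame):
    return frame + ["3d_cartoon"]

def run_pipeline(frame, stages):
    """Apply the selected processing modes in the given order."""
    return reduce(lambda f, stage: stage(f), stages, frame)

sequential = run_pipeline([], [beauty, makeup, cartoon_3d])
reordered = run_pipeline([], [cartoon_3d, beauty])
```

Reordering the stage list is all that is needed to change the processing order at will.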
- a smart mirror 100 of an embodiment of the present invention includes one or more processors 30, a memory 40, and one or more programs 41.
- one or more programs 41 are stored in the memory 40 and configured to be executed by one or more processors 30.
- the program 41 includes instructions for executing the control method of any of the above embodiments.
- program 41 includes instructions for performing the following steps:
- S18 Control the smart mirror 100 to interact with the current user according to the input of the current user and output the interaction information.
- a computer readable storage medium in accordance with an embodiment of the present invention includes a computer program for use in conjunction with an electronic device capable of displaying a picture.
- the computer program can be executed by a processor to perform the control method described in any of the above embodiments.
- a processor can be used to perform the following steps:
- S18 Control the smart mirror 100 to interact with the current user according to the input of the current user and output the interaction information.
- the control method, the controller 10, the smart mirror 100, and the computer readable storage medium of the embodiments of the present invention can provide the registered user with the cute-face (萌颜) processing, the beauty processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and the virtual hairstyle rendering processing.
- in this way, the functionality of the smart mirror 100 is increased, its practicability is higher, and the fun of the smart mirror 100 and the user experience are also improved.
- first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
- features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
- the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
- An ordered list of executable instructions for implementing logical functions may be embodied in any computer readable medium, for use by or in connection with an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them).
- a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
- computer readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read only memory (CD-ROM).
- the computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner where necessary, and then stored in a computer memory.
- portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
- multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented by any one of, or a combination of, the following techniques well known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
- each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
- the above-mentioned storage medium may be a read only memory, a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Business, Economics & Management (AREA)
- Tourism & Hospitality (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Economics (AREA)
- Computer Graphics (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
- Image Processing (AREA)
Claims (31)
- A control method for controlling a smart mirror, wherein the smart mirror includes a camera, the control method comprising: controlling the camera to photograph a current user; determining whether the current user is a registered user; controlling the current user to log in to the smart mirror when the current user is a registered user; and controlling the smart mirror to interact with the current user according to an input of the current user and outputting interaction information.
- The control method according to claim 1, wherein the smart mirror includes a registration library, the registration library including registration feature information of registered face regions of all registered users; the step of determining whether the current user is a registered user comprises: processing a first image of the current user captured by the camera to obtain a to-be-tested face region of the current user; processing the to-be-tested face region to obtain to-be-tested feature points of the to-be-tested face region; processing the to-be-tested feature points to extract feature information of the to-be-tested face region; comparing the to-be-tested feature information with the registration feature information to obtain a comparison result; and confirming that the current user is a registered user when the comparison result is greater than a predetermined threshold.
- The control method according to claim 2, wherein the control method comprises: controlling the camera to capture a second image of the current user; and controlling the smart mirror to display the second image.
- The control method according to claim 3, wherein the interaction includes performing cute-face (萌颜) processing on the second image, and the smart mirror includes a cute-face material library; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: processing the second image to obtain a cute-face face region of the current user; processing the cute-face face region to obtain cute-face feature points of the cute-face face region; determining a cute-face material according to the input of the current user; and performing matching and fusion processing on the cute-face material and the second image according to the cute-face feature points to obtain a cute-face image.
- The control method according to claim 3, wherein the interaction includes performing beauty processing on the second image; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: processing the second image to obtain a beauty face region of the current user; processing the beauty face region to obtain beauty feature points of the beauty face region; and performing beauty processing on the second image according to the input of the current user and the beauty feature points to obtain a beauty image.
- The control method according to claim 5, wherein the beauty processing includes one or more of a whitening filter, a ruddy filter, a face-thinning module, and an eye-enlarging module.
- The control method according to claim 3, wherein the interaction includes performing virtual makeup test processing on the second image, and the smart mirror includes a makeup material library; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: processing the second image to obtain a makeup test face region of the current user; processing the makeup test face region to obtain makeup test feature points of the makeup test face region; determining a makeup material according to the input of the current user; and performing matching and fusion processing on the makeup material and the second image according to the makeup test feature points to obtain a virtual makeup image.
- The control method according to claim 7, wherein the makeup material includes one or more of an eye shadow material, an eyeliner material, a blush material, a lip gloss material, and an eyebrow material.
- The control method according to claim 3, wherein the interaction includes performing 2D mask rendering processing on the second image, and the smart mirror includes a 2D mask material library; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: processing the second image to obtain a 2D mask rendering face region of the current user; processing the 2D mask rendering face region to obtain 2D mask rendering feature points of the 2D mask rendering face region; determining a 2D mask material according to the input of the current user; and performing matching and fusion processing on the 2D mask material and the second image according to the 2D mask rendering feature points to obtain a 2D mask rendering image.
- The control method according to claim 3, wherein the interaction includes performing 3D cartoon image rendering processing on the second image, and the smart mirror includes a 3D engine, a general 3D face model, and a 3D cartoon image material library; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: processing the second image to obtain a 3D cartoon image rendering face region of the current user; processing the 3D cartoon image rendering face region to obtain 3D cartoon image rendering feature points of the 3D cartoon image rendering face region; acquiring a first posture parameter of the current user according to the general 3D face model and the 3D cartoon image rendering feature points; determining a 3D cartoon image material according to the input of the current user; and controlling the 3D engine to perform 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon image material.
- The control method according to claim 3, wherein the interaction includes performing virtual glasses rendering processing on the second image, and the smart mirror includes a 3D engine, a general 3D face model, and a virtual glasses material library; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: processing the second image to obtain a virtual glasses rendering face region of the current user; processing the virtual glasses rendering face region to obtain virtual glasses rendering feature points of the virtual glasses rendering face region; acquiring a second posture parameter of the current user according to the general 3D face model and the virtual glasses rendering feature points; determining a virtual glasses material according to the input of the current user; and controlling the 3D engine to perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material.
- The control method according to claim 3, wherein the interaction includes performing virtual hairstyle rendering processing on the second image, and the smart mirror includes a 3D engine, a general 3D face model, and a virtual hairstyle material library; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: processing the second image to obtain a virtual hairstyle rendering face region of the current user; processing the virtual hairstyle rendering face region to obtain virtual hairstyle rendering feature points of the virtual hairstyle rendering face region; acquiring a third posture parameter of the current user according to the general 3D face model and the virtual hairstyle rendering feature points; determining a virtual hairstyle material according to the input of the current user; and controlling the 3D engine to perform virtual hairstyle rendering processing on the second image according to the third posture parameter and the virtual hairstyle material.
- The control method according to claim 1, wherein the interaction includes providing daily life care guidance to the current user; the step of controlling the smart mirror to interact with the current user according to the input of the current user and outputting the interaction information comprises: providing daily life care guidance for the current user according to the input of the user.
- The control method according to claim 1, wherein the control method further comprises: controlling the camera to photograph the registered user; and establishing a personal record file of the registered user according to an input of the registered user.
- A controller for controlling a smart mirror, wherein the smart mirror includes a camera, the controller comprising: a control device configured to control the camera to photograph a current user; a determining device configured to determine whether the current user is a registered user; a login device configured to control the current user to log in to the smart mirror when the current user is a registered user; and an interaction device configured to control the smart mirror to interact with the current user according to an input of the current user and output interaction information.
- The controller according to claim 15, wherein the smart mirror includes a registration library, the registration library including registration feature information of registered face regions of all registered users; the determining device includes: a first processing unit configured to process a first image of the current user captured by the camera to obtain a to-be-tested face region of the current user; a second processing unit configured to process the to-be-tested face region to obtain to-be-tested feature points of the to-be-tested face region; a third processing unit configured to process the to-be-tested feature points to extract feature information of the to-be-tested face region; a comparison unit configured to compare the to-be-tested feature information with the registration feature information to obtain a comparison result; and a first confirmation unit configured to confirm that the current user is a registered user when the comparison result is greater than a predetermined threshold.
- The controller according to claim 16, wherein the control device is further configured to: control the camera to capture a second image of the current user; and control the smart mirror to display the second image.
- The controller according to claim 17, wherein the interaction includes performing cute-face (萌颜) processing on the second image, and the smart mirror includes a cute-face material library; the first processing unit is further configured to process the second image to obtain a cute-face face region of the current user; the second processing unit is further configured to process the cute-face face region to obtain cute-face feature points of the cute-face face region; the interaction device includes: a second confirmation unit configured to determine a cute-face material according to the input of the current user; and a fourth processing unit configured to perform matching and fusion processing on the cute-face material and the second image according to the cute-face feature points to obtain a cute-face image.
- The controller according to claim 17, wherein the interaction includes performing beauty processing on the second image; the first processing unit is further configured to process the second image to obtain a beauty face region of the current user; the second processing unit is further configured to process the beauty face region to obtain beauty feature points of the beauty face region; the interaction device includes: a fourth processing unit configured to perform beauty processing on the second image according to the input of the current user and the beauty feature points to obtain a beauty image.
- The controller according to claim 19, wherein the beauty processing includes one or more of a whitening filter, a ruddy filter, a face-thinning module, and an eye-enlarging module.
- The controller according to claim 17, wherein the interaction includes performing virtual makeup test processing on the second image, and the smart mirror includes a makeup material library; the first processing unit is further configured to process the second image to obtain a makeup test face region of the current user; the second processing unit is further configured to process the makeup test face region to obtain makeup test feature points of the makeup test face region; the interaction device includes: a second confirmation unit configured to determine a makeup material according to the input of the current user; and a fourth processing unit configured to perform matching and fusion processing on the makeup material and the second image according to the makeup test feature points to obtain a virtual makeup image.
- The controller according to claim 21, wherein the makeup material includes one or more of an eye shadow material, an eyeliner material, a blush material, a lip gloss material, and an eyebrow material.
- The controller according to claim 17, wherein the interaction includes performing 2D mask rendering processing on the second image, and the smart mirror includes a 2D mask material library; the first processing unit is further configured to process the second image to obtain a 2D mask rendering face region of the current user; the second processing unit is further configured to process the 2D mask rendering face region to obtain 2D mask rendering feature points of the 2D mask rendering face region; the interaction device includes: a second confirmation unit configured to determine a 2D mask material according to the input of the current user; and a fourth processing unit configured to perform matching and fusion processing on the 2D mask material and the second image according to the 2D mask rendering feature points to obtain a 2D mask rendering image.
- The controller according to claim 17, wherein the interaction includes performing 3D cartoon image rendering processing on the second image, and the smart mirror includes a 3D engine, a general 3D face model, and a 3D cartoon image material library; the first processing unit is further configured to process the second image to obtain a 3D cartoon image rendering face region of the current user; the second processing unit is further configured to process the 3D cartoon image rendering face region to obtain 3D cartoon image rendering feature points of the 3D cartoon image rendering face region; the interaction device includes: an acquisition unit configured to acquire a first posture parameter of the current user according to the general 3D face model and the 3D cartoon image rendering feature points; a second confirmation unit configured to determine a 3D cartoon image material according to the input of the current user; and a fourth processing unit configured to control the 3D engine to perform 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon image material.
- The controller according to claim 17, wherein the interaction includes performing virtual glasses rendering processing on the second image, and the smart mirror includes a 3D engine, a general 3D face model, and a virtual glasses material library; the first processing unit is further configured to process the second image to obtain a virtual glasses rendering face region of the current user; the second processing unit is further configured to process the virtual glasses rendering face region to obtain virtual glasses rendering feature points of the virtual glasses rendering face region; the interaction device includes: an acquisition unit configured to acquire a second posture parameter of the current user according to the general 3D face model and the virtual glasses rendering feature points; a second confirmation unit configured to determine a virtual glasses material according to the input of the current user; and a fourth processing unit configured to control the 3D engine to perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material.
- The controller according to claim 17, wherein the interaction includes performing virtual hairstyle rendering processing on the second image, and the smart mirror includes a 3D engine, a general 3D face model, and a virtual hairstyle material library; the first processing unit is further configured to process the second image to obtain a virtual hairstyle rendering face region of the current user; the second processing unit is further configured to process the virtual hairstyle rendering face region to obtain virtual hairstyle rendering feature points of the virtual hairstyle rendering face region; the interaction device includes: an acquisition unit configured to acquire a third posture parameter of the current user according to the general 3D face model and the virtual hairstyle rendering feature points; a second confirmation unit configured to determine a virtual hairstyle material according to the input of the current user; and a fourth processing unit configured to control the 3D engine to perform virtual hairstyle rendering processing on the second image according to the third posture parameter and the virtual hairstyle material.
- The controller according to claim 15, wherein the interaction includes providing daily life care guidance to the current user, and the interaction device includes: a guidance unit configured to provide daily life care guidance for the current user according to the input of the user.
- The controller according to claim 15, wherein the control device is further configured to control the camera to photograph the registered user; the controller further includes an establishing device configured to establish a personal record file of the registered user according to an input of the registered user.
- A smart mirror, comprising: a camera; and the controller according to any one of claims 15 to 28, the controller being electrically connected to the camera.
- A smart mirror, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for executing the control method according to any one of claims 1 to 14.
- A computer readable storage medium, comprising a computer program for use in conjunction with an electronic device capable of displaying a picture, the computer program being executable by a processor to perform the control method according to any one of claims 1 to 14.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018566586A JP2019537758A (ja) | 2017-06-12 | 2017-06-12 | 制御方法、コントローラ、スマートミラー及びコンピュータ読み取り可能な記憶媒体 |
KR1020197003099A KR20190022856A (ko) | 2017-06-12 | 2017-06-12 | 제어 방법, 제어기, 스마트 거울 및 컴퓨터 판독가능 저장매체 |
CN201780001849.2A CN107820591A (zh) | 2017-06-12 | 2017-06-12 | 控制方法、控制器、智能镜子和计算机可读存储介质 |
EP17913946.4A EP3462284A4 (en) | 2017-06-12 | 2017-06-12 | CONTROL METHOD, CONTROL DEVICE, INTELLIGENT MIRROR AND COMPUTER READABLE STORAGE MEDIUM |
PCT/CN2017/087979 WO2018227349A1 (zh) | 2017-06-12 | 2017-06-12 | 控制方法、控制器、智能镜子和计算机可读存储介质 |
US16/234,174 US20190130652A1 (en) | 2017-06-12 | 2018-12-27 | Control method, controller, smart mirror, and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/087979 WO2018227349A1 (zh) | 2017-06-12 | 2017-06-12 | 控制方法、控制器、智能镜子和计算机可读存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/234,174 Continuation US20190130652A1 (en) | 2017-06-12 | 2018-12-27 | Control method, controller, smart mirror, and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018227349A1 (zh) | 2018-12-20 |
Family ID=61606897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/087979 WO2018227349A1 (zh) | 2017-06-12 | 2017-06-12 | 控制方法、控制器、智能镜子和计算机可读存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20190130652A1 (zh) |
EP (1) | EP3462284A4 (zh) |
JP (1) | JP2019537758A (zh) |
KR (1) | KR20190022856A (zh) |
CN (1) | CN107820591A (zh) |
WO (1) | WO2018227349A1 (zh) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108305317B (zh) * | 2017-08-04 | 2020-03-17 | 腾讯科技(深圳)有限公司 | 一种图像处理方法、装置及存储介质 |
KR101972331B1 (ko) * | 2017-08-29 | 2019-04-25 | 키튼플래닛 주식회사 | 영상 얼라인먼트 방법 및 그 장치 |
CN108937407A (zh) * | 2018-05-25 | 2018-12-07 | 深圳市赛亿科技开发有限公司 | 一种智能镜子化妆指导方法及系统 |
CN109034063A (zh) * | 2018-07-27 | 2018-12-18 | 北京微播视界科技有限公司 | 人脸特效的多人脸跟踪方法、装置和电子设备 |
CN109597480A (zh) * | 2018-11-06 | 2019-04-09 | 北京奇虎科技有限公司 | 人机交互方法、装置、电子设备及计算机可读存储介质 |
CN109671142B (zh) * | 2018-11-23 | 2023-08-04 | 南京图玩智能科技有限公司 | 一种智能美妆方法及智能美妆镜 |
CN109543646A (zh) * | 2018-11-30 | 2019-03-29 | 深圳市脸萌科技有限公司 | 人脸图像处理方法、装置、电子设备及计算机存储介质 |
CN109875227B (zh) * | 2019-01-22 | 2024-06-04 | 杭州小肤科技有限公司 | 多功能安全智能梳妆镜 |
US20200342987A1 (en) * | 2019-04-26 | 2020-10-29 | doc.ai, Inc. | System and Method for Information Exchange With a Mirror |
CN110941333A (zh) * | 2019-11-12 | 2020-03-31 | 北京字节跳动网络技术有限公司 | 基于眼部动作的交互方法、装置、介质和电子设备 |
FI20207082A (fi) * | 2020-05-11 | 2021-11-12 | Dentview Oy | Laite suuhygienian neuvontaan |
CN111768479B (zh) * | 2020-07-29 | 2021-05-28 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、计算机设备以及存储介质 |
CN111882673A (zh) * | 2020-07-29 | 2020-11-03 | 北京小米移动软件有限公司 | 一种显示控制方法、装置、包含显示屏的镜子及存储介质 |
CN114187649A (zh) * | 2020-08-24 | 2022-03-15 | 华为技术有限公司 | 护肤辅助方法、设备及存储介质 |
KR20220028529A (ko) * | 2020-08-28 | 2022-03-08 | 엘지전자 주식회사 | 스마트 미러 장치 및 방법과, 이의 시스템 |
CN112099712B (zh) * | 2020-09-17 | 2022-06-07 | 北京字节跳动网络技术有限公司 | 人脸图像显示方法、装置、电子设备及存储介质 |
JP7414707B2 (ja) * | 2020-12-18 | 2024-01-16 | トヨタ自動車株式会社 | 画像表示システム |
US11430281B1 (en) * | 2021-04-05 | 2022-08-30 | International Business Machines Corporation | Detecting contamination propagation |
CN113240799B (zh) * | 2021-05-31 | 2022-12-23 | 上海速诚义齿有限公司 | 一种基于医疗大数据的牙齿三维模型构建系统 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105095917A (zh) * | 2015-08-31 | 2015-11-25 | 小米科技有限责任公司 | 图像处理方法、装置及终端 |
CN105426730A (zh) * | 2015-12-28 | 2016-03-23 | 小米科技有限责任公司 | 登录验证处理方法、装置及终端设备 |
CN105956576A (zh) * | 2016-05-18 | 2016-09-21 | 广东欧珀移动通信有限公司 | 一种图像美颜方法、装置及移动终端 |
CN106161962A (zh) * | 2016-08-29 | 2016-11-23 | 广东欧珀移动通信有限公司 | 一种图像处理方法及终端 |
US20160357578A1 (en) * | 2015-06-03 | 2016-12-08 | Samsung Electronics Co., Ltd. | Method and device for providing makeup mirror |
CN106412458A (zh) * | 2015-07-31 | 2017-02-15 | 中兴通讯股份有限公司 | 一种图像处理方法和装置 |
CN107340856A (zh) * | 2017-06-12 | 2017-11-10 | 美的集团股份有限公司 | 控制方法、控制器、智能镜子和计算机可读存储介质 |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7016824B2 (en) * | 2001-02-06 | 2006-03-21 | Geometrix, Inc. | Interactive try-on platform for eyeglasses |
JP2004234571A (ja) * | 2003-01-31 | 2004-08-19 | Sony Corp | 画像処理装置、画像処理方法及び撮影装置 |
JP4645411B2 (ja) * | 2005-10-28 | 2011-03-09 | コニカミノルタホールディングス株式会社 | 認証システム、登録システム及びプログラム |
JP2009064423A (ja) * | 2007-08-10 | 2009-03-26 | Shiseido Co Ltd | メイクアップシミュレーションシステム、メイクアップシミュレーション装置、メイクアップシミュレーション方法およびメイクアップシミュレーションプログラム |
US20090231356A1 (en) * | 2008-03-17 | 2009-09-17 | Photometria, Inc. | Graphical user interface for selection of options from option groups and methods relating to same |
US10872535B2 (en) * | 2009-07-24 | 2020-12-22 | Tutor Group Limited | Facilitating facial recognition, augmented reality, and virtual reality in online teaching groups |
US20110304629A1 (en) * | 2010-06-09 | 2011-12-15 | Microsoft Corporation | Real-time animation of facial expressions |
US9330483B2 (en) * | 2011-04-11 | 2016-05-03 | Intel Corporation | Avatar facial expression techniques |
US20130145272A1 (en) * | 2011-11-18 | 2013-06-06 | The New York Times Company | System and method for providing an interactive data-bearing mirror interface |
CN109288333B (zh) * | 2012-12-18 | 2021-11-30 | 艾斯适配有限公司 | 捕获和显示外观的装置、系统和方法 |
JP6389888B2 (ja) * | 2013-08-04 | 2018-09-12 | アイズマッチ エルティーディー.EyesMatch Ltd. | 鏡における仮想化の装置、システム、及び方法 |
CN108537628B (zh) * | 2013-08-22 | 2022-02-01 | 贝斯普客公司 | 用于创造定制产品的方法和系统 |
CN104598445B (zh) * | 2013-11-01 | 2019-05-10 | 腾讯科技(深圳)有限公司 | 自动问答系统和方法 |
CN105744854B (zh) * | 2013-11-06 | 2020-07-03 | 皇家飞利浦有限公司 | 用于在剃刮过程中引导用户的系统和方法 |
JP2015111372A (ja) * | 2013-12-06 | 2015-06-18 | 株式会社日立システムズ | ヘアスタイル決定支援システムとヘアスタイル決定支援装置 |
JP6375755B2 (ja) * | 2014-07-10 | 2018-08-22 | フリュー株式会社 | 写真シール作成装置および表示方法 |
US9240077B1 (en) * | 2014-03-19 | 2016-01-19 | A9.Com, Inc. | Real-time visual effects for a live camera view |
JP6320143B2 (ja) * | 2014-04-15 | 2018-05-09 | 株式会社東芝 | 健康情報サービスシステム |
US9760935B2 (en) * | 2014-05-20 | 2017-09-12 | Modiface Inc. | Method, system and computer program product for generating recommendations for products and treatments |
US9881303B2 (en) * | 2014-06-05 | 2018-01-30 | Paypal, Inc. | Systems and methods for implementing automatic payer authentication |
EP3198561A4 (en) * | 2014-09-24 | 2018-04-18 | Intel Corporation | Facial gesture driven animation communication system |
CN105512599A (zh) * | 2014-09-26 | 2016-04-20 | 数伦计算机技术(上海)有限公司 | 人脸识别方法及人脸识别系统 |
CN104223858B (zh) * | 2014-09-28 | 2016-04-13 | 广州视睿电子科技有限公司 | 一种自识别智能镜子 |
CN104834849B (zh) * | 2015-04-14 | 2018-09-18 | 北京远鉴科技有限公司 | 基于声纹识别和人脸识别的双因素身份认证方法及系统 |
KR101613038B1 (ko) * | 2015-06-01 | 2016-04-15 | 김형민 | 맞춤형 광고를 표출하는 스마트 미러 시스템 |
US11741639B2 (en) * | 2016-03-02 | 2023-08-29 | Holition Limited | Locating and augmenting object features in images |
US20180137663A1 (en) * | 2016-11-11 | 2018-05-17 | Joshua Rodriguez | System and method of augmenting images of a user |
CN106682578B (zh) * | 2016-11-21 | 2020-05-05 | 北京交通大学 | 基于眨眼检测的弱光人脸识别方法 |
CN106773852A (zh) * | 2016-12-19 | 2017-05-31 | 北京小米移动软件有限公司 | 智能镜子及其工作控制方法、装置 |
2017
- 2017-06-12 KR KR1020197003099A patent/KR20190022856A/ko not_active Application Discontinuation
- 2017-06-12 CN CN201780001849.2A patent/CN107820591A/zh active Pending
- 2017-06-12 JP JP2018566586A patent/JP2019537758A/ja active Pending
- 2017-06-12 WO PCT/CN2017/087979 patent/WO2018227349A1/zh unknown
- 2017-06-12 EP EP17913946.4A patent/EP3462284A4/en not_active Withdrawn

2018
- 2018-12-27 US US16/234,174 patent/US20190130652A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See also references of EP3462284A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3462284A1 (en) | 2019-04-03 |
KR20190022856A (ko) | 2019-03-06 |
CN107820591A (zh) | 2018-03-20 |
JP2019537758A (ja) | 2019-12-26 |
EP3462284A4 (en) | 2019-07-17 |
US20190130652A1 (en) | 2019-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018227349A1 (zh) | 控制方法、控制器、智能镜子和计算机可读存储介质 | |
KR102241153B1 (ko) | 2차원 이미지로부터 3차원 아바타를 생성하는 방법, 장치 및 시스템 | |
US9734628B2 (en) | Techniques for processing reconstructed three-dimensional image data | |
WO2021147920A1 (zh) | 一种妆容处理方法、装置、电子设备及存储介质 | |
US20160134840A1 (en) | Avatar-Mediated Telepresence Systems with Enhanced Filtering | |
US20110304629A1 (en) | Real-time animation of facial expressions | |
US9202312B1 (en) | Hair simulation method | |
JP2005038375A (ja) | 目の形態分類方法及び形態分類マップ並びに目の化粧方法 | |
CN111968248A (zh) | 基于虚拟形象的智能化妆方法、装置、电子设备及存储介质 | |
CN108932654A (zh) | 一种虚拟试妆指导方法及装置 | |
CN105069180A (zh) | 一种发型设计方法及系统 | |
CN110866139A (zh) | 一种化妆处理方法、装置及设备 | |
CN116744820A (zh) | 数字彩妆师 | |
WO2022257766A1 (zh) | 图像处理方法、装置、设备及介质 | |
KR101719927B1 (ko) | 립 모션을 이용한 실시간 메이크업 미러 시뮬레이션 장치 | |
Danieau et al. | Automatic generation and stylization of 3d facial rigs | |
WO2018094506A1 (en) | Semi-permanent makeup system and method | |
KR20230118191A (ko) | 디지털 메이크업 아티스트 | |
WO2021155666A1 (zh) | 用于生成图像的方法和装置 | |
CN109876457A (zh) | 游戏角色生成方法、装置及存储介质 | |
US20230101374A1 (en) | Augmented reality cosmetic design filters | |
US11908098B1 (en) | Aligning user representations | |
US20240221292A1 (en) | Light normalization in combined 3d user representations | |
Rivera et al. | Development of an automatic expression recognition system based on facial action coding system | |
Wood | Gaze Estimation with Graphics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 2018566586; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 2017913946; Country of ref document: EP; Effective date: 20181227 |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17913946; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 20197003099; Country of ref document: KR; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |