WO2018227349A1 - Control method, controller, smart mirror and computer readable storage medium - Google Patents

Control method, controller, smart mirror and computer readable storage medium

Info

Publication number
WO2018227349A1
WO2018227349A1 · PCT/CN2017/087979 · CN2017087979W
Authority
WO
WIPO (PCT)
Prior art keywords
current user
image
rendering
smart mirror
virtual
Prior art date
Application number
PCT/CN2017/087979
Other languages
English (en)
French (fr)
Inventor
俞大海
全永兵
李建平
周均扬
宋剑锋
Original Assignee
美的集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 美的集团股份有限公司
Priority to JP2018566586A, published as JP2019537758A
Priority to KR1020197003099A, published as KR20190022856A
Priority to CN201780001849.2A, published as CN107820591A
Priority to EP17913946.4A, published as EP3462284A4
Priority to PCT/CN2017/087979, published as WO2018227349A1
Publication of WO2018227349A1
Priority to US16/234,174, published as US20190130652A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47GHOUSEHOLD OR TABLE EQUIPMENT
    • A47G1/00Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
    • A47G1/02Mirrors used as equipment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/164Detection; Localisation; Normalisation using holistic features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to the field of smart mirrors, and more particularly to a control method, a controller, a smart mirror, and a computer readable storage medium.
  • the function of the current smart mirror is mainly information display, such as displaying weather, text messages and other information.
  • the current use of smart mirrors is more limited and the user experience is not good.
  • the present invention aims to solve at least one of the technical problems existing in the prior art. To this end, the present invention provides a control method, controller, smart mirror, and computer readable storage medium.
  • the control method of an embodiment of the present invention is for controlling a smart mirror.
  • the smart mirror includes a camera, and the control method includes: controlling the camera to capture a current user; determining whether the current user is a registered user; when the current user is a registered user, controlling the current user to log in to the smart mirror; and controlling, according to input from the current user, the smart mirror to interact with the current user and output interaction information.
  • a controller of an embodiment of the invention is used to control a smart mirror.
  • the smart mirror includes a camera.
  • the controller includes a control device, a determination device, a login device, and an interaction device.
  • the control device is configured to control the camera to capture a current user;
  • the determining device is configured to determine whether the current user is a registered user;
  • the login device is configured to control the current user to log in to the smart mirror when the current user is a registered user;
  • the interaction device is configured to control the smart mirror to interact with the current user according to the input of the current user and output interaction information.
  • the smart mirror of an embodiment of the present invention includes a camera and the above-described controller, and the controller is electrically connected to the camera.
  • a smart mirror of an embodiment of the invention includes one or more processors, a memory, and one or more programs.
  • the one or more programs are stored in the memory and configured to be executed by the one or more processors, the program including instructions for executing the control method described above.
  • a computer readable storage medium in accordance with an embodiment of the present invention includes a computer program for use in conjunction with an electronic device capable of displaying a picture, the computer program being executable by a processor to perform the control method described above.
  • the control method, the controller, the smart mirror and the computer readable storage medium of the embodiments of the present invention can provide the user with various interactive functions, including beauty makeup, cartoon image rendering and the like, after the user logs in.
  • the functions of the smart mirror are thus further enriched, meeting the needs of the user's smart life and enhancing the user experience.
  • FIG. 1 is a flow chart of a control method of some embodiments of the present invention.
  • FIG. 2 is a block diagram of a smart mirror in accordance with some embodiments of the present invention.
  • FIG. 3 is a schematic structural view of a smart mirror according to some embodiments of the present invention.
  • FIG. 4 is a flow chart of a control method of some embodiments of the present invention.
  • Figure 5 is a block diagram of a determination device of some embodiments of the present invention.
  • FIG. 6 is a flow chart of a control method of some embodiments of the present invention.
  • FIG. 7 is a block diagram of a controller of some embodiments of the present invention.
  • FIG. 8 is a flow chart of a control method of some embodiments of the present invention.
  • FIG. 9 is a flow chart of a control method of some embodiments of the present invention.
  • FIG. 10 is a block diagram of an interactive device in accordance with some embodiments of the present invention.
  • FIG. 11 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 12 is a flow chart of a control method of some embodiments of the present invention.
  • Figure 13 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 14 is a flow chart of a control method of some embodiments of the present invention.
  • Figure 15 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 16 is a flow chart of a control method of some embodiments of the present invention.
  • Figure 17 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 18 is a flow chart of a control method of some embodiments of the present invention.
  • FIG. 19 is a block diagram of an interactive device in accordance with some embodiments of the present invention.
  • FIG. 20 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 21 is a flow chart of a control method of some embodiments of the present invention.
  • Figure 22 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 23 is a flow chart of a control method of some embodiments of the present invention.
  • Figure 24 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 25 is a flow chart of a control method of some embodiments of the present invention.
  • 26 is a block diagram of an interactive device in accordance with some embodiments of the present invention.
  • Figure 27 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 28 is a block diagram of a smart mirror in accordance with some embodiments of the present invention.
  • a control method of an embodiment of the present invention is used to control the smart mirror 100.
  • the smart mirror 100 includes a camera 20.
  • the control method includes the following steps:
  • S12 Control the camera 20 to capture the current user;
  • S14 Determine whether the current user is a registered user;
  • S16 When the current user is a registered user, control the current user to log in to the smart mirror 100; and
  • S18 Control the smart mirror 100 to interact with the current user according to the input of the current user and output the interaction information.
  • the control method of the embodiment of the present invention may be implemented by the controller 10 of the embodiment of the present invention.
  • the controller 10 of the embodiment of the present invention includes a control device 12, a judging device 14, a login device 16, and an interaction device 18.
  • step S12 can be implemented by the control device 12, step S14 by the determining device 14, step S16 by the login device 16, and step S18 by the interaction device 18.
  • control device 12 is configured to control the camera 20 to capture the current user; the determining device 14 is configured to determine whether the current user is a registered user; and the login device 16 is configured to control the current user to log in to the smart mirror 100 when the current user is a registered user.
  • the interaction device 18 is configured to control the smart mirror 100 to interact with the current user and output the interaction information according to the input of the current user.
  • the controller 10 of the embodiment of the present invention is applied to the smart mirror 100 of the embodiment of the present invention. That is, the smart mirror 100 of the embodiment of the present invention includes the controller 10 of the embodiment of the present invention.
  • the smart mirror 100 of an embodiment of the present invention further includes a camera 20. The camera 20 and the controller 10 are electrically connected.
  • the control method of the embodiment of the present invention can provide the user with various entertainment, interaction and guidance functions, such as beauty makeup and cartoon image rendering, after the user successfully logs in.
  • the current user must successfully log in to the smart mirror 100 before using its various entertainment, interaction and guidance functions. That is to say, the entertainment, interaction and guidance functions of the smart mirror 100 can be used if and only if the current user is a registered user. In this way, the personal data and privacy of registered users are protected, and the information security of the smart mirror 100 is improved.
  • each registered user can set a different style of use of the smart mirror 100.
  • the smart mirror 100 displays the usage style corresponding to the current registered user, further enhancing the user experience.
  • the smart mirror 100 displays the interactive interface.
  • the interaction with the smart mirror 100 can be achieved by the current user clicking on the content in the interface.
  • control method, the controller 10 and the smart mirror 100 of the embodiments of the present invention can provide various interactive functions for the user after the user successfully logs in.
  • the functions of the smart mirror 100 are thus further enriched, meeting the needs of the user's smart life and enhancing the user experience.
  • the smart mirror 100 includes a registration library.
  • the registration library includes registration feature information of the registered face area of all registered users.
  • Step S14: determining whether the current user is a registered user includes:
  • S141 Processing the first image of the current user captured by the camera 20 to obtain the face area to be tested of the current user;
  • S142 Processing the face area to be tested to obtain the feature points to be tested of the face area to be tested;
  • S143 Processing the feature points to be tested to extract the feature information of the face area to be tested;
  • S144 Comparing the feature information to be tested with the registration feature information to obtain a comparison result; and
  • S145 Confirming that the current user is a registered user when the comparison result is greater than a predetermined threshold.
  • the determining device 14 includes a first processing unit 141, a second processing unit 142, a third processing unit 143, a comparison unit 144, and a first confirmation unit 145.
  • step S141 can be implemented by the first processing unit 141, step S142 by the second processing unit 142, step S143 by the third processing unit 143, step S144 by the comparison unit 144, and step S145 by the first confirmation unit 145.
  • the first processing unit 141 is configured to process the first image of the current user captured by the camera 20 to acquire the face area to be tested of the current user; the second processing unit 142 is configured to process the face area to be tested to obtain its feature points to be tested; the third processing unit 143 is configured to process the feature points to be tested to extract the feature information of the face area to be tested; the comparison unit 144 is configured to compare the feature information to be tested with the registration feature information to obtain a comparison result; and the first confirmation unit 145 is configured to confirm that the current user is a registered user when the comparison result is greater than a predetermined threshold.
  • the feature points to be tested include the eyes, nose, mouth and facial contour of the face region to be tested.
  • the registration feature information and the feature information to be tested include feature information of the registered user's or the current user's face, such as the relative positions and distances of the eyes, nose and mouth, and their positions and sizes. The feature information to be tested of the current user is compared with the registration feature information of the registered users; when the comparison result is greater than the predetermined threshold, the current user's face matches a registered user's face to a high degree, so the current user can be judged to be a registered user. After the current user is confirmed to be a registered user, the current user successfully logs in to the smart mirror 100.
  • the smart mirror 100 provides rich usage functions only for registered users, and ensures the information security of registered users.
  • the registered user can set the usage style of the smart mirror 100, such as the color of the display interface, the background pattern, and the like. In this way, after the current user successfully logs in to the smart mirror 100, the smart mirror 100 can display the usage style that the current user likes, further enhancing the user experience.
  • in the embodiments of the present invention, the login verification of the current user is performed by face recognition.
  • in other embodiments, the current user's login verification can also be performed by voice recognition, fingerprint recognition, iris recognition, and the like.
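To make the threshold comparison of steps S141-S145 concrete, the following is a minimal sketch in Python. The patent gives no code, so the detector and feature extractor are hypothetical stand-ins, cosine similarity stands in for the unspecified comparison, and the 0.8 threshold is an assumed value:

```python
import numpy as np

# Hypothetical helpers: a face detector and an embedding extractor would
# back these in a real system; they are not part of the patent text.
def extract_face_region(image: np.ndarray) -> np.ndarray: ...
def extract_feature_info(face_region: np.ndarray) -> np.ndarray: ...

PREDETERMINED_THRESHOLD = 0.8  # assumed; the patent only says "predetermined"

def is_registered_user(first_image: np.ndarray,
                       registration_library: dict[str, np.ndarray]) -> str | None:
    """Return the matching registered user id, or None (steps S141-S145)."""
    face = extract_face_region(first_image)        # S141: face region to be tested
    feats = extract_feature_info(face)             # S142/S143: feature information
    for user_id, reg_feats in registration_library.items():
        # S144: cosine similarity as the comparison result
        score = float(np.dot(feats, reg_feats) /
                      (np.linalg.norm(feats) * np.linalg.norm(reg_feats)))
        if score > PREDETERMINED_THRESHOLD:        # S145: confirm registered user
            return user_id
    return None
```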
  • control method of the embodiment of the present invention further includes:
  • S111 Control the camera 20 to capture the registered user; and
  • S112 Establish a personal record file of the registered user according to the input of the registered user.
  • the controller 10 further includes an establishing device 11.
  • step S111 can be implemented by the control device 12, and step S112 can be implemented by the establishing device 11.
  • the control device 12 is further configured to control the camera 20 to capture the registered user; the establishing device 11 is configured to establish the personal record file of the registered user according to the input of the registered user.
  • after the camera 20 captures the registered user, the smart mirror 100 processes the captured image of the registered user to acquire the registered user's registered feature points, and stores them in the registration library for subsequent recognition, matching and login.
  • Registered users can make edit inputs on the smart mirror 100 to create their own personal record files.
  • the personal record file includes the nickname, avatar, and personal signature of the registered user. Registered users can also create their own cartoon characters and store them in personal record files.
  • the smart mirror 100 displays all the information in the current user's personal record file or displays part of the information in the current user's personal record file.
  • the current user may select to save the output interactive information.
  • the saved interactive information is also stored in the personal record file. Users can view their saved interactive information and/or historical interactive content through personal record files. In this way, the user experience can be further improved.
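As an illustration only, a personal record file of the kind described above might be modeled as a small data structure; all field names below are hypothetical, chosen to mirror the nickname, avatar, signature, cartoon image, and saved interaction information mentioned in the text:

```python
from dataclasses import dataclass, field

@dataclass
class PersonalRecordFile:
    nickname: str
    avatar_path: str
    signature: str = ""
    cartoon_image_path: str | None = None               # user-created cartoon image
    saved_interactions: list[str] = field(default_factory=list)  # saved output

profile = PersonalRecordFile(nickname="mirror_user", avatar_path="avatar.png")
profile.saved_interactions.append("virtual_makeup_001.png")  # saved after a session
```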
  • control method of the embodiment of the present invention includes:
  • S171 Control the camera 20 to capture a second image of the current user; and
  • S172 Control the smart mirror 100 to display the second image.
  • step S171 and step S172 can be implemented by control device 12.
  • that is, the control device 12 is further configured to control the camera 20 to capture the second image of the current user and to control the smart mirror 100 to display the second image.
  • the first image captured by the camera 20 is used for login verification of face recognition.
  • the second image taken by the camera 20 is used for the interaction of the current user with the smart mirror 100.
  • the interacting includes performing cute-face ("meng yan") processing on the second image.
  • the smart mirror includes a cute-face material library.
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S1811 Processing the second image to obtain the cute-face face region of the current user;
  • S1812 Processing the cute-face face region to obtain the cute-face feature points of the face region;
  • S1813 Determining the cute-face material according to the input of the current user; and
  • S1814 Performing matching fusion processing on the cute-face material and the second image according to the feature points to obtain a cute-face image.
  • the interaction device 18 includes a second confirmation unit 181 and a fourth processing unit 182.
  • step S1811 may be implemented by the first processing unit 141, step S1812 by the second processing unit 142, step S1813 by the second confirmation unit 181, and step S1814 by the fourth processing unit 182.
  • the first processing unit 141 is further configured to process the second image to obtain the cute-face face region of the current user; the second processing unit 142 is further configured to process the cute-face face region to obtain its cute-face feature points; the second confirmation unit 181 is configured to determine the cute-face material according to the current user's input; and the fourth processing unit 182 is configured to perform matching fusion processing on the cute-face material and the second image according to the cute-face feature points to obtain the cute-face image.
  • the cute-face feature points include the eyes, nose, mouth, ears, hair and the like.
  • the cute-face processing superimposes decorative effects on the current user's face, such as superimposing a cute expression on the face, a hairpin on the hair, animal ears on the head, an animal nose on the nose, and animal whiskers on the cheeks.
  • the cute-face material can be selected by the user.
  • the smart mirror 100 displays the cute-face image dynamically, frame by frame, forming an amusing animation effect.
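A hedged sketch of the matching fusion of step S1814: alpha-blending a cute-face decoration (say, animal ears) onto the frame at a detected feature point. The function name is illustrative and the sticker is assumed to lie fully inside the frame:

```python
import numpy as np

def overlay_sticker(frame: np.ndarray, sticker_rgba: np.ndarray,
                    center: tuple[int, int]) -> np.ndarray:
    """Alpha-blend an RGBA decoration onto the frame, centred on a feature
    point (e.g. the top of the head for animal ears)."""
    h, w = sticker_rgba.shape[:2]
    x0, y0 = center[0] - w // 2, center[1] - h // 2   # assumes sticker fits in frame
    roi = frame[y0:y0 + h, x0:x0 + w]
    alpha = sticker_rgba[:, :, 3:4].astype(np.float32) / 255.0
    roi[:] = (alpha * sticker_rgba[:, :, :3] + (1.0 - alpha) * roi).astype(np.uint8)
    return frame
```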
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S1821 Processing the second image to obtain the beauty face region of the current user;
  • S1822 Processing the beauty face region to obtain the beauty feature points of the face region; and
  • S1823 Performing a beauty treatment on the second image according to the current user's input and the beauty feature points to obtain a beauty image.
  • the interaction device 18 includes a fourth processing unit 182.
  • step S1821 may be implemented by the first processing unit 141, step S1822 by the second processing unit 142, and step S1823 by the fourth processing unit 182.
  • the first processing unit 141 is further configured to process the second image to obtain the beauty face region of the current user; the second processing unit 142 is further configured to process the beauty face region to obtain its beauty feature points; and the fourth processing unit 182 is configured to perform the beauty treatment on the second image according to the current user's input and the beauty feature points to obtain the beauty image.
  • the beauty treatment includes one or more of a whitening filter, a rosy filter, a face-slimming module and an eye-enlarging module. The beauty feature points include the face contour, the eyes, and the like.
  • the user can apply the beauty treatment to the second image by clicking the corresponding operation option. For example, as shown in FIG. 13, after the user clicks the whitening filter, the fourth processing unit 182 performs whitening processing on the beauty face region in the second image. In this way, the user can select a beauty function, the processed beauty image is displayed on the smart mirror 100, and the user can view his or her beautified image in the mirror, enhancing the viewing experience.
  • the smart mirror 100 displays the beauty image dynamically, frame by frame. That is to say, the camera 20 captures the current user in real time to obtain the current feature points, and the beauty treatment is performed on the images as they are obtained. In this way, even if the current user is in motion, for example turning the head by some angle, the smart mirror 100 displays the beautified image of the current user in real time.
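The whitening filter could, for instance, be approximated by brightening only the detected beauty face region, as in this sketch (the strength value and mask convention are assumptions, not the patent's method):

```python
import cv2
import numpy as np

def whitening_filter(frame_bgr: np.ndarray, face_mask: np.ndarray,
                     strength: float = 0.25) -> np.ndarray:
    """Brighten only the masked face region; face_mask is non-zero where the
    beauty face region was detected."""
    brightened = cv2.convertScaleAbs(frame_bgr, alpha=1.0 + strength, beta=10)
    out = frame_bgr.copy()
    region = face_mask.astype(bool)
    out[region] = brightened[region]          # leave the background untouched
    return out
```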
  • the interacting includes performing a virtual makeup test on the second image.
  • Smart mirrors include a library of makeup materials.
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S1831 Processing the second image to obtain the makeup face region of the current user;
  • S1832 Processing the makeup face region to obtain the makeup feature points of the face region;
  • S1833 Determining the makeup material according to the input of the current user; and
  • S1834 Performing matching fusion processing on the makeup material and the second image according to the makeup feature points to obtain a virtual makeup image.
  • the interaction device 18 includes a second confirmation unit 181 and a fourth processing unit 182.
  • step S1831 may be implemented by the first processing unit 141, step S1832 by the second processing unit 142, step S1833 by the second confirmation unit 181, and step S1834 by the fourth processing unit 182.
  • the first processing unit 141 is further configured to process the second image to obtain the makeup face region of the current user; the second processing unit 142 is further configured to process the makeup face region to obtain its makeup feature points; the second confirmation unit 181 is configured to determine the makeup material according to the input of the current user; and the fourth processing unit 182 is configured to perform matching fusion processing on the makeup material and the second image according to the makeup feature points to obtain the virtual makeup image.
  • Make-up materials include one or more of eye shadow material, eyeliner material, blush material, lip gloss material and eyebrow material.
  • the makeup feature points include the eyes, nose, eyebrows, cheeks, and the like.
  • the fourth processing unit 182 performs matching and fusion processing on the makeup material selected by the current user and the second image according to the determined makeup feature point.
  • the smart mirror 100 displays a virtual makeup image that matches the blending process. For example, as shown in FIG. 15, after the current user clicks on the eye shadow material and the lip gloss material, the smart mirror 100 displays the processed virtual makeup image.
  • the current user can refer to the virtual makeup image displayed in the smart mirror 100 to determine the makeup that he or she likes when making makeup. In this way, the interaction between the user and the smart mirror 100 is enhanced, and the user experience is improved.
  • the smart mirror 100 displays the virtual makeup image frame by frame in the form of a dynamic frame. Even if the current user is in motion, the smart mirror 100 can continue to display the virtual makeup image after the virtual makeup treatment.
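A lip-gloss step of the virtual makeup test might be sketched as filling the lip polygon given by the makeup feature points with a colour and alpha-blending it back; the colour and opacity below are illustrative defaults, not values from the patent:

```python
import cv2
import numpy as np

def apply_lip_gloss(frame_bgr: np.ndarray, lip_points: np.ndarray,
                    color_bgr: tuple = (80, 40, 200),
                    opacity: float = 0.4) -> np.ndarray:
    """Fill the lip polygon (from the makeup feature points) with a gloss
    colour and alpha-blend it back into the frame."""
    overlay = frame_bgr.copy()
    cv2.fillPoly(overlay, [lip_points.astype(np.int32)], color_bgr)
    return cv2.addWeighted(overlay, opacity, frame_bgr, 1.0 - opacity, 0)
```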
  • the interacting includes performing a 2D mask rendering process on the second image.
  • the smart mirror 100 includes a 2D mask material library.
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S1841 Processing the second image to obtain the 2D mask rendering face region of the current user;
  • S1842 Processing the 2D mask rendering face region to obtain the 2D mask rendering feature points of the face region;
  • S1843 Determining the 2D mask material according to the input of the current user; and
  • S1844 Performing matching fusion processing on the 2D mask material and the second image according to the 2D mask rendering feature points to obtain a 2D mask rendered image.
  • the interaction device 18 includes a second confirmation unit 181 and a fourth processing unit 182.
  • step S1841 may be implemented by the first processing unit 141, step S1842 by the second processing unit 142, step S1843 by the second confirmation unit 181, and step S1844 by the fourth processing unit 182.
  • the first processing unit 141 is further configured to process the second image to obtain the 2D mask rendering face region of the current user; the second processing unit 142 is further configured to process the 2D mask rendering face region to obtain its 2D mask rendering feature points; the second confirmation unit 181 is configured to determine the 2D mask material according to the input of the current user; and the fourth processing unit 182 is configured to perform matching fusion processing on the 2D mask material and the second image according to the 2D mask rendering feature points to obtain the 2D mask rendered image.
  • the 2D mask rendering feature points mainly include an eye, a nose, and a mouth.
  • the 2D mask materials include a classic white mask, Peking Opera masks, animal faces, cartoon masks, and so on.
  • the fourth processing unit 182 performs a matching fusion process on the 2D mask material and the second image.
  • the smart mirror 100 then displays the 2D mask rendered image after the matching fusion processing. As shown in FIG. 17, when the user clicks the white-mask 2D mask material, the smart mirror 100 displays the processed 2D mask rendered image. In this way, the user can intuitively feel the effect of wearing a mask, increasing the fun of using the smart mirror 100.
  • the smart mirror 100 displays the 2D mask rendered image frame by frame in the form of a dynamic frame.
  • even when the current user moves, the 2D mask still matches the 2D mask rendering face region, so the user can view the rendering effect dynamically.
  • the smart mirror 100 can provide the user with a feeling of using the mask in the mirror.
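The matching fusion of a 2D mask with the face region can be illustrated by estimating a similarity transform from a few anchor points (eyes, mouth) annotated on the mask material to the detected feature points; this is one plausible realization, not the patent's stated algorithm:

```python
import cv2
import numpy as np

def warp_mask_to_face(mask_rgba: np.ndarray, mask_anchor_pts: np.ndarray,
                      face_feature_pts: np.ndarray,
                      frame_shape: tuple) -> np.ndarray:
    """Warp the 2D mask material so that anchor points annotated on it
    (eyes, mouth) land on the detected 2D mask rendering feature points."""
    M, _ = cv2.estimateAffinePartial2D(mask_anchor_pts.astype(np.float32),
                                       face_feature_pts.astype(np.float32))
    h, w = frame_shape[:2]
    return cv2.warpAffine(mask_rgba, M, (w, h))  # blend like the sticker overlay
```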
  • the interacting includes performing a 3D cartoon rendering process on the second image.
  • the smart mirror 100 includes a 3D engine, a general 3D face model, and a 3D cartoon material library.
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S1851 Processing the second image to obtain the 3D cartoon image rendering face region of the current user;
  • S1852 Processing the 3D cartoon image rendering face region to obtain the 3D cartoon image rendering feature points of the face region;
  • S1853 Acquiring the first posture parameter of the current user according to the general 3D face model and the 3D cartoon image rendering feature points;
  • S1854 Determining the 3D cartoon image material according to the input of the current user; and
  • S1855 Controlling the 3D engine to perform 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon image material.
  • the interaction device 18 includes an obtaining unit 183, a second confirmation unit 181, and a fourth processing unit 182.
  • step S1851 may be implemented by the first processing unit 141, step S1852 by the second processing unit 142, step S1853 by the obtaining unit 183, step S1854 by the second confirmation unit 181, and step S1855 by the fourth processing unit 182.
  • the first processing unit 141 is further configured to process the second image to obtain the 3D cartoon image rendering face region of the current user; the second processing unit 142 is further configured to process the 3D cartoon image rendering face region to obtain its 3D cartoon image rendering feature points; the obtaining unit 183 is configured to acquire the current user's first posture parameter according to the general 3D face model and the 3D cartoon image rendering feature points; the second confirmation unit 181 is configured to determine the 3D cartoon image material according to the current user's input; and the fourth processing unit 182 is configured to control the 3D engine to perform 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon image material.
  • performing 3D cartoon image rendering processing on the second image means acquiring the motion of the person in the second image and controlling the 3D cartoon image to imitate that motion.
  • the 3D cartoon image material library includes a variety of 3D cartoon image materials, such as SpongeBob SquarePants, Doraemon, Kung Fu Panda, and Winnie the Pooh.
  • the 3D cartoon image rendering feature points include the eyes, nose, mouth, head and so on of the 3D cartoon image rendering face region.
  • the first posture parameter includes the deflection angle of the head, the opening and closing of the eyes, the motion of the mouth, and the like.
  • matching the general 3D face model with the 3D cartoon image rendering feature points converts the 2D planar image captured by the camera 20 into 3D stereoscopic posture parameters, that is, the first posture parameter.
  • the 3D engine can then perform the 3D cartoon image rendering processing on the second image according to the first posture parameter and the 3D cartoon material, so that the 3D cartoon image follows the current user's head and face motion with a 3D stereoscopic effect. As shown in FIG. 20, the 3D cartoon material selected by the current user is Doraemon; when the user widens the eyes and laughs, the cartoon cat simultaneously widens its eyes and laughs, realizing real-time imitation and following. In this way, the user's interaction with the smart mirror 100 is greatly enhanced.
  • the obtained 3D cartoon image rendering feature points may also be matched with a general 3D face model in the general 3D face model library to obtain the 3D stereoscopic posture parameters.
  • the general 3D face model library stores general 3D face models of different shapes. In this way, different general 3D face models can be selected according to the differences in users' heads, faces, facial features and so on, improving the accuracy of the 3D stereoscopic posture parameters, further optimizing the 3D cartoon image rendering effect, and making the 3D cartoon image's imitation and following more accurate.
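Converting 2D feature points plus a generic 3D face model into a stereoscopic posture parameter is exactly what a PnP solver does, so one plausible sketch uses OpenCV's solvePnP. The model coordinates and the pinhole camera approximation below are assumptions, not values from the patent:

```python
import cv2
import numpy as np

# Illustrative generic 3D face model: nose tip, chin, left/right eye corner,
# left/right mouth corner (millimetres); values are not from the patent.
GENERIC_3D_FACE = np.array([
    [0.0, 0.0, 0.0], [0.0, -63.6, -12.5],
    [-43.3, 32.7, -26.0], [43.3, 32.7, -26.0],
    [-28.9, -28.9, -24.1], [28.9, -28.9, -24.1]], dtype=np.float64)

def first_posture_parameter(image_pts: np.ndarray, frame_size: tuple):
    """Fit the six detected 2D feature points to the generic 3D face model to
    get the head rotation/translation that drives the 3D cartoon image."""
    h, w = frame_size
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(GENERIC_3D_FACE, image_pts, cam,
                                  np.zeros(4))   # assume no lens distortion
    return rvec, tvec                            # hand these to the 3D engine
```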
  • the interacting includes performing a virtual glasses rendering process on the second image.
  • the smart mirror 100 includes a 3D engine, a universal 3D face model, and a virtual glasses material library.
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S1861 Processing the second image to obtain the virtual glasses rendering face region of the current user;
  • S1862 Processing the virtual glasses rendering face region to obtain the virtual glasses rendering feature points of the face region;
  • S1863 Acquiring the second posture parameter of the current user according to the general 3D face model and the virtual glasses rendering feature points;
  • S1864 Determining the virtual glasses material according to the input of the current user; and
  • S1865 Controlling the 3D engine to perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material.
  • the interaction device 18 includes an obtaining unit 183, a second confirmation unit 181, and a fourth processing unit 182.
  • step S1861 may be implemented by the first processing unit 141, step S1862 by the second processing unit 142, step S1863 by the obtaining unit 183, step S1864 by the second confirmation unit 181, and step S1865 by the fourth processing unit 182.
  • the first processing unit 141 is further configured to process the second image to obtain the virtual glasses rendering face region of the current user; the second processing unit 142 is further configured to process the virtual glasses rendering face region to obtain its virtual glasses rendering feature points; the obtaining unit 183 is configured to acquire the current user's second posture parameter according to the general 3D face model and the virtual glasses rendering feature points; the second confirmation unit 181 is configured to determine the virtual glasses material according to the input of the current user; and the fourth processing unit 182 is configured to control the 3D engine to perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material.
  • performing virtual glasses rendering processing on the second image means putting virtual glasses on the person in the second image; the virtual glasses move with the motion of the person's head to realize imitation and following.
  • the virtual glasses material library includes a variety of virtual glasses materials of different colors and shapes.
  • the virtual glasses rendering feature points mainly include virtual glasses rendering the head and eye portions of the face region.
  • the virtual glasses rendering posture parameters include the motions of the head and the eyes.
  • matching the general 3D face model with the virtual glasses rendering feature points converts the 2D planar image captured by the camera 20 into 3D stereoscopic posture parameters, that is, the second posture parameter.
  • the 3D engine can perform virtual glasses rendering processing on the second image according to the second posture parameter and the virtual glasses material; the current user then sees in the smart mirror 100 the 3D stereoscopic display effect of wearing the glasses.
  • the virtual glasses can also move in real time with the head, thereby achieving an exact match between the virtual glasses and the eye portion.
  • the virtual glasses material selected by the user is thick-framed black glasses, and the smart mirror 100 displays the image of the user wearing the thick-framed black glasses.
  • the thick frame black glasses also accurately match the user's eyes when the user's head is rotated.
  • the user can refer to the effect of the virtual glasses rendering processing to select a style of glasses that suits him or her, which further increases the usefulness and practicability of the smart mirror 100 and adds to its fun.
  • the obtained virtual glasses rendering feature points can also be matched with a general 3D face model in the general 3D face model library to obtain the 3D stereoscopic posture parameters.
  • the general 3D face model library stores general 3D face models of different shapes. In this way, different general 3D face models can be selected according to the differences in users' heads, faces, facial features and so on, improving the accuracy of the 3D stereoscopic posture parameters, further optimizing the virtual glasses rendering effect, and making the match between the virtual glasses and the user's eyes more precise.
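For the glasses to track the eyes, a simplified 2D version of the second posture parameter can be derived directly from the two eye feature points; the 100 px design width is a hypothetical property of the glasses material:

```python
import numpy as np

def glasses_placement(left_eye: np.ndarray, right_eye: np.ndarray):
    """Derive centre, scale and roll angle for the virtual glasses material
    from the two eye feature points (a 2D simplification of the pose)."""
    delta = right_eye - left_eye
    scale = np.linalg.norm(delta) / 100.0   # 100 px: assumed design width of asset
    angle = float(np.degrees(np.arctan2(delta[1], delta[0])))  # head roll
    center = (left_eye + right_eye) / 2.0
    return center, scale, angle
```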
  • the interaction includes performing a virtual hairstyle rendering process on the second image.
  • the smart mirror 100 includes a 3D engine, a general 3D face model, and a virtual hairstyle material library.
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S1871 Processing the second image to obtain the virtual hairstyle rendering face region of the current user;
  • S1872 Processing the virtual hairstyle rendering face region to obtain the virtual hairstyle rendering feature points of the face region;
  • S1873 Acquiring the third posture parameter of the current user according to the general 3D face model and the virtual hairstyle rendering feature points;
  • S1874 Determining the virtual hairstyle material according to the input of the current user; and
  • S1875 Controlling the 3D engine to perform virtual hairstyle rendering processing on the second image according to the third posture parameter and the virtual hairstyle material.
  • the interaction device 18 includes an obtaining unit 183, a second confirmation unit 181, and a fourth processing unit 182.
  • step S1871 may be implemented by the first processing unit 141, step S1872 by the second processing unit 142, step S1873 by the obtaining unit 183, step S1874 by the second confirmation unit 181, and step S1875 by the fourth processing unit 182.
  • the first processing unit 141 is further configured to process the second image to obtain the virtual hairstyle rendering face region of the current user; the second processing unit 142 is further configured to process the virtual hairstyle rendering face region to obtain its virtual hairstyle rendering feature points; the obtaining unit 183 is configured to acquire the current user's third posture parameter according to the general 3D face model and the virtual hairstyle rendering feature points; the second confirmation unit 181 is configured to determine the virtual hairstyle material according to the input of the current user; and the fourth processing unit 182 is configured to control the 3D engine to perform virtual hairstyle rendering processing on the second image according to the third posture parameter and the virtual hairstyle material.
  • performing virtual hairstyle rendering processing on the second image means putting a virtual hairstyle on the person in the second image; the virtual hairstyle moves with the motion of the person's head.
  • the virtual hairstyle material library includes a variety of virtual hairstyle materials of different shapes and colors.
  • the virtual hairstyle rendering feature point mainly includes the head portion of the current user.
  • the virtual hairstyle rendering posture parameter includes the motion of the head.
  • matching the general 3D face model with the virtual hairstyle rendering feature points converts the 2D planar image captured by the camera 20 into 3D stereoscopic posture parameters, that is, the third posture parameter.
  • the 3D engine can perform the virtual hairstyle rendering process on the second image according to the third posture parameter and the virtual hairstyle material.
  • the current user can then see the 3D stereoscopic effect of the virtual hairstyle being tried in the smart mirror 100.
  • the virtual hairstyle can also move in real time with the head, thereby achieving an exact match between the virtual hairstyle and the head.
  • the virtual hairstyle material selected by the current user is a short hairstyle, and the smart mirror 100 displays the image of the user wearing the short hairstyle. When the user's head rotates, the short hairstyle still exactly matches the user's head. In this way, the user can refer to the effect of the virtual hairstyle rendering processing to select a suitable hairstyle, which also increases the practicality and fun of the smart mirror 100.
  • the obtained virtual hairstyle rendering feature points can also be matched with a general 3D face model in the general 3D face model library to obtain the 3D stereoscopic posture parameters.
  • the general 3D face model library stores general 3D face models of different shapes. In this way, different general 3D face models can be selected according to the differences in users' heads, faces, facial features and so on, improving the accuracy of the 3D stereoscopic posture parameters, further optimizing the virtual hairstyle rendering effect, and making the match between the virtual hairstyle and the user's head more precise.
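Selecting among several general 3D face models, as described above, could be done by keeping the model whose pose fit reprojects the detected feature points with the least error; this is a sketch of that idea under the assumption that all models share one landmark ordering:

```python
import cv2
import numpy as np

def select_face_model(models: list, image_pts: np.ndarray,
                      cam: np.ndarray) -> np.ndarray:
    """Keep the general 3D face model whose solvePnP fit reprojects the
    detected feature points with the smallest error."""
    best, best_err = None, np.inf
    for model in models:                      # same landmark order as image_pts
        ok, rvec, tvec = cv2.solvePnP(model, image_pts, cam, np.zeros(4))
        proj, _ = cv2.projectPoints(model, rvec, tvec, cam, np.zeros(4))
        err = float(np.linalg.norm(proj.reshape(-1, 2) - image_pts))
        if ok and err < best_err:
            best, best_err = model, err
    return best
```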
  • the interaction further includes providing daily life care guidance to the current user.
  • Step S18: controlling the smart mirror 100 to interact with the current user according to the input of the current user and outputting the interaction information includes:
  • S188 Provide daily life care guidance for the current user according to the input of the user.
  • the interaction device 18 further includes a coaching unit 185.
  • Step S188 can be implemented by the coaching unit 185.
  • the guiding unit 185 is configured to provide daily life care guidance for the current user according to the user's input.
  • daily life care guidance includes teaching the user how to brush the teeth correctly, wash the face properly, perform facial massage, and the like.
  • for example, the smart mirror 100 displays daily care guidance on tooth brushing in the form of a video or pictures. In this way, the practicality of the smart mirror 100 is increased.
  • the second confirmation unit 181 used in the cute-face processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and the virtual hairstyle rendering processing is the same unit. That is, the second confirmation unit 181 may perform the contents of step S1813, step S1833, step S1843, step S1854, step S1864, and/or step S1874.
  • the fourth processing unit 182 used in the cute-face processing, the beauty processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and the virtual hairstyle rendering processing is likewise the same unit. That is, the fourth processing unit 182 may perform the contents of step S1814, step S1823, step S1834, step S1844, step S1855, step S1865, and/or step S1875.
  • the control method, the controller 10 and the smart mirror 100 of the embodiments of the present invention can perform the cute-face processing, the beauty processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and the virtual hairstyle rendering processing simultaneously or sequentially.
  • for example, the controller 10 can perform the cute-face processing, the beauty processing, and the 3D cartoon image rendering processing on the second image at the same time.
  • the controller 10 can also perform the beauty processing, the virtual makeup test, the virtual glasses rendering processing, and the virtual hairstyle rendering processing on the second image sequentially. In some embodiments, the processing order of the image processing modes can be changed at will.
  • a smart mirror 100 of an embodiment of the present invention includes one or more processors 30, a memory 40, and one or more programs 41.
  • one or more programs 41 are stored in the memory 40 and configured to be executed by one or more processors 30.
  • the program 41 includes instructions for executing the control method of any of the above embodiments.
  • program 41 includes instructions for performing the following steps:
  • S12 Control the camera 20 to capture the current user;
  • S14 Determine whether the current user is a registered user;
  • S16 When the current user is a registered user, control the current user to log in to the smart mirror 100; and
  • S18 Control the smart mirror 100 to interact with the current user according to the input of the current user and output the interaction information.
  • a computer readable storage medium in accordance with an embodiment of the present invention includes a computer program for use in conjunction with an electronic device capable of displaying a picture.
  • the computer program can be executed by a processor to perform the control method described in any of the above embodiments.
  • a processor can be used to perform the following steps:
  • S12 Control the camera 20 to capture the current user;
  • S14 Determine whether the current user is a registered user;
  • S16 When the current user is a registered user, control the current user to log in to the smart mirror 100; and
  • S18 Control the smart mirror 100 to interact with the current user according to the input of the current user and output the interaction information.
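Tying the pieces together, a top-level loop over steps S12-S18 might look like the following sketch; camera, mirror, and interact are hypothetical objects, and is_registered_user refers to the earlier verification sketch:

```python
def run_smart_mirror(camera, mirror, registration_library):
    """Top-level sketch of steps S12-S18; not the patent's implementation."""
    first_image = camera.capture()                 # S12: capture the current user
    user_id = is_registered_user(first_image,      # S14: registered-user check
                                 registration_library)
    if user_id is None:
        return                                     # only registered users may log in
    mirror.login(user_id)                          # S16: log the current user in
    while mirror.is_active():                      # S18: interact per user input
        user_input = mirror.read_input()
        second_image = camera.capture()
        mirror.show(interact(second_image, user_input))
```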
  • the control method, the controller 10, the smart mirror 100, and the computer readable storage medium of the embodiments of the present invention can provide the registered user with a variety of interactive functions such as the cute-face processing, the beauty processing, the virtual makeup test, the 2D mask rendering processing, the 3D cartoon image rendering processing, the virtual glasses rendering processing, and the virtual hairstyle rendering processing. In this way, the functions of the smart mirror 100 are increased, its practicality is higher, and the fun of the smart mirror 100 and the user experience are also improved.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • An ordered list of executable instructions for implementing logical functions may be embodied in any computer readable medium for use by an instruction execution system, apparatus, or device (e.g., a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them), or for use in conjunction with such an instruction execution system, apparatus, or device.
  • a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • more specific examples of the computer readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
  • the computer readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, it can be implemented by any one of the following techniques well known in the art, or a combination thereof: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Computer Graphics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Processing (AREA)

Abstract

A control method for controlling a smart mirror (100). The smart mirror (100) includes a camera (20). The control method includes: controlling the camera (20) to capture a current user (S12); determining whether the current user is a registered user (S14); when the current user is a registered user, controlling the current user to log in to the smart mirror (100) (S16); and controlling, according to input from the current user, the smart mirror (100) to interact with the current user and output interaction information (S18). A controller (10), a smart mirror (100), and a computer readable storage medium are also disclosed.

Description

Control method, controller, smart mirror and computer readable storage medium
TECHNICAL FIELD
The present invention relates to the field of smart mirrors, and more particularly to a control method, a controller, a smart mirror, and a computer readable storage medium.
BACKGROUND
At present, the main function of smart mirrors is information display, such as showing the weather, text messages and the like. However, the functions of current smart mirrors are rather limited, and the user experience is poor.
SUMMARY
The present invention aims to solve at least one of the technical problems existing in the prior art. To this end, the present invention provides a control method, a controller, a smart mirror, and a computer readable storage medium.
The control method of an embodiment of the present invention is used to control a smart mirror. The smart mirror includes a camera, and the control method includes:
controlling the camera to capture a current user;
determining whether the current user is a registered user;
when the current user is a registered user, controlling the current user to log in to the smart mirror; and
controlling, according to input from the current user, the smart mirror to interact with the current user and output interaction information.
The controller of an embodiment of the present invention is used to control a smart mirror. The smart mirror includes a camera. The controller includes a control device, a determining device, a login device, and an interaction device. The control device is configured to control the camera to capture a current user; the determining device is configured to determine whether the current user is a registered user; the login device is configured to control the current user to log in to the smart mirror when the current user is a registered user; and the interaction device is configured to control, according to input from the current user, the smart mirror to interact with the current user and output interaction information.
The smart mirror of an embodiment of the present invention includes a camera and the controller described above, the controller being electrically connected to the camera.
The smart mirror of an embodiment of the present invention includes one or more processors, a memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for performing the control method described above.
The computer readable storage medium of an embodiment of the present invention includes a computer program used in conjunction with an electronic device capable of displaying images; the computer program is executable by a processor to perform the control method described above.
The control method, controller, smart mirror, and computer readable storage medium of the embodiments of the present invention can, after the user logs in, provide the user with a variety of interactive functions including beauty makeup, cartoon image rendering, and the like. In this way, the functions of the smart mirror are further enriched, meeting the needs of the user's smart life and enhancing the user experience.
Additional aspects and advantages of the present invention will be given in part in the following description, and in part will become apparent from the following description or be learned through practice of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 2 is a schematic block diagram of a smart mirror according to some embodiments of the present invention.
Fig. 3 is a schematic structural diagram of a smart mirror according to some embodiments of the present invention.
Fig. 4 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 5 is a schematic block diagram of a determining device according to some embodiments of the present invention.
Fig. 6 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 7 is a schematic block diagram of a controller according to some embodiments of the present invention.
Fig. 8 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 9 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 10 is a schematic block diagram of an interaction device according to some embodiments of the present invention.
Fig. 11 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 12 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 13 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 14 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 15 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 16 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 17 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 18 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 19 is a schematic block diagram of an interaction device according to some embodiments of the present invention.
Fig. 20 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 21 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 22 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 23 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 24 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 25 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 26 is a schematic block diagram of an interaction device according to some embodiments of the present invention.
Fig. 27 is a schematic state diagram of a control method according to some embodiments of the present invention.
Fig. 28 is a schematic block diagram of a smart mirror according to some embodiments of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and should not be construed as limiting the present invention.
Referring to Figs. 1 to 3, the control method of embodiments of the present invention is used to control a smart mirror 100. The smart mirror 100 includes a camera 20. The control method includes the following steps:
S12: controlling the camera 20 to capture an image of the current user;
S14: determining whether the current user is a registered user;
S16: when the current user is a registered user, logging the current user in to the smart mirror 100; and
S18: controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information.
Referring again to Fig. 2, the control method of embodiments of the present invention can be implemented by the controller 10 of embodiments of the present invention. The controller 10 of embodiments of the present invention includes a control device 12, a determining device 14, a login device 16 and an interaction device 18. Step S12 can be implemented by the control device 12, step S14 by the determining device 14, step S16 by the login device 16, and step S18 by the interaction device 18.
That is to say, the control device 12 is configured to control the camera 20 to capture an image of the current user; the determining device 14 is configured to determine whether the current user is a registered user; the login device 16 is configured to log the current user in to the smart mirror 100 when the current user is a registered user; and the interaction device 18 is configured to control, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information.
The controller 10 of embodiments of the present invention is applied to the smart mirror 100 of embodiments of the present invention. That is to say, the smart mirror 100 of embodiments of the present invention includes the controller 10 of embodiments of the present invention. The smart mirror 100 of embodiments of the present invention further includes a camera 20, the camera 20 being electrically connected to the controller 10.
At present, most smart mirrors 100 can only be used to display the weather or text messages; they offer few interactive functions, their functionality is rather limited, and the user experience is poor.
After the user logs in successfully, the control method of embodiments of the present invention can provide the user with a variety of entertainment, interaction and guidance functions, such as makeup and beautification, cartoon character rendering and the like.
Specifically, the current user of the smart mirror 100 must successfully log in to the smart mirror 100 before exercising the right to use its various entertainment, interaction and guidance functions. That is to say, the entertainment, interaction and guidance functions of the smart mirror 100 can be used if and only if the current user is a registered user. In this way, the personal data and privacy of registered users are protected, and the information security of the smart mirror 100 is improved. In addition, each registered user can set a different usage style for the smart mirror 100; after a registered user logs in, the smart mirror 100 displays the usage style corresponding to that registered user, further improving the user experience.
After the current user successfully logs in to the smart mirror 100, the smart mirror 100 displays the user interface of the interactive functions, and the current user can interact with the smart mirror 100 by tapping content in that interface.
In summary, after the user logs in successfully, the control method, controller 10 and smart mirror 100 of embodiments of the present invention can provide the user with a variety of interactive functions. In this way, the functions of the smart mirror 100 are further enriched, the user's needs for a smart life are met, and the user experience is improved.
Referring to Figs. 2 and 4 together, in some embodiments the smart mirror 100 includes a registration library. The registration library includes registered feature information of the registered face regions of all registered users. Step S14 of determining whether the current user is a registered user includes:
S141: processing a first image of the current user captured by the camera 20 to obtain the face region of the current user to be tested;
S142: processing the face region to be tested to obtain feature points of the face region to be tested;
S143: processing the feature points to be tested to extract feature information of the face region to be tested;
S144: comparing the feature information to be tested with the registered feature information to obtain a comparison result; and
S145: confirming that the current user is a registered user when the comparison result is greater than a predetermined threshold.
Referring to Fig. 5, in some embodiments the determining device 14 includes a first processing unit 141, a second processing unit 142, a third processing unit 143, a comparison unit 144 and a first confirmation unit 145. Step S141 can be implemented by the first processing unit 141, step S142 by the second processing unit 142, step S143 by the third processing unit 143, step S144 by the comparison unit 144, and step S145 by the first confirmation unit 145.
That is to say, the first processing unit 141 is configured to process the first image of the current user captured by the camera 20 to obtain the face region to be tested; the second processing unit 142 is configured to process the face region to be tested to obtain its feature points; the third processing unit 143 is configured to process the feature points to extract the feature information of the face region to be tested; the comparison unit 144 is configured to compare the feature information to be tested with the registered feature information to obtain a comparison result; and the first confirmation unit 145 is configured to confirm that the current user is a registered user when the comparison result is greater than a predetermined threshold.
Specifically, the feature points to be tested include feature points of the face region to be tested such as the eyes, nose, mouth and facial contour lines. The registered feature information and the feature information to be tested include facial feature information of the registered user or of the current user, such as the relative positions of and distances between the eyes, nose and mouth, and their orientations and sizes. The feature information of the current user is compared with the registered feature information of the registered users; when the comparison result is greater than the predetermined threshold, the current user's face closely matches a registered user's face, so the current user can be determined to be a registered user. After the current user is confirmed to be a registered user, the current user is logged in to the smart mirror 100.
In this way, the smart mirror 100 provides its rich functions only to registered users, ensuring the information security of registered users.
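For illustration only, the comparison in steps S144 and S145 can be pictured as a similarity test between feature vectors followed by a threshold check. The patent does not prescribe a particular matcher; the cosine measure, the 0.75 threshold and the registry layout in this Python sketch are all assumptions of the example:

    import numpy as np

    SIMILARITY_THRESHOLD = 0.75  # stand-in for the "predetermined threshold"

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Similarity between two face feature vectors (comparison step S144).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def match_registered_user(probe, registry):
        # registry: {user_id: registered feature vector} drawn from the
        # registration library; probe: features of the face to be tested.
        best_id, best_score = None, -1.0
        for user_id, registered in registry.items():
            score = cosine_similarity(probe, registered)
            if score > best_score:
                best_id, best_score = user_id, score
        # S145: confirm only when the comparison result exceeds the threshold.
        return best_id if best_score > SIMILARITY_THRESHOLD else None

Any matcher with a tunable threshold would fill the same role; the essential point is that login succeeds only when the comparison result is greater than the predetermined threshold.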
In some embodiments, a registered user can set the usage style of the smart mirror 100, such as the colors and background pattern of the display interface. In this way, after the current user successfully logs in, the smart mirror 100 can display the usage style the current user prefers, further improving the user experience.
In the embodiments of the present invention, login verification of the current user is performed by face recognition. In other embodiments, login verification may also be performed by voice recognition, fingerprint recognition, iris recognition or the like.
Referring to Figs. 2 and 6 together, in some embodiments the control method of embodiments of the present invention further includes:
S111: controlling the camera 20 to capture an image of the registered user; and
S112: creating a personal record file for the registered user according to the registered user's input.
Referring to Fig. 7, in some embodiments the controller 10 further includes a creating device 11. Step S111 can be implemented by the control device 12, and step S112 by the creating device 11.
That is to say, the control device 12 is further configured to control the camera 20 to capture the registered user, and the creating device 11 is configured to create the personal record file of the registered user according to the registered user's input.
Specifically, after the camera 20 captures the registered user, the smart mirror 100 processes the captured image of the registered user to obtain the registered user's registered feature points and stores them in the registration library for subsequent recognition, matching and login. The registered user can edit inputs on the smart mirror 100 to create a personal record file, which includes the registered user's nickname, avatar, personal signature and so on. The registered user can also create a personal cartoon character and store it in the personal record file. After the current user is confirmed to be a registered user, that is, after the current user successfully logs in to the smart mirror 100, the smart mirror 100 displays all or part of the information in the current user's personal record file.
It should be noted that after the current user interacts with the smart mirror 100, the current user can choose to save the output interaction information, in which case the saved interaction information is also stored in the personal record file. Through the personal record file the user can view saved interaction information and/or past interaction content, further improving the user experience.
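As a rough sketch of how such a personal record file might be modelled in software, a simple record type suffices; the field names below are illustrative assumptions, not the patent's data layout:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PersonalRecordFile:
        # Fields named after the items listed in the description above.
        nickname: str
        avatar_path: str
        signature: str = ""
        cartoon_character: Optional[str] = None  # user-defined cartoon character
        saved_interactions: List[str] = field(default_factory=list)

        def save_interaction(self, output_path: str) -> None:
            # Saved interaction output stays in the record file so the
            # user can review it and past interaction content later.
            self.saved_interactions.append(output_path)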
Referring to Figs. 2 and 8 together, in some embodiments the control method of embodiments of the present invention includes:
S171: controlling the camera 20 to capture a second image of the current user; and
S172: controlling the smart mirror 100 to display the second image.
Referring again to Fig. 2, in some embodiments steps S171 and S172 can be implemented by the control device 12.
That is to say, the control device 12 is further configured to:
control the camera 20 to capture a second image of the current user; and
control the smart mirror 100 to display the second image.
In this way, the first image captured by the camera 20 is used for the face recognition login verification, while the second image captured by the camera 20 is used for the interaction between the current user and the smart mirror 100.
Referring to Fig. 9, in some embodiments the interaction includes applying cute-face processing to the second image, and the smart mirror includes a cute-face material library. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S1811: processing the second image to obtain the cute-face region of the current user;
S1812: processing the cute-face region to obtain cute-face feature points of the cute-face region;
S1813: determining a cute-face material according to the current user's input; and
S1814: matching and fusing the cute-face material with the second image according to the cute-face feature points to obtain a cute-face image.
Referring to Figs. 5 and 10 together, in some embodiments the interaction device 18 includes a second confirmation unit 181 and a fourth processing unit 182. Step S1811 can be implemented by the first processing unit 141, step S1812 by the second processing unit 142, step S1813 by the second confirmation unit 181, and step S1814 by the fourth processing unit 182.
That is to say, the first processing unit 141 is further configured to process the second image to obtain the cute-face region of the current user; the second processing unit 142 is further configured to process the cute-face region to obtain its cute-face feature points; the second confirmation unit 181 is configured to determine the cute-face material according to the current user's input; and the fourth processing unit 182 is configured to match and fuse the cute-face material with the second image according to the cute-face feature points to obtain the cute-face image.
Referring to Fig. 11, specifically, the cute-face feature points include feature points such as the eyes, nose, mouth, ears and hair. Cute-face processing superimposes decorative effects on the current user's cute-face region according to the detected cute-face feature points, for example superimposing a cute expression on the face, a virtual decoration such as a hair clip or animal ears on the head, an animal nose at the nose position, or animal whiskers on the cheeks. The cute-face material can be chosen by the user. After the cute-face image is obtained, the smart mirror 100 displays it dynamically frame by frame, forming an entertaining animated effect.
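A minimal sketch of the matching-and-fusing of step S1814, assuming the cute-face feature points have already been detected and the cute-face material is an RGBA image; NumPy is used here purely as an example toolkit, and bounds checking is omitted:

    import numpy as np

    def overlay_decoration(frame, material_rgba, center_xy):
        # Alpha-blend an RGBA decoration (animal ears, a hair clip, ...)
        # onto the frame, centred on a detected cute-face feature point.
        h, w = material_rgba.shape[:2]
        x = int(center_xy[0] - w / 2)
        y = int(center_xy[1] - h / 2)
        roi = frame[y:y + h, x:x + w].astype(np.float32)
        rgb = material_rgba[:, :, :3].astype(np.float32)
        alpha = material_rgba[:, :, 3:4].astype(np.float32) / 255.0
        frame[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
        return frame

Repeating this overlay on every captured frame yields the frame-by-frame animated effect described above.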
Referring to Fig. 12, in some embodiments the interaction includes applying beauty processing to the second image. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S1821: processing the second image to obtain the beauty face region of the current user;
S1822: processing the beauty face region to obtain beauty feature points of the beauty face region; and
S1823: applying beauty processing to the second image according to the current user's input and the beauty feature points to obtain a beauty image.
Referring again to Figs. 5 and 10, in some embodiments the interaction device 18 includes a fourth processing unit 182. Step S1821 can be implemented by the first processing unit 141, step S1822 by the second processing unit 142, and step S1823 by the fourth processing unit 182.
That is to say, the first processing unit 141 is further configured to process the second image to obtain the beauty face region of the current user; the second processing unit 142 is further configured to process the beauty face region to obtain its beauty feature points; and the fourth processing unit 182 is configured to apply beauty processing to the second image according to the current user's input and the beauty feature points to obtain the beauty image.
The beauty processing includes one or more of a whitening filter, a ruddy filter, a face-slimming module and an eye-enlarging module. The beauty feature points include the face, the eyes and so on. The user applies beauty processing to the second image by tapping the corresponding beauty options. For example, as shown in Fig. 13, after the user selects the whitening filter, the fourth processing unit 182 whitens the face within the beauty face region of the second image. In this way, the user can freely choose beauty functions to apply to the second image, and the processed beauty image is displayed on the smart mirror 100. The user can see his or her own image in the smart mirror 100, improving the viewing experience.
Further, the smart mirror 100 displays the beauty image dynamically frame by frame. That is to say, the camera 20 captures the current user in real time to obtain the beauty feature points, and beauty processing is applied to the images captured in real time. In this way, even when the current user is moving, for example turning the head by a certain angle, the smart mirror 100 displays the beautified image of the current user in real time.
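One plausible reading of the whitening filter applied frame by frame is edge-preserving smoothing plus a brightness lift inside the detected face box. The filter choice, the parameters and the placeholder face detector below are assumptions of this sketch, not the patent's algorithm:

    import cv2

    def detect_face(frame):
        # Placeholder for the mirror's face tracker; returns a fixed
        # central box so the sketch stays self-contained.
        h, w = frame.shape[:2]
        return w // 4, h // 4, w // 2, h // 2

    def whiten_face(frame, face_box, strength=1.15):
        # Edge-preserving smoothing plus a brightness lift, applied only
        # inside the face box; the parameters are arbitrary examples.
        x, y, w, h = face_box
        face = frame[y:y + h, x:x + w]
        face = cv2.bilateralFilter(face, 9, 75, 75)
        face = cv2.convertScaleAbs(face, alpha=strength, beta=10)
        frame[y:y + h, x:x + w] = face
        return frame

    # Frame-by-frame loop: capture, beautify, display, repeat, so the
    # beautified image follows the user even while the head is moving.
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = whiten_face(frame, detect_face(frame))
        cv2.imshow("smart mirror", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break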
Referring to Fig. 14, in some embodiments the interaction includes applying virtual makeup processing to the second image, and the smart mirror includes a makeup material library. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S1831: processing the second image to obtain the makeup try-on face region of the current user;
S1832: processing the makeup try-on face region to obtain makeup feature points of the makeup try-on face region;
S1833: determining a makeup material according to the current user's input; and
S1834: matching and fusing the makeup material with the second image according to the makeup feature points to obtain a virtual makeup image.
Referring again to Figs. 5 and 10, in some embodiments the interaction device 18 includes a second confirmation unit 181 and a fourth processing unit 182. Step S1831 can be implemented by the first processing unit 141, step S1832 by the second processing unit 142, step S1833 by the second confirmation unit 181, and step S1834 by the fourth processing unit 182.
That is to say, the first processing unit 141 is further configured to process the second image to obtain the makeup try-on face region of the current user; the second processing unit 142 is further configured to process the makeup try-on face region to obtain its makeup feature points; the second confirmation unit 181 is configured to determine the makeup material according to the current user's input; and the fourth processing unit 182 is configured to match and fuse the makeup material with the second image according to the makeup feature points to obtain the virtual makeup image.
The makeup material includes one or more of eye shadow, eyeliner, blush, lip gloss and eyebrow materials. The makeup feature points include feature points such as the eyes, nose, eyebrows and cheeks. After the current user taps a makeup material on the operation interface of the smart mirror 100, the fourth processing unit 182 matches and fuses the selected makeup material with the second image according to the determined makeup feature points, and the smart mirror 100 displays the resulting virtual makeup image. For example, as shown in Fig. 15, after the current user taps the eye shadow and lip gloss materials, the smart mirror 100 displays the processed virtual makeup image. In actual use, the current user can refer to the virtual makeup image displayed in the smart mirror 100 while applying makeup, to settle on a preferred look. In this way, the interaction between the user and the smart mirror 100 is enhanced and the user experience is improved.
Further, the smart mirror 100 displays the virtual makeup image dynamically frame by frame, so that even when the current user is moving, the smart mirror 100 continuously displays the virtual makeup image.
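For instance, the matching-and-fusing of step S1834 for a lip gloss material might fill the lip polygon given by the makeup feature points and blend a colour into it. The landmark format, colour and blend weight below are invented for this sketch:

    import cv2
    import numpy as np

    def apply_lip_gloss(frame, lip_points, color_bgr=(80, 60, 220), weight=0.4):
        # Fill the lip polygon defined by the detected makeup feature
        # points, then blend the gloss colour only inside that mask.
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [np.asarray(lip_points, dtype=np.int32)], 255)
        gloss = np.empty_like(frame)
        gloss[:] = color_bgr
        tinted = cv2.addWeighted(frame, 1.0 - weight, gloss, weight, 0)
        frame[mask == 255] = tinted[mask == 255]
        return frame

Eye shadow, blush and eyebrow materials would follow the same mask-and-blend pattern over their own feature-point regions.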
Referring to Fig. 16, in some embodiments the interaction includes applying 2D mask rendering to the second image, and the smart mirror 100 includes a 2D mask material library. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S1841: processing the second image to obtain the 2D mask rendering face region of the current user;
S1842: processing the 2D mask rendering face region to obtain 2D mask rendering feature points of the 2D mask rendering face region;
S1843: determining a 2D mask material according to the current user's input; and
S1844: matching and fusing the 2D mask material with the second image according to the 2D mask rendering feature points to obtain a 2D mask rendering image.
Referring again to Figs. 5 and 10, in some embodiments the interaction device 18 includes a second confirmation unit 181 and a fourth processing unit 182. Step S1841 can be implemented by the first processing unit 141, step S1842 by the second processing unit 142, step S1843 by the second confirmation unit 181, and step S1844 by the fourth processing unit 182.
That is to say, the first processing unit 141 is further configured to process the second image to obtain the 2D mask rendering face region of the current user; the second processing unit 142 is further configured to process the 2D mask rendering face region to obtain its 2D mask rendering feature points; the second confirmation unit 181 is configured to determine the 2D mask material according to the current user's input; and the fourth processing unit 182 is configured to match and fuse the 2D mask material with the second image according to the 2D mask rendering feature points to obtain the 2D mask rendering image.
Specifically, the 2D mask rendering feature points mainly include the eyes, nose and mouth. The 2D mask materials include a classic white mask, Peking opera masks, animal masks, cartoon character masks and so on. After the current user taps a 2D mask material, the fourth processing unit 182 matches and fuses the 2D mask material with the second image, and the smart mirror 100 displays the resulting 2D mask rendering image. As shown in Fig. 17, when the user taps the white mask material, the smart mirror 100 displays the processed 2D mask rendering image. In this way, the user can intuitively see the effect of wearing the mask, making the smart mirror 100 more fun to use.
Further, the smart mirror 100 displays the 2D mask rendering image dynamically frame by frame. When the user's head moves, the 2D mask still matches the 2D mask rendering face region, so the user can watch the rendering effect dynamically; the smart mirror 100 gives the user the feeling of actually wearing a mask in front of a mirror.
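One common way to keep a 2D mask matched to a moving face is to estimate a similarity transform from anchor points in the mask template to the detected feature points on every frame. The template coordinates below are invented, and OpenCV is only an example toolkit:

    import cv2
    import numpy as np

    # Eye and mouth anchor points in the mask template image (invented).
    TEMPLATE_PTS = np.float32([[80, 100], [160, 100], [120, 190]])

    def warp_mask_to_face(mask_rgba, face_pts, frame_shape):
        # face_pts: the same three feature points (two eyes, mouth)
        # detected in the current frame. A similarity transform keeps
        # the mask aligned with the face region as the head moves.
        matrix, _ = cv2.estimateAffinePartial2D(TEMPLATE_PTS,
                                                np.float32(face_pts))
        h, w = frame_shape[:2]
        return cv2.warpAffine(mask_rgba, matrix, (w, h))

The warped RGBA mask can then be alpha-blended onto the frame exactly as in the cute-face sketch above.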
Referring to Fig. 18, in some embodiments the interaction includes applying 3D cartoon character rendering to the second image, and the smart mirror 100 includes a 3D engine, a generic 3D face model and a 3D cartoon character material library. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S1851: processing the second image to obtain the 3D cartoon character rendering face region of the current user;
S1852: processing the 3D cartoon character rendering face region to obtain 3D cartoon character rendering feature points of the 3D cartoon character rendering face region;
S1853: obtaining a first pose parameter of the current user, i.e. the 3D cartoon character rendering pose parameter, according to the generic 3D face model and the 3D cartoon character rendering feature points;
S1854: determining a 3D cartoon character material according to the current user's input; and
S1855: controlling the 3D engine to apply 3D cartoon character rendering to the second image according to the first pose parameter and the 3D cartoon character material.
Referring to Figs. 5 and 19 together, the interaction device 18 includes an acquisition unit 183, a second confirmation unit 181 and a fourth processing unit 182. Step S1851 can be implemented by the first processing unit 141, step S1852 by the second processing unit 142, step S1853 by the acquisition unit 183, step S1854 by the second confirmation unit 181, and step S1855 by the fourth processing unit 182.
That is to say, the first processing unit 141 is further configured to process the second image to obtain the 3D cartoon character rendering face region of the current user; the second processing unit 142 is further configured to process the 3D cartoon character rendering face region to obtain its 3D cartoon character rendering feature points; the acquisition unit 183 is configured to obtain the first pose parameter of the current user according to the generic 3D face model and the 3D cartoon character rendering feature points; the second confirmation unit 181 is configured to determine the 3D cartoon character material according to the current user's input; and the fourth processing unit 182 is configured to control the 3D engine to apply 3D cartoon character rendering to the second image according to the first pose parameter and the 3D cartoon character material.
Specifically, applying 3D cartoon character rendering to the second image means capturing the motions of the person in the second image and controlling a 3D cartoon character to imitate and follow those motions. The 3D cartoon character material library includes a variety of 3D cartoon character materials, such as SpongeBob, Doraemon, Kung Fu Panda and Winnie the Pooh. The 3D cartoon character rendering feature points include the eyes, nose, mouth and head of the 3D cartoon character rendering face region. The first pose parameter includes the deflection angle of the head, the opening and closing of the eyes, the movement of the mouth and so on. Matching the generic 3D face model against the 3D cartoon character rendering feature points converts the 2D planar image captured by the camera 20 into 3D pose parameters, i.e. the first pose parameter. In this way, after the current user taps a 3D cartoon character material, the 3D engine can apply 3D cartoon character rendering to the second image according to the first pose parameter and the selected 3D cartoon material, so that the 3D cartoon character performs the character's motions in a 3D display that follows the current user's head and facial motions. As shown in Fig. 20, the current user has selected Doraemon as the 3D cartoon material; when the user opens the eyes wide and laughs, Doraemon simultaneously opens its eyes wide and laughs, imitating the user in real time. In this way, the fun of the interaction between the user and the smart mirror 100 is greatly enhanced.
In some embodiments, the obtained 3D cartoon character rendering feature points can also be matched against one of the generic 3D face models in a generic 3D face model library, which stores generic 3D faces of different shapes, to obtain the 3D pose parameters. In this way, a different generic 3D face model can be selected for matching according to differences between users' heads, faces and facial features, improving the accuracy of the 3D pose parameters, further optimizing the 3D cartoon character rendering effect and making the cartoon character's imitation more precise.
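Converting detected 2D feature points into 3D pose parameters against a generic 3D face model is a perspective-n-point problem. The sketch below uses OpenCV's solvePnP with six illustrative model points and a crude pinhole camera; none of these values is specified by the patent:

    import cv2
    import numpy as np

    # Six points of a generic 3D face model (nose tip, chin, eye corners,
    # mouth corners), in millimetres; illustrative values only.
    MODEL_PTS = np.float32([
        [0.0, 0.0, 0.0],        # nose tip
        [0.0, -63.6, -12.5],    # chin
        [-43.3, 32.7, -26.0],   # left eye outer corner
        [43.3, 32.7, -26.0],    # right eye outer corner
        [-28.9, -28.9, -24.1],  # left mouth corner
        [28.9, -28.9, -24.1],   # right mouth corner
    ])

    def head_pose(image_pts, frame_w, frame_h):
        # image_pts: the same six landmarks detected in the 2D frame.
        # Returns rotation and translation vectors, i.e. a plausible
        # form of the "first pose parameter" driving the character.
        focal = frame_w  # crude pinhole approximation
        cam = np.float32([[focal, 0, frame_w / 2],
                          [0, focal, frame_h / 2],
                          [0, 0, 1]])
        ok, rvec, tvec = cv2.solvePnP(MODEL_PTS, np.float32(image_pts),
                                      cam, None)
        return rvec, tvec

The 3D engine can then retarget the recovered rotation (and separately measured eye and mouth openings) onto the cartoon character's rig each frame.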
Referring to Fig. 21, in some embodiments the interaction includes applying virtual glasses rendering to the second image, and the smart mirror 100 includes a 3D engine, a generic 3D face model and a virtual glasses material library. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S1861: processing the second image to obtain the virtual glasses rendering face region of the current user;
S1862: processing the virtual glasses rendering face region to obtain virtual glasses rendering feature points of the virtual glasses rendering face region;
S1863: obtaining a second pose parameter of the current user, i.e. the virtual glasses rendering pose parameter, according to the generic 3D face model and the virtual glasses rendering feature points;
S1864: determining a virtual glasses material according to the current user's input; and
S1865: controlling the 3D engine to apply virtual glasses rendering to the second image according to the second pose parameter and the virtual glasses material.
Referring again to Figs. 5 and 19, in some embodiments the interaction device 18 includes an acquisition unit 183, a second confirmation unit 181 and a fourth processing unit 182. Step S1861 can be implemented by the first processing unit 141, step S1862 by the second processing unit 142, step S1863 by the acquisition unit 183, step S1864 by the second confirmation unit 181, and step S1865 by the fourth processing unit 182.
That is to say, the first processing unit 141 is further configured to process the second image to obtain the virtual glasses rendering face region of the current user; the second processing unit 142 is further configured to process the virtual glasses rendering face region to obtain its virtual glasses rendering feature points; the acquisition unit 183 is configured to obtain the second pose parameter of the current user according to the generic 3D face model and the virtual glasses rendering feature points; the second confirmation unit 181 is configured to determine the virtual glasses material according to the current user's input; and the fourth processing unit 182 is configured to control the 3D engine to apply virtual glasses rendering to the second image according to the second pose parameter and the virtual glasses material.
Specifically, applying virtual glasses rendering to the second image means putting virtual glasses on the person in the second image; the virtual glasses move with the head of the person, imitating and following it. The virtual glasses material library includes virtual glasses materials of various colors and styles. The virtual glasses rendering feature points mainly include the head and eye parts of the virtual glasses rendering face region, and the virtual glasses rendering pose parameter describes the motion of the head and eyes. Matching the generic 3D face model against the virtual glasses rendering feature points converts the 2D planar image captured by the camera 20 into 3D pose parameters, i.e. the second pose parameter. In this way, after the current user taps a virtual glasses material, the 3D engine can apply virtual glasses rendering to the second image according to the second pose parameter and the selected material. The current user then sees in the smart mirror 100 a 3D display of himself or herself wearing the glasses. When the current user's head and eyes move, the virtual glasses move with the head in real time, so the virtual glasses remain precisely matched to the eye region. As shown in Fig. 22, the user has tapped a pair of black thick-framed glasses, and the smart mirror 100 displays the image of the user wearing them; when the user's head rotates, the glasses still match the user's eyes precisely. In this way, the user can refer to the virtual glasses rendering result to choose a style of glasses that suits him or her, which makes the smart mirror 100 more functional and practical, as well as more fun to use.
In some embodiments, the obtained virtual glasses rendering feature points can also be matched against one of the generic 3D face models in a generic 3D face model library, which stores generic 3D faces of different shapes, to obtain the 3D pose parameters. In this way, a different generic 3D face model can be selected for matching according to differences between users' heads, faces and facial features, improving the accuracy of the 3D pose parameters, further optimizing the virtual glasses rendering effect and matching the virtual glasses to the user's eyes more precisely.
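Once the second pose parameter (a rotation and translation, as in the PnP sketch above) is known, anchoring the virtual glasses reduces to transforming the glasses model by the head pose and projecting it back into the frame. The anchor coordinates below are invented:

    import cv2
    import numpy as np

    # 3D anchor points of the glasses model in the face model's frame
    # (bridge and two temples); invented coordinates.
    GLASSES_PTS = np.float32([[0.0, 20.0, -20.0],
                              [-45.0, 25.0, -25.0],
                              [45.0, 25.0, -25.0]])

    def project_glasses(rvec, tvec, cam_matrix):
        # Rotate/translate the glasses with the head pose, then project
        # to 2D so the rendered glasses follow the head in real time.
        pts2d, _ = cv2.projectPoints(GLASSES_PTS, rvec, tvec,
                                     cam_matrix, None)
        return pts2d.reshape(-1, 2)

The same projection step, driven by the third pose parameter, would serve the virtual hairstyle rendering described next.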
Referring to Fig. 23, in some embodiments the interaction includes applying virtual hairstyle rendering to the second image, and the smart mirror 100 includes a 3D engine, a generic 3D face model and a virtual hairstyle material library. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S1871: processing the second image to obtain the virtual hairstyle rendering face region of the current user;
S1872: processing the virtual hairstyle rendering face region to obtain virtual hairstyle rendering feature points of the virtual hairstyle rendering face region;
S1873: obtaining a third pose parameter of the current user according to the generic 3D face model and the virtual hairstyle rendering feature points;
S1874: determining a virtual hairstyle material according to the current user's input; and
S1875: controlling the 3D engine to apply virtual hairstyle rendering to the second image according to the third pose parameter and the virtual hairstyle material.
Referring again to Figs. 5 and 19, in some embodiments the interaction device 18 includes an acquisition unit 183, a second confirmation unit 181 and a fourth processing unit 182. Step S1871 can be implemented by the first processing unit 141, step S1872 by the second processing unit 142, step S1873 by the acquisition unit 183, step S1874 by the second confirmation unit 181, and step S1875 by the fourth processing unit 182.
That is to say, the first processing unit 141 is further configured to process the second image to obtain the virtual hairstyle rendering face region of the current user; the second processing unit 142 is further configured to process the virtual hairstyle rendering face region to obtain its virtual hairstyle rendering feature points; the acquisition unit 183 is configured to obtain the third pose parameter of the current user according to the generic 3D face model and the virtual hairstyle rendering feature points; the second confirmation unit 181 is configured to determine the virtual hairstyle material according to the current user's input; and the fourth processing unit 182 is configured to control the 3D engine to apply virtual hairstyle rendering to the second image according to the third pose parameter and the virtual hairstyle material.
Specifically, applying virtual hairstyle rendering to the second image means putting a virtual hairstyle on the person in the second image; the virtual hairstyle follows the motion of the person's head. The virtual hairstyle material library includes virtual hairstyle materials of various styles and colors. The virtual hairstyle rendering feature points mainly include the head of the current user, and the virtual hairstyle rendering pose parameter describes the motion of the head. Matching the generic 3D face model against the virtual hairstyle rendering feature points converts the 2D planar image captured by the camera 20 into 3D pose parameters, i.e. the third pose parameter. In this way, after the current user taps a virtual hairstyle material, the 3D engine can apply virtual hairstyle rendering to the second image according to the third pose parameter and the selected material. The current user can then see in the smart mirror 100 a 3D display of himself or herself trying on the virtual hairstyle. When the current user's head moves, the virtual hairstyle moves with it in real time, so the virtual hairstyle remains precisely matched to the head. As shown in Fig. 24, the current user has tapped a short hairstyle, and the smart mirror 100 displays the image of the user wearing it; when the user's head rotates, the short hairstyle still matches the user's head precisely. In this way, the user can refer to the virtual hairstyle rendering result to choose a hairstyle that suits him or her, which makes the smart mirror 100 more practical and more fun to use.
In some embodiments, the obtained virtual hairstyle rendering feature points can also be matched against one of the generic 3D face models in a generic 3D face model library, which stores generic 3D faces of different shapes, to obtain the 3D pose parameters. In this way, a different generic 3D face model can be selected for matching according to differences between users' heads, faces and facial features, improving the accuracy of the 3D pose parameters, further optimizing the virtual hairstyle rendering effect and matching the virtual hairstyle to the user's head more precisely.
Referring to Fig. 25, in some embodiments the interaction further includes providing the current user with daily life care guidance. Step S18 of controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information includes:
S188: providing the current user with daily life care guidance according to the user's input.
Referring to Fig. 26, in some embodiments the interaction device 18 further includes a guidance unit 185. Step S188 can be implemented by the guidance unit 185.
That is to say, the guidance unit 185 is configured to provide the current user with daily life care guidance according to the user's input.
Specifically, daily life care guidance includes teaching the user how to brush the teeth correctly, wash the face correctly, perform a facial massage and so on. As shown in Fig. 27, when the user taps the tooth-brushing guidance content, the smart mirror 100 displays that content in the form of a video or pictures. This makes the smart mirror 100 more practical.
It should be noted that in the above embodiments the second confirmation unit 181 is the same unit across cute-face processing, virtual makeup, 2D mask rendering, 3D cartoon character rendering, virtual glasses rendering and virtual hairstyle rendering; that is to say, the second confirmation unit 181 can execute steps S1813, S1833, S1843, S1854, S1864 and/or S1874. Likewise, the fourth processing unit 182 is the same unit across cute-face processing, beauty processing, virtual makeup, 2D mask rendering, 3D cartoon character rendering, virtual glasses rendering and virtual hairstyle rendering; that is to say, the fourth processing unit 182 can execute steps S1814, S1823, S1834, S1844, S1855, S1865 and/or S1875.
In addition, the control method, controller 10 and smart mirror 100 of embodiments of the present invention can perform one or more of cute-face processing, beauty processing, virtual makeup, 2D mask rendering, 3D cartoon character rendering, virtual glasses rendering and virtual hairstyle rendering, either simultaneously or in sequence. For example, the controller 10 can apply cute-face processing, beauty processing and 3D cartoon character rendering to the second image at the same time, or apply beauty processing, virtual makeup, virtual glasses rendering and virtual hairstyle rendering to the second image in that order. In some embodiments, the processing order of the image processing methods can be changed at will.
Referring to Fig. 28, the smart mirror 100 of embodiments of the present invention includes one or more processors 30, a memory 40 and one or more programs 41, the one or more programs 41 being stored in the memory 40 and configured to be executed by the one or more processors 30. The programs 41 include instructions for executing the control method of any of the embodiments described above.
For example, the programs 41 include instructions for executing the following steps:
S12: controlling the camera 20 to capture an image of the current user;
S14: determining whether the current user is a registered user;
S16: when the current user is a registered user, logging the current user in to the smart mirror 100; and
S18: controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information.
The computer readable storage medium of embodiments of the present invention includes a computer program for use in combination with an electronic device capable of displaying images. The computer program can be executed by a processor to perform the control method of any of the embodiments described above.
For example, the processor can be configured to execute the following steps:
S12: controlling the camera 20 to capture an image of the current user;
S14: determining whether the current user is a registered user;
S16: when the current user is a registered user, logging the current user in to the smart mirror 100; and
S18: controlling, according to the current user's input, the smart mirror 100 to interact with the current user and output interaction information.
In summary, the control method, controller 10, smart mirror 100 and computer readable storage medium of embodiments of the present invention can provide registered users with a variety of interactive functions, including cute-face processing, beauty processing, virtual makeup, 2D mask rendering, 3D cartoon character rendering, virtual glasses rendering, virtual hairstyle rendering and daily life care guidance. In this way, the smart mirror 100 gains more functions, becomes more practical, and is more fun to use, and the user experience is improved.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. Moreover, provided they do not contradict each other, those skilled in the art may combine different embodiments or examples described in this specification, and features of different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance, or as implicitly indicating the number of the technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, for example two or three, unless specifically and definitely stated otherwise.
Any process or method description in a flowchart, or otherwise described herein, can be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logic functions or steps of the process; and the scope of the preferred embodiments of the present invention includes further implementations in which functions may be executed out of the order shown or discussed, including substantially simultaneously or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logic functions, can be embodied in any computer readable medium for use by, or in connection with, an instruction execution system, apparatus or device, such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device. For the purposes of this specification, a "computer readable medium" can be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable manner, and then stored in a computer memory.
It should be understood that parts of the present invention can be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art can be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program can be stored in a computer readable storage medium, and when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention can be integrated into one processing module, or each unit can exist physically on its own, or two or more units can be integrated into one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk or the like. Although embodiments of the present invention have been shown and described above, it can be understood that these embodiments are exemplary and shall not be construed as limiting the present invention; within the scope of the present invention, those of ordinary skill in the art can change, modify, substitute and vary the above embodiments.

Claims (31)

  1. A control method for controlling a smart mirror, characterized in that the smart mirror comprises a camera, and the control method comprises:
    controlling the camera to capture an image of the current user;
    determining whether the current user is a registered user;
    when the current user is a registered user, logging the current user in to the smart mirror; and
    controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information.
  2. The control method according to claim 1, characterized in that the smart mirror comprises a registration library, the registration library comprising registered feature information of the registered face regions of all the registered users; and the step of determining whether the current user is a registered user comprises:
    processing a first image of the current user captured by the camera to obtain a face region of the current user to be tested;
    processing the face region to be tested to obtain feature points of the face region to be tested;
    processing the feature points to be tested to extract feature information of the face region to be tested;
    comparing the feature information to be tested with the registered feature information to obtain a comparison result; and
    confirming that the current user is a registered user when the comparison result is greater than a predetermined threshold.
  3. The control method according to claim 2, characterized in that the control method comprises:
    controlling the camera to capture a second image of the current user; and
    controlling the smart mirror to display the second image.
  4. The control method according to claim 3, characterized in that the interaction comprises applying cute-face processing to the second image, and the smart mirror comprises a cute-face material library; the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    processing the second image to obtain a cute-face region of the current user;
    processing the cute-face region to obtain cute-face feature points of the cute-face region;
    determining a cute-face material according to the current user's input; and
    matching and fusing the cute-face material with the second image according to the cute-face feature points to obtain a cute-face image.
  5. The control method according to claim 3, characterized in that the interaction comprises applying beauty processing to the second image, and the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    processing the second image to obtain a beauty face region of the current user;
    processing the beauty face region to obtain beauty feature points of the beauty face region; and
    applying beauty processing to the second image according to the current user's input and the beauty feature points to obtain a beauty image.
  6. The control method according to claim 5, characterized in that the beauty processing comprises one or more of a whitening filter, a ruddy filter, a face-slimming module and an eye-enlarging module.
  7. The control method according to claim 3, characterized in that the interaction comprises applying virtual makeup processing to the second image, and the smart mirror comprises a makeup material library; the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    processing the second image to obtain a makeup try-on face region of the current user;
    processing the makeup try-on face region to obtain makeup feature points of the makeup try-on face region;
    determining a makeup material according to the current user's input; and
    matching and fusing the makeup material with the second image according to the makeup feature points to obtain a virtual makeup image.
  8. The control method according to claim 7, characterized in that the makeup material comprises one or more of eye shadow material, eyeliner material, blush material, lip gloss material and eyebrow material.
  9. The control method according to claim 3, characterized in that the interaction comprises applying 2D mask rendering to the second image, and the smart mirror comprises a 2D mask material library; the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    processing the second image to obtain a 2D mask rendering face region of the current user;
    processing the 2D mask rendering face region to obtain 2D mask rendering feature points of the 2D mask rendering face region;
    determining a 2D mask material according to the current user's input; and
    matching and fusing the 2D mask material with the second image according to the 2D mask rendering feature points to obtain a 2D mask rendering image.
  10. The control method according to claim 3, characterized in that the interaction comprises applying 3D cartoon character rendering to the second image, and the smart mirror comprises a 3D engine, a generic 3D face model and a 3D cartoon character material library; the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    processing the second image to obtain a 3D cartoon character rendering face region of the current user;
    processing the 3D cartoon character rendering face region to obtain 3D cartoon character rendering feature points of the 3D cartoon character rendering face region;
    obtaining a first pose parameter of the current user according to the generic 3D face model and the 3D cartoon character rendering feature points;
    determining a 3D cartoon character material according to the current user's input; and
    controlling the 3D engine to apply 3D cartoon character rendering to the second image according to the first pose parameter and the 3D cartoon character material.
  11. The control method according to claim 3, characterized in that the interaction comprises applying virtual glasses rendering to the second image, and the smart mirror comprises a 3D engine, a generic 3D face model and a virtual glasses material library; the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    processing the second image to obtain a virtual glasses rendering face region of the current user;
    processing the virtual glasses rendering face region to obtain virtual glasses rendering feature points of the virtual glasses rendering face region;
    obtaining a second pose parameter of the current user according to the generic 3D face model and the virtual glasses rendering feature points;
    determining a virtual glasses material according to the current user's input; and
    controlling the 3D engine to apply virtual glasses rendering to the second image according to the second pose parameter and the virtual glasses material.
  12. The control method according to claim 3, characterized in that the interaction comprises applying virtual hairstyle rendering to the second image, and the smart mirror comprises a 3D engine, a generic 3D face model and a virtual hairstyle material library; the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    processing the second image to obtain a virtual hairstyle rendering face region of the current user;
    processing the virtual hairstyle rendering face region to obtain virtual hairstyle rendering feature points of the virtual hairstyle rendering face region;
    obtaining a third pose parameter of the current user according to the generic 3D face model and the virtual hairstyle rendering feature points;
    determining a virtual hairstyle material according to the current user's input; and
    controlling the 3D engine to apply virtual hairstyle rendering to the second image according to the third pose parameter and the virtual hairstyle material.
  13. The control method according to claim 1, characterized in that the interaction comprises providing the current user with daily life care guidance, and the step of controlling, according to the current user's input, the smart mirror to interact with the current user and output interaction information comprises:
    providing the current user with daily life care guidance according to the user's input.
  14. The control method according to claim 1, characterized in that the control method further comprises:
    controlling the camera to capture an image of the registered user; and
    creating a personal record file of the registered user according to the registered user's input.
  15. A controller for controlling a smart mirror, characterized in that the smart mirror comprises a camera, and the controller comprises:
    a control device configured to control the camera to capture an image of the current user;
    a determining device configured to determine whether the current user is a registered user;
    a login device configured to log the current user in to the smart mirror when the current user is a registered user; and
    an interaction device configured to control, according to the current user's input, the smart mirror to interact with the current user and output interaction information.
  16. The controller according to claim 15, characterized in that the smart mirror comprises a registration library, the registration library comprising registered feature information of the registered face regions of all the registered users; and the determining device comprises:
    a first processing unit configured to process a first image of the current user captured by the camera to obtain a face region of the current user to be tested;
    a second processing unit configured to process the face region to be tested to obtain feature points of the face region to be tested;
    a third processing unit configured to process the feature points to be tested to extract feature information of the face region to be tested;
    a comparison unit configured to compare the feature information to be tested with the registered feature information to obtain a comparison result; and
    a first confirmation unit configured to confirm that the current user is a registered user when the comparison result is greater than a predetermined threshold.
  17. The controller according to claim 16, characterized in that the control device is further configured to:
    control the camera to capture a second image of the current user; and
    control the smart mirror to display the second image.
  18. The controller according to claim 17, characterized in that the interaction comprises applying cute-face processing to the second image, and the smart mirror comprises a cute-face material library;
    the first processing unit is further configured to process the second image to obtain a cute-face region of the current user;
    the second processing unit is further configured to process the cute-face region to obtain cute-face feature points of the cute-face region;
    and the interaction device comprises:
    a second confirmation unit configured to determine a cute-face material according to the current user's input; and
    a fourth processing unit configured to match and fuse the cute-face material with the second image according to the cute-face feature points to obtain a cute-face image.
  19. The controller according to claim 17, characterized in that the interaction comprises applying beauty processing to the second image;
    the first processing unit is further configured to process the second image to obtain a beauty face region of the current user;
    the second processing unit is further configured to process the beauty face region to obtain beauty feature points of the beauty face region;
    and the interaction device comprises:
    a fourth processing unit configured to apply beauty processing to the second image according to the current user's input and the beauty feature points to obtain a beauty image.
  20. The controller according to claim 19, characterized in that the beauty processing comprises one or more of a whitening filter, a ruddy filter, a face-slimming module and an eye-enlarging module.
  21. The controller according to claim 17, characterized in that the interaction comprises applying virtual makeup processing to the second image, and the smart mirror comprises a makeup material library;
    the first processing unit is further configured to process the second image to obtain a makeup try-on face region of the current user;
    the second processing unit is further configured to process the makeup try-on face region to obtain makeup feature points of the makeup try-on face region;
    and the interaction device comprises:
    a second confirmation unit configured to determine a makeup material according to the current user's input; and
    a fourth processing unit configured to match and fuse the makeup material with the second image according to the makeup feature points to obtain a virtual makeup image.
  22. The controller according to claim 21, characterized in that the makeup material comprises one or more of eye shadow material, eyeliner material, blush material, lip gloss material and eyebrow material.
  23. The controller according to claim 17, characterized in that the interaction comprises applying 2D mask rendering to the second image, and the smart mirror comprises a 2D mask material library;
    the first processing unit is further configured to process the second image to obtain a 2D mask rendering face region of the current user;
    the second processing unit is further configured to process the 2D mask rendering face region to obtain 2D mask rendering feature points of the 2D mask rendering face region;
    and the interaction device comprises:
    a second confirmation unit configured to determine a 2D mask material according to the current user's input; and
    a fourth processing unit configured to match and fuse the 2D mask material with the second image according to the 2D mask rendering feature points to obtain a 2D mask rendering image.
  24. The controller according to claim 17, characterized in that the interaction comprises applying 3D cartoon character rendering to the second image, and the smart mirror comprises a 3D engine, a generic 3D face model and a 3D cartoon character material library;
    the first processing unit is further configured to process the second image to obtain a 3D cartoon character rendering face region of the current user;
    the second processing unit is further configured to process the 3D cartoon character rendering face region to obtain 3D cartoon character rendering feature points of the 3D cartoon character rendering face region;
    and the interaction device comprises:
    an acquisition unit configured to obtain a first pose parameter of the current user according to the generic 3D face model and the 3D cartoon character rendering feature points;
    a second confirmation unit configured to determine a 3D cartoon character material according to the current user's input; and
    a fourth processing unit configured to control the 3D engine to apply 3D cartoon character rendering to the second image according to the first pose parameter and the 3D cartoon character material.
  25. The controller according to claim 17, characterized in that the interaction comprises applying virtual glasses rendering to the second image, and the smart mirror comprises a 3D engine, a generic 3D face model and a virtual glasses material library;
    the first processing unit is further configured to process the second image to obtain a virtual glasses rendering face region of the current user;
    the second processing unit is further configured to process the virtual glasses rendering face region to obtain virtual glasses rendering feature points of the virtual glasses rendering face region;
    and the interaction device comprises:
    an acquisition unit configured to obtain a second pose parameter of the current user according to the generic 3D face model and the virtual glasses rendering feature points;
    a second confirmation unit configured to determine a virtual glasses material according to the current user's input; and
    a fourth processing unit configured to control the 3D engine to apply virtual glasses rendering to the second image according to the second pose parameter and the virtual glasses material.
  26. The controller according to claim 17, characterized in that the interaction comprises applying virtual hairstyle rendering to the second image, and the smart mirror comprises a 3D engine, a generic 3D face model and a virtual hairstyle material library;
    the first processing unit is further configured to process the second image to obtain a virtual hairstyle rendering face region of the current user;
    the second processing unit is further configured to process the virtual hairstyle rendering face region to obtain virtual hairstyle rendering feature points of the virtual hairstyle rendering face region;
    and the interaction device comprises:
    an acquisition unit configured to obtain a third pose parameter of the current user according to the generic 3D face model and the virtual hairstyle rendering feature points;
    a second confirmation unit configured to determine a virtual hairstyle material according to the current user's input; and
    a fourth processing unit configured to control the 3D engine to apply virtual hairstyle rendering to the second image according to the third pose parameter and the virtual hairstyle material.
  27. The controller according to claim 15, characterized in that the interaction comprises providing the current user with daily life care guidance, and the interaction device comprises:
    a guidance unit configured to provide the current user with daily life care guidance according to the user's input.
  28. The controller according to claim 15, characterized in that the control device is further configured to:
    control the camera to capture an image of the registered user;
    and the controller further comprises a creating device configured to create a personal record file of the registered user according to the registered user's input.
  29. A smart mirror, characterized in that the smart mirror comprises:
    a camera; and
    the controller according to any one of claims 15 to 28, the controller being electrically connected to the camera.
  30. A smart mirror, characterized in that the smart mirror comprises:
    one or more processors;
    a memory; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for executing the control method according to any one of claims 1 to 14.
  31. A computer readable storage medium comprising a computer program for use in combination with an electronic device capable of displaying images, the computer program being executable by a processor to perform the control method according to any one of claims 1 to 14.
PCT/CN2017/087979 2017-06-12 2017-06-12 Control method, controller, smart mirror and computer readable storage medium WO2018227349A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2018566586A JP2019537758A (ja) 2017-06-12 2017-06-12 Control method, controller, smart mirror and computer readable storage medium
KR1020197003099A KR20190022856A (ko) 2017-06-12 2017-06-12 Control method, controller, smart mirror and computer readable storage medium
CN201780001849.2A CN107820591A (zh) 2017-06-12 2017-06-12 Control method, controller, smart mirror and computer readable storage medium
EP17913946.4A EP3462284A4 (en) 2017-06-12 2017-06-12 CONTROL METHOD, CONTROL DEVICE, INTELLIGENT MIRROR AND COMPUTER READABLE STORAGE MEDIUM
PCT/CN2017/087979 WO2018227349A1 (zh) 2017-06-12 2017-06-12 Control method, controller, smart mirror and computer readable storage medium
US16/234,174 US20190130652A1 (en) 2017-06-12 2018-12-27 Control method, controller, smart mirror, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/087979 WO2018227349A1 (zh) 2017-06-12 2017-06-12 Control method, controller, smart mirror and computer readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/234,174 Continuation US20190130652A1 (en) 2017-06-12 2018-12-27 Control method, controller, smart mirror, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2018227349A1 true WO2018227349A1 (zh) 2018-12-20

Family

ID=61606897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087979 WO2018227349A1 (zh) 2017-06-12 2017-06-12 Control method, controller, smart mirror and computer readable storage medium

Country Status (6)

Country Link
US (1) US20190130652A1 (zh)
EP (1) EP3462284A4 (zh)
JP (1) JP2019537758A (zh)
KR (1) KR20190022856A (zh)
CN (1) CN107820591A (zh)
WO (1) WO2018227349A1 (zh)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305317B (zh) * 2017-08-04 2020-03-17 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device, and storage medium
KR101972331B1 (ko) * 2017-08-29 2019-04-25 Kitten Planet Co., Ltd. Image alignment method and apparatus therefor
CN108937407A (zh) * 2018-05-25 2018-12-07 Shenzhen Saiyi Technology Development Co., Ltd. Smart mirror makeup guidance method and system
CN109034063A (zh) * 2018-07-27 2018-12-18 Beijing Microlive Vision Technology Co., Ltd. Multi-face tracking method and device for face special effects, and electronic device
CN109597480A (zh) * 2018-11-06 2019-04-09 Beijing Qihoo Technology Co., Ltd. Human-computer interaction method and device, electronic device, and computer readable storage medium
CN109671142B (zh) * 2018-11-23 2023-08-04 Nanjing Tuwan Intelligent Technology Co., Ltd. Intelligent makeup method and intelligent makeup mirror
CN109543646A (zh) * 2018-11-30 2019-03-29 Shenzhen Lianmeng Technology Co., Ltd. Face image processing method and device, electronic device, and computer storage medium
CN109875227B (zh) * 2019-01-22 2024-06-04 Hangzhou Xiaofu Technology Co., Ltd. Multifunctional safe intelligent dressing mirror
US20200342987A1 (en) * 2019-04-26 2020-10-29 doc.ai, Inc. System and Method for Information Exchange With a Mirror
CN110941333A (zh) * 2019-11-12 2020-03-31 Beijing ByteDance Network Technology Co., Ltd. Eye-movement-based interaction method and device, medium, and electronic apparatus
FI20207082A (fi) * 2020-05-11 2021-11-12 Dentview Oy Device for oral hygiene counselling
CN111768479B (zh) * 2020-07-29 2021-05-28 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device, computer equipment, and storage medium
CN111882673A (zh) * 2020-07-29 2020-11-03 Beijing Xiaomi Mobile Software Co., Ltd. Display control method and device, mirror comprising a display screen, and storage medium
CN114187649A (zh) * 2020-08-24 2022-03-15 Huawei Technologies Co., Ltd. Skin care assistance method, device, and storage medium
KR20220028529A (ko) * 2020-08-28 2022-03-08 LG Electronics Inc. Smart mirror apparatus and method, and system thereof
CN112099712B (zh) * 2020-09-17 2022-06-07 Beijing ByteDance Network Technology Co., Ltd. Face image display method and device, electronic device, and storage medium
JP7414707B2 (ja) * 2020-12-18 2024-01-16 Toyota Motor Corporation Image display system
US11430281B1 (en) * 2021-04-05 2022-08-30 International Business Machines Corporation Detecting contamination propagation
CN113240799B (zh) * 2021-05-31 2022-12-23 Shanghai Sucheng Denture Co., Ltd. Tooth three-dimensional model construction system based on medical big data


Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7016824B2 (en) * 2001-02-06 2006-03-21 Geometrix, Inc. Interactive try-on platform for eyeglasses
JP2004234571A (ja) * 2003-01-31 2004-08-19 Sony Corp Image processing device, image processing method, and imaging device
JP4645411B2 (ja) * 2005-10-28 2011-03-09 Konica Minolta Holdings, Inc. Authentication system, registration system, and program
JP2009064423A (ja) * 2007-08-10 2009-03-26 Shiseido Co Ltd Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
US20090231356A1 (en) * 2008-03-17 2009-09-17 Photometria, Inc. Graphical user interface for selection of options from option groups and methods relating to same
US10872535B2 (en) * 2009-07-24 2020-12-22 Tutor Group Limited Facilitating facial recognition, augmented reality, and virtual reality in online teaching groups
US20110304629A1 (en) * 2010-06-09 2011-12-15 Microsoft Corporation Real-time animation of facial expressions
US9330483B2 (en) * 2011-04-11 2016-05-03 Intel Corporation Avatar facial expression techniques
US20130145272A1 (en) * 2011-11-18 2013-06-06 The New York Times Company System and method for providing an interactive data-bearing mirror interface
CN109288333B (zh) * 2012-12-18 2021-11-30 EyesMatch Ltd. Apparatus, system and method for capturing and displaying appearance
JP6389888B2 (ja) * 2013-08-04 2018-09-12 EyesMatch Ltd. Device, system and method for virtualization in a mirror
CN108537628B (zh) * 2013-08-22 2022-02-01 Bespoke, Inc. Method and system for creating customized products
CN104598445B (zh) * 2013-11-01 2019-05-10 Tencent Technology (Shenzhen) Co., Ltd. Automatic question answering system and method
CN105744854B (zh) * 2013-11-06 2020-07-03 Koninklijke Philips N.V. System and method for guiding a user during shaving
JP2015111372A (ja) * 2013-12-06 2015-06-18 Hitachi Systems, Ltd. Hairstyle decision support system and hairstyle decision support device
JP6375755B2 (ja) * 2014-07-10 2018-08-22 Furyu Corporation Photo sticker creating device and display method
US9240077B1 (en) * 2014-03-19 2016-01-19 A9.Com, Inc. Real-time visual effects for a live camera view
JP6320143B2 (ja) * 2014-04-15 2018-05-09 Toshiba Corporation Health information service system
US9760935B2 (en) * 2014-05-20 2017-09-12 Modiface Inc. Method, system and computer program product for generating recommendations for products and treatments
US9881303B2 (en) * 2014-06-05 2018-01-30 Paypal, Inc. Systems and methods for implementing automatic payer authentication
EP3198561A4 (en) * 2014-09-24 2018-04-18 Intel Corporation Facial gesture driven animation communication system
CN105512599A (zh) * 2014-09-26 2016-04-20 Shulun Computer Technology (Shanghai) Co., Ltd. Face recognition method and face recognition system
CN104223858B (zh) * 2014-09-28 2016-04-13 Guangzhou Shirui Electronics Co., Ltd. Self-recognition smart mirror
CN104834849B (zh) * 2015-04-14 2018-09-18 Beijing Yuanjian Technology Co., Ltd. Two-factor identity authentication method and system based on voiceprint recognition and face recognition
KR101613038B1 (ko) * 2015-06-01 2016-04-15 Kim Hyung Min Smart mirror system displaying customized advertisements
US11741639B2 (en) * 2016-03-02 2023-08-29 Holition Limited Locating and augmenting object features in images
US20180137663A1 (en) * 2016-11-11 2018-05-17 Joshua Rodriguez System and method of augmenting images of a user
CN106682578B (zh) * 2016-11-21 2020-05-05 Beijing Jiaotong University Low-light face recognition method based on blink detection
CN106773852A (zh) * 2016-12-19 2017-05-31 Beijing Xiaomi Mobile Software Co., Ltd. Smart mirror and work control method and device thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160357578A1 (en) * 2015-06-03 2016-12-08 Samsung Electronics Co., Ltd. Method and device for providing makeup mirror
CN106412458A (zh) * 2015-07-31 2017-02-15 ZTE Corporation Image processing method and device
CN105095917A (zh) * 2015-08-31 2015-11-25 Xiaomi Inc. Image processing method and device, and terminal
CN105426730A (zh) * 2015-12-28 2016-03-23 Xiaomi Inc. Login verification processing method and device, and terminal equipment
CN105956576A (zh) * 2016-05-18 2016-09-21 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image beautification method and device, and mobile terminal
CN106161962A (zh) * 2016-08-29 2016-11-23 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and terminal
CN107340856A (zh) * 2017-06-12 2017-11-10 Midea Group Co., Ltd. Control method, controller, smart mirror and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3462284A4 *

Also Published As

Publication number Publication date
EP3462284A1 (en) 2019-04-03
KR20190022856A (ko) 2019-03-06
CN107820591A (zh) 2018-03-20
JP2019537758A (ja) 2019-12-26
EP3462284A4 (en) 2019-07-17
US20190130652A1 (en) 2019-05-02

Similar Documents

Publication Publication Date Title
WO2018227349A1 (zh) Control method, controller, smart mirror and computer readable storage medium
KR102241153B1 (ko) Method, apparatus and system for generating a three-dimensional avatar from a two-dimensional image
US9734628B2 (en) Techniques for processing reconstructed three-dimensional image data
WO2021147920A1 (zh) Makeup processing method and apparatus, electronic device, and storage medium
US20160134840A1 (en) Avatar-Mediated Telepresence Systems with Enhanced Filtering
US20110304629A1 (en) Real-time animation of facial expressions
US9202312B1 (en) Hair simulation method
JP2005038375A (ja) Eye shape classification method, shape classification map, and eye makeup method
CN111968248A (zh) Intelligent makeup method and apparatus based on a virtual image, electronic device, and storage medium
CN108932654A (zh) Virtual makeup try-on guidance method and apparatus
CN105069180A (zh) Hairstyle design method and system
CN110866139A (zh) Makeup processing method, apparatus, and device
CN116744820A (zh) Digital makeup artist
WO2022257766A1 (zh) Image processing method and apparatus, device, and medium
KR101719927B1 (ko) Real-time makeup mirror simulation apparatus using Leap Motion
Danieau et al. Automatic generation and stylization of 3d facial rigs
WO2018094506A1 (en) Semi-permanent makeup system and method
KR20230118191A (ko) Digital makeup artist
WO2021155666A1 (zh) Method and apparatus for generating an image
CN109876457A (zh) Game character generation method and device, and storage medium
US20230101374A1 (en) Augmented reality cosmetic design filters
US11908098B1 (en) Aligning user representations
US20240221292A1 (en) Light normalization in combined 3d user representations
Rivera et al. Development of an automatic expression recognition system based on facial action coding system
Wood Gaze Estimation with Graphics

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018566586

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017913946

Country of ref document: EP

Effective date: 20181227

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17913946

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20197003099

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE