US20170187988A1 - System and method for image processing - Google Patents
- Publication number
- US20170187988A1 (application Ser. No. 15/065,988)
- Authority
- US
- United States
- Prior art keywords
- image
- makeup
- adviser
- advice
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- G06K9/00248—
-
- G06K9/00255—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/0076—Body hygiene; Dressing; Knot tying
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H04N5/23219—
-
- H04N5/23222—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/101—Collaborative creation, e.g. joint development of products or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Abstract
The present invention provides a system for image processing that is capable of analogizing a composition part of an object at low cost but with high performance by using the guide display unit. The user terminal 100 includes an imaging module 111 that takes an image of an object, a guide display module 131 that displays a guide for imaging a composition part composing the object, and an analogy module 141 that analogizes the composition part. Screen sharing between the user terminal 100 and the adviser terminal 200 enables the makeup adviser to offer the user advice in real time.
Description
- This application claims priority to Japanese Patent Application No. 2015-250757 filed on Dec. 23, 2015, the entire contents of which are incorporated by reference herein.
- The present invention relates to a system and a method for image processing to analogize parts composing the face of an object.
- Various technologies that support users' makeup have conventionally been proposed. For example, Patent Document 1 describes a display technology that enables users to put on makeup and adjust their clothes while checking their image on a display. Moreover, Patent Document 2 describes a technology that supports users' makeup based on three-dimensional information acquired by three-dimensional measurement.
- Patent Document 1: JP 2013-223001A
- Patent Document 2: JP 2015-197710A
- However, the method of Patent Document 1 may fail to accurately superpose a taken image on the image to be compared, depending on the performance of the recognition tool that recognizes a specific portion.
- The method of Patent Document 2 may recognize a face more easily than that of Patent Document 1 thanks to three-dimensional information, but has the problem that an expensive device is required to project a pattern for measurement from a projector.
- Both Patent Documents 1 and 2 enable users only to check their own makeup from an image by themselves. However, users may desire to receive advice from a specialist on detailed procedures and on how to handle the cosmetics on hand when they want to put on makeup in a way different from usual or to learn new makeup.
- In view of the above-mentioned problems, an objective of the present invention is to provide a system and a method for image processing to analogize parts composing the face of an object.
- The first aspect of the present invention provides a system for image processing, including:
- an imaging unit that images an object;
- a guide display unit that displays a guide to image a composition part composing the object; and
- an analogy unit that analogizes the composition part from the image of the object.
- According to the first aspect of the present invention, a system for image processing includes; an imaging unit that images an object; a guide display unit that displays a guide to image a composition part composing the object; and an analogy unit that analogizes the composition part from the image of the object.
- The first aspect of the present invention is the category of a system for image processing, but the category of a method for image processing has similar functions and effects.
- The second aspect of the present invention provides the system according to the first aspect of the present invention, in which the guide display unit indicates any one of a size, a direction, or a position of the object when the object is a person's face.
- According to the second aspect of the present invention, in the system according to the first aspect of the present invention, the guide display unit indicates any one of a size, a direction, or a position of the object when the object is a person's face.
- The third aspect of the present invention provides the system according to the first aspect of the present invention, in which the analogy unit analogizes a type of the composition part by image recognition of the image of the object and adds text information to the analogized composition part.
- According to the third aspect of the present invention, in the system according to the first aspect of the present invention, the analogy unit analogizes a type of the composition part by image recognition of the image of the object and adds text information to the analogized composition part.
- The fourth aspect of the present invention provides the system according to the first aspect of the present invention, in which the analogy unit displays that an object is successfully imaged when analogizing one or more composition parts.
- According to the fourth aspect of the present invention, in the system according to the first aspect of the present invention, the analogy unit displays that an object is successfully imaged when analogizing one or more composition parts.
- The fifth aspect of the present invention provides the system according to the first aspect of the present invention, further including a makeup reference image display unit that displays a makeup reference image of each analogized composition part.
- According to the fifth aspect of the present invention, the system according to the first aspect of the present invention further includes a makeup reference image display unit that displays a makeup reference image of each analogized composition part.
- The sixth aspect of the present invention provides the system according to the fifth aspect of the present invention further including a makeup reference image application unit that selects an image from the makeup reference image and applies the selected image to the image of the object.
- According to the sixth aspect of the present invention, the system according to the fifth aspect of the present invention further includes a makeup reference image application unit that selects an image from the makeup reference image and applies the selected image to the image of the object.
- The seventh aspect of the present invention provides the system according to the sixth aspect of the present invention further including an advice unit that offers the object advice about how to put on makeup of the selected makeup reference image.
- According to the seventh aspect of the present invention, the system according to the sixth aspect of the present invention further includes an advice unit that offers the object advice about how to put on makeup of the selected makeup reference image.
- The eighth aspect of the present invention provides the system according to the seventh aspect of the present invention, in which the advice unit offers advice in real time through screen sharing between the object and a makeup adviser.
- According to the eighth aspect of the present invention, in the system according to the seventh aspect of the present invention, the advice unit offers advice in real time through screen sharing between the object and a makeup adviser.
- The ninth aspect of the present invention provides a method for image processing, including the steps of:
- imaging an object;
- displaying a guide to image a composition part composing the object; and
- analogizing the composition part from the image of the object.
- The present invention can provide a system and a method for image processing to analogize parts composing the face of an object.
- FIG. 1 shows a schematic diagram of a preferable embodiment of the present invention.
- FIG. 2 shows a functional block diagram of the user terminal 100 to show the relationship among the functions.
- FIG. 3 shows a flow chart of the composition part analogy process performed by the user terminal 100.
- FIG. 4 shows a flow chart of the composition part analogy result display process performed by the user terminal 100.
- FIG. 5 shows a functional block diagram of the user terminal 100 with an advice function to show the relationship among the functions.
- FIG. 6 shows a flow chart of the advice process performed by the user terminal 100.
- FIG. 7 shows a functional block diagram of the user terminal 100 with a makeup adviser's advice function and the adviser terminal 200 to show the relationship among the functions.
- FIG. 8 shows a functional block diagram of the user terminal 100, the adviser terminal 200, and the server 300 to illustrate the relationship among the functions.
- FIG. 9 shows a flow chart of the makeup adviser's advice process.
- FIG. 10 shows a flow chart of the makeup adviser's advice process performed in real time through screen sharing.
- FIG. 11 shows a flow chart of the makeup adviser's advice process performed by email, etc.
- FIG. 12 shows one example of the guide display function of the user terminal 100.
- FIG. 13 shows one example display when an image is taken by using the guide display function of the user terminal 100.
- FIG. 14 shows one example of the makeup reference image displayed on the user terminal 100.
- FIG. 15 shows one example display when the selected makeup reference image is applied to an image that the user terminal 100 took.
- FIG. 16 shows one example advice displayed on the user terminal 100.
- FIG. 17 shows one example of the adviser information displayed on the user terminal 100.
- FIG. 18 shows one example display during screen sharing between the user terminal 100 and the adviser terminal 200.
- FIG. 19 shows a schematic diagram of the makeup adviser's advice process performed by email, etc., between the user terminal 100 and the adviser terminal 200.
- Embodiments of the present invention will be described below with reference to the attached drawings. However, this is illustrative only, and the technological scope of the present invention is not limited thereto.
- The overview of the present invention will be described below with reference to FIG. 1. The user terminal 100 includes a camera unit 110, an input unit 120, an output unit 130, a memory unit 140, and a control unit 150 as shown in FIG. 2. The camera unit 110 includes an imaging module 111. The output unit 130 achieves a guide display module 131 in cooperation with the control unit 150. The memory unit 140 achieves an analogy module 141 in cooperation with the control unit 150.
- FIG. 3 shows a flow chart of the composition part analogy process performed by the user terminal 100. The process shown in FIG. 1 is performed based on this flow chart, which will also be explained below.
- In the composition part analogy process, the user terminal 100 first displays a guide by using the guide display module 131 to support a user to take an image (step S101).
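As an illustration of the guide display in step S101, the guide lines of FIG. 12 can be reduced to simple geometry scaled to the screen size. The following Python sketch is not the patent's implementation; the part names echo guides 1203 to 1209, and all proportions are assumed values.

```python
def face_guides(width, height):
    """Return hypothetical guide shapes (ellipses and lines) scaled to the
    screen size, loosely mirroring guides 1203-1209 in FIG. 12.
    All proportions here are illustrative assumptions."""
    cx, cy = width // 2, height // 2
    eye_y = int(height * 0.42)  # assumed height of the horizontal eye line 1209
    return {
        "right_eye":  ("ellipse", (int(width * 0.35), eye_y, int(width * 0.10), int(height * 0.03))),
        "left_eye":   ("ellipse", (int(width * 0.65), eye_y, int(width * 0.10), int(height * 0.03))),
        "nose":       ("ellipse", (cx, int(height * 0.55), int(width * 0.07), int(height * 0.06))),
        "mouth":      ("ellipse", (cx, int(height * 0.68), int(width * 0.14), int(height * 0.04))),
        "contour":    ("ellipse", (cx, cy, int(width * 0.30), int(height * 0.38))),
        "vertical":   ("line", ((cx, 0), (cx, height))),       # vertical line 1208
        "horizontal": ("line", ((0, eye_y), (width, eye_y))),  # horizontal line 1209
    }

# Guides for a portrait 720x1280 screen
guides = face_guides(720, 1280)
```

A drawing layer would then render each shape over the camera preview; the dictionary form also makes the guide positions available to the later analogy step.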
- FIG. 12 shows one example of the guide display function of the user terminal 100. The message 1202 asks a user to fit the face parts to the guide lines displayed on the screen to take a facial image of an object. Examples of the guide lines shown in FIG. 12 include a right eye 1203, a left eye 1204, a nose 1205, a mouth 1206, a facial contour 1207, a vertical line 1208 passing through the center of the face, and a horizontal line 1209 passing through the center of the eyes. These guide lines are useful for a user to image the face in an appropriate size, direction, and position, and are effective for detecting a composition part with a high probability in a short time by searching for parts near the guide lines in the following composition part analogy process. In this embodiment, the guide lines are simply displayed as straight lines and ovals but may be free curves or adjustable curves. Moreover, the eyes and the eyebrows may be separately displayed. The guide lines may also be displayed as an outline of each composition part obtained by averaging data extracted from a large amount of facial image data. FIG. 12 shows a guide for taking a frontal face image. However, another guide may be prepared for taking a lateral or an oblique face image.
- Then, the imaging module 111 takes an image of an object (step S102). In this embodiment, the imaging is started by selecting the camera mark 1201 shown in FIG. 12, for example. The image may be taken by oneself with the in-camera of a smart phone or a tablet, or by other people with a camera (out-camera) on the back side. Alternatively, the image in a mirror may be taken by oneself. An image to be taken is superposed on the guide as shown in FIG. 1 so as to take the image in an appropriate size, direction, and position.
- Finally, the analogy module 141 analogizes the composition parts from the taken image of the object (step S103). As an image of an object, a still or moving image may be taken. When a moving image is taken, a still image only has to be captured from the moving image at regular time intervals to analogize the composition parts from the still image. The composition parts herein are the parts composing the face, such as a right eye, a left eye, a nose, a mouth, and the entire face. The types and the number of the composition parts may be appropriately set depending on the system. For example, the eyes may be separated from the eyebrows.
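The capture of still images from a moving image "at regular time intervals" described above can be sketched as plain frame sampling. This is a minimal illustration under the assumption that the video has already been decoded into a frame sequence; the function name and parameters are hypothetical.

```python
def sample_stills(frames, fps, interval_seconds):
    """Pick one frame every `interval_seconds` from a decoded frame sequence.
    `frames` can be any sequence (e.g. decoded images); here the contents
    are placeholder indices, since video decoding is outside this sketch."""
    step = max(1, int(fps * interval_seconds))
    return frames[::step]

# 10 seconds of 30 fps video, represented by frame indices only
video = list(range(300))
stills = sample_stills(video, fps=30, interval_seconds=2)  # every 60th frame
```

Each sampled still would then be handed to the analogy step, so the composition parts can be tracked without analyzing every frame.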
-
- FIG. 2 shows a functional block diagram of the user terminal 100 to show the relationship among the functions. The user terminal 100 includes a camera unit 110, an input unit 120, an output unit 130, a memory unit 140, and a control unit 150. The camera unit 110 includes an imaging module 111. The output unit 130 achieves the guide display module 131 in cooperation with the control unit 150. The memory unit 140 achieves the analogy module 141 in cooperation with the control unit 150.
- The user terminal 100 may be a general information terminal with which a user can take an image by using the camera, that is, an information device or an electrical appliance with the functions to be described later. For example, the user terminal 100 may be a general information appliance such as a mobile phone, a smart phone, a tablet PC, a notebook, a wearable device, or an electronic appliance that has a camera function or connectivity with an external camera such as a web camera. The smart phone shown as the user terminal 100 in the attached drawings is just one example.
- The user terminal 100 includes a camera in the camera unit 110 to achieve the imaging module 111. The imaging module 111 converts a taken image into digital data and stores it in the memory unit 140. The image may be a still image or a moving image. If the image is a moving image, the control unit 150 can capture a part of the moving image and store it in the memory unit 140 as a still image. The obtained image is an accurate image with as much information as the user needs. The pixel count and the image quality can be specified.
- The input unit 120 has the functions necessary to instruct the terminal to display the above-mentioned guide and take an image. The input unit 120 may include a liquid crystal display to achieve a touch panel function, a keyboard, a mouse, a pen tablet, a hardware button on the device, and a microphone to perform voice recognition. The features of the present invention are not limited in particular by the input method.
- The output unit 130 achieves the guide display module 131 in cooperation with the control unit 150. The output unit 130 has the functions necessary to display the guide and the image to be taken. The output unit 130 may take forms such as a liquid crystal display, a PC display, and a projector. The features of the present invention are not limited in particular by the output method.
- The memory unit 140 includes a data storage unit such as a hard disk or a semiconductor memory to store taken moving and still images, data necessary for the analogy process, image data and text information on analogized composition parts, etc. The memory unit 140 achieves the analogy module 141 in cooperation with the control unit 150.
- The control unit 150 includes a central processing unit (hereinafter referred to as "CPU"), a random access memory (hereinafter referred to as "RAM"), and a read only memory (hereinafter referred to as "ROM").
- FIG. 4 shows a flow chart of the composition part analogy result display process performed by the user terminal 100. The composition part analogy result is displayed in addition to the above-mentioned composition part analogy process. The differences from the flow chart of FIG. 3 will be explained below.
- The guide display (step S201), the imaging (step S202), and the composition part analogy (step S203) of FIG. 4 are the same as the guide display (step S101), the imaging (step S102), and the composition part analogy (step S103) of FIG. 3.
- If one or more composition parts are detected by the composition part analogy in step S203, the image is recognized as having been taken according to the guide, and an analogy result of the composition parts is displayed on the output unit 130 of the user terminal 100 (step S204).
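The decision in step S204 amounts to checking whether the analogy step returned at least one composition part. A trivial sketch, with hypothetical message strings:

```python
def analogy_result_message(detected_parts):
    """Mirror the branch of step S204: if at least one composition part was
    analogized, report success; otherwise ask for a retake.
    The message texts are illustrative, not the patent's actual UI wording."""
    if detected_parts:
        return "Imaging successful: " + ", ".join(sorted(detected_parts))
    return "No composition part found: please retake the image along the guide."

msg = analogy_result_message({"left eye", "nose"})
```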
- FIG. 13 shows one example display when an image is taken by using the guide display function of the user terminal 100. If any or all of the composition parts (a right eye, a left eye, a nose, a mouth, and the entire face) are found by the composition part analogy (step S203), the image is recognized as having been taken according to the guide, and the message 1301 that the imaging has been successful is displayed. In this case, the taken image may be subjected to the following process, so a confirmation button 1302 and a cancellation button 1303 may be displayed together. If the confirmation button 1302 is selected, the currently displayed image is subjected to the following makeup reference image application process, etc. If the cancellation button 1303 is selected, an image will be taken again.
- FIG. 5 shows a functional block diagram of the user terminal 100 with an advice function to show the relationship among the functions. In addition to the functions of FIG. 2, the output unit 130 achieves a makeup reference image display module 132, a makeup reference image application module 133, and an advice module 134 in cooperation with the control unit 150.
- FIG. 6 shows a flow chart of the advice process performed by the user terminal 100. The guide display (step S301), the imaging (step S302), the composition part analogy (step S303), and the analogy result display (step S304) of FIG. 6 are the same as the guide display (step S201), the imaging (step S202), the composition part analogy (step S203), and the analogy result display (step S204) of FIG. 4.
- If the advice process is performed, the makeup reference image display module 132 displays the makeup reference image (step S305) after the analogy result is displayed in step S304.
- FIG. 14 shows one example of the makeup reference images displayed on the user terminal 100. A list of makeup reference images is displayed for each analogized composition part. In FIG. 14, the message 1401 states that referable sample makeup images of each composition part analogized based on the recognition result are displayed. In this embodiment, the entire face, a mouth, a right eye, a left eye, a nose, and a cheek are recognized and displayed. The number of referable sample makeup images of each composition part is displayed, and the representative sample makeup images are shown as thumbnails. In FIG. 14, the user selects a composition part for which he or she wishes to know the detailed makeup method.
- If the left eye is selected in FIG. 14, the screen shown in FIG. 15 is displayed. FIG. 15 shows one example display when the selected makeup reference image is applied to an image that the user terminal 100 took. The area 1501 in FIG. 15 displays a list of makeup reference images of the left eye selected in FIG. 14. At this time, the text information on the composition part added to the makeup reference images helps the process run at high speed. The area 1501 displays only five makeup reference images but can display all the makeup reference images by scrolling the images right and left. In FIG. 15, the area 1501 displays only makeup reference images of the left eye but may display makeup reference images of all the composition parts, depending on the system. Moreover, the makeup reference images may be displayed not for each analogized composition part but for each makeup adviser.
- Then, one of the makeup reference images displayed in the area 1501 is selected (step S306), and the makeup reference image application module 133 applies the makeup of the selected makeup reference image to the taken image (step S307). FIG. 15 shows an example case in which a selected makeup reference image 1503 is applied to the left eye area 1502. At this point, the message 1504 that the selected makeup has been applied to the taken image is displayed. If satisfied with the selected makeup reference image, the user selects the confirmation button 1505 to proceed to the following advice process. If the user wishes to select another makeup reference image, the user selects the cancellation button 1506 to select a makeup reference image again.
- After the confirmation button 1505 is selected, the advice module 134 outputs advice on the selected makeup reference image (step S308).
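Step S307, applying the selected makeup reference image to the taken image, could be realized in many ways; the patent does not fix a compositing method. One plausible sketch is alpha blending the reference patch over the target region (for example, the left eye area 1502), shown here on grayscale arrays with assumed names and an assumed alpha value:

```python
import numpy as np

def apply_makeup(photo, reference_patch, top_left, alpha=0.6):
    """Blend a makeup reference patch onto a region of the taken image.
    This is one plausible realization of step S307, not the patented method.
    Arrays are float grayscale for brevity; color images would blend per channel."""
    out = photo.copy()  # leave the original photo untouched
    y, x = top_left
    h, w = reference_patch.shape
    region = out[y:y+h, x:x+w]
    out[y:y+h, x:x+w] = (1 - alpha) * region + alpha * reference_patch
    return out

photo = np.zeros((8, 8))
patch = np.ones((2, 2))
styled = apply_makeup(photo, patch, top_left=(3, 4), alpha=0.5)
```

Keeping the original photo intact matches the cancellation flow of FIG. 15: if the cancellation button 1506 is selected, the unmodified image is simply redisplayed.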
- FIG. 16 shows one example advice displayed on the user terminal 100. The display area 1601 displays the taken image to which the makeup reference image is applied. The message 1602 is a brief explanation of the makeup method. If the execution button 1603 is selected, the makeup method is displayed in detail, step by step, with images and messages. If the demonstration image reproduction button 1606 is selected, the makeup method is displayed with a video image. If the cancellation button 1604 is selected, the makeup reference image display of step S305 or the guide display of step S301 is repeated. Moreover, the area 1605 displays information on the cosmetics used, based on the makeup reference image. The cosmetics information may link to an online shopping website, etc., on the Internet.
- FIG. 7 shows a functional block diagram of the user terminal 100 with a makeup adviser's advice function and the adviser terminal 200 to show the relationship among the functions.
- The user terminal 100 includes a communication unit 160 in addition to the above-mentioned units shown in FIG. 5. The user terminal 100 can transmit images and voices of the display on the output unit 130 through the communication unit 160. The user terminal 100 can also transmit the stored still and moving images, input character strings, voice messages, etc., through e-mail, a short message service (hereinafter referred to as "SMS"), a chat service, a social networking service (hereinafter referred to as "SNS"), etc.
- The communication unit 160 includes a Wireless Fidelity (Wi-Fi®) enabled device complying with, for example, IEEE 802.11, or a wireless device complying with the IMT-2000 standard such as the third generation mobile communication system. The communication unit may include a wired device for LAN connection.
- For the makeup adviser's advice process, the input unit 120 of the user terminal 100 preferably also has a microphone function, etc., to conduct voice interaction with the adviser terminal 200.
- The
adviser terminal 200 that a makeup adviser uses includes a camera unit 210, an input unit 220, an output unit 230, a memory unit 240, a control unit 250, and a communication unit 260. The camera unit 210 includes an advice imaging module 211. The output unit 230 achieves a user terminal image display module 231 in cooperation with the control unit 250. The communication unit 260 achieves an advice transmitting module 261 in cooperation with the control unit 250. - The
adviser terminal 200 may be a general information terminal with which a makeup adviser can take an image by using the camera in the same way as the user terminal 100, which is an information device or an electrical appliance with the functions to be described later. For example, the adviser terminal 200 may be a general information appliance such as a mobile phone, a smart phone, a tablet PC, a notebook, or a wearable device, or an electronic appliance which has a camera function or connectivity with an external camera such as a web camera. The smart phone shown as the adviser terminal 200 in the attached drawings is just one example. - The
adviser terminal 200 has a camera in the camera unit 210 provided with an advice imaging module 211. The advice imaging module 211 converts a taken image into digital data and stores it in the memory unit 240. The image may be a still image or a moving image. If the image is a moving image, the control unit 250 can capture a part of the moving image and store it in the memory unit 240 as a still image. The obtained image is accurate and contains as much information as the system needs. The pixel count and the image quality can be specified. - The
input unit 220 has the input functions necessary to advise the user. The input unit 220 may include a liquid crystal display to achieve a touch panel function, a keyboard, a mouse, a pen tablet, and a hardware button on the device. The input unit 220 preferably also has a microphone function, etc., to hold voice interactions with the user terminal 100. The features of the present invention are not limited in particular by an input method. - The
output unit 230 achieves a user terminal image display module 231 in cooperation with the control unit 250. The user terminal image display module 231 displays some or all of the images being output to the output unit 130 of the user terminal 100 on the output unit 230 of the adviser terminal 200. The output unit 230 may take forms such as a liquid crystal display, a PC display, and a projector. The features of the present invention are not limited in particular by an output method. - The
memory unit 240 includes a data storage unit such as a hard disk or a semiconductor memory to store necessary information such as taken moving and still images, moving or still images received from the user terminal 100, and voices. - The
control unit 250 includes a CPU, a RAM, and a ROM. - The communication unit 260 includes a Wi-Fi® enabled device complying with, for example, IEEE 802.11, or a wireless device complying with the IMT-2000 standard such as the third generation mobile communication system. The communication unit may include a wired device for LAN connection. The communication unit 260 achieves the
advice transmitting module 261 in cooperation with the control unit 250. The advice transmitting module 261 can transmit images and voices of the display on the output unit 230. The advice transmitting module 261 can also transmit the stored still and moving images, input character strings and voice messages, etc., through E-mail, SMS, a chat service, SNS, etc. -
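As an illustrative sketch only (not the patent's implementation — the class, field, and channel names below are assumptions), the advice transmitting module 261 can be modeled as a payload of mixed media handed to one of the supported delivery channels:

```python
from dataclasses import dataclass, field

# Hypothetical model of an advice payload: still/moving images, text, and
# voice messages bundled together, as described for the transmitting module.
@dataclass
class AdvicePayload:
    text: str = ""
    still_images: list = field(default_factory=list)
    moving_images: list = field(default_factory=list)
    voice_messages: list = field(default_factory=list)

class AdviceTransmitter:
    # The description names E-mail, SMS, chat services, and SNS as channels.
    SUPPORTED_CHANNELS = {"email", "sms", "chat", "sns"}

    def transmit(self, payload: AdvicePayload, channel: str) -> dict:
        if channel not in self.SUPPORTED_CHANNELS:
            raise ValueError(f"unsupported channel: {channel}")
        # A real system would hand off to the mail/SMS/chat client here;
        # this sketch just returns a delivery record for the caller.
        items = (len(payload.still_images) + len(payload.moving_images)
                 + len(payload.voice_messages) + (1 if payload.text else 0))
        return {"channel": channel, "items": items}
```

For example, an adviser's text note with one still image and one voice message transmitted over e-mail yields a three-item delivery record.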
FIG. 8 shows a functional block diagram of the user terminal 100, the adviser terminal 200, and the server 300 to illustrate the relationship among the functions. The user terminal 100, the adviser terminal 200, and the server 300 are connected through a communication network 500. The communication network 500 may be a public or private line network. The user terminal 100 may be directly connected with the adviser terminal 200 through peer-to-peer communication as appropriate. - The
server 300 may be a general server provided with the functions to be described later, which includes a communication unit 310, a control unit 320, and a memory unit 330. - The communication unit 310 includes a wired device for LAN connection, a Wi-Fi® enabled device complying with, for example, IEEE 802.11, or a wireless device complying with the IMT-2000 standard such as the third generation mobile communication system. The communication unit 310 achieves an adviser's
status check module 311, an adviser information transmitting module 312, and a user request transmitting module 313 in cooperation with the control unit 320 and the memory unit 330. - The
control unit 320 includes a CPU, a RAM, and a ROM. The control unit 320 may include an analogy module 321 in cooperation with the memory unit 330. The analogy module 321 analogizes a composition part in the same way as the analogy module 141 of the user terminal 100. To perform the analogy process in the server 300, the image of an object that the user terminal 100 took is transmitted to the server 300 through the communication unit 160 of the user terminal 100. The server 300 receives the image through the communication unit 310, stores the received image in the memory unit 330, and performs the composition part analogy process on the image. The result of the composition part analogy process is transmitted to the user terminal 100 through the communication unit 310. The analogy result is displayed on the output unit 130 of the user terminal 100. Advantages of performing the composition part analogy process in the server 300 are that a large amount of data necessary for pattern matching can be accumulated in the memory unit 330 and that the latest process can be applied by updating only the analogy module 321 of the server 300 when the composition part analogy process is updated. - The
memory unit 330 includes a data storage unit such as a hard disk or a semiconductor memory. The memory unit 330 has a user information database 331 and an adviser information database 332 to store the information necessary for the system. -
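The server-side analogy round-trip described above (upload, store, pattern-match, return result) can be sketched as follows. This is a hedged toy illustration under stated assumptions — the patent does not disclose the matching algorithm, so a simple template-similarity comparison stands in for the pattern matching against data accumulated in the memory unit 330, and all names are hypothetical:

```python
def similarity(a, b):
    """Toy similarity: fraction of matching entries in two equal-length vectors."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

class AnalogyServer:
    def __init__(self):
        # Stand-in for the memory unit 330: a template store kept server-side
        # so that a large amount of pattern-matching data can be accumulated.
        self.templates = {}        # part name -> {type label: template vector}
        self.received_images = []  # received images are stored before analysis

    def register_template(self, part, label, vector):
        self.templates.setdefault(part, {})[label] = vector

    def analogize(self, image_regions):
        """image_regions: part name -> pixel vector cropped along the guide lines."""
        self.received_images.append(image_regions)  # store, then analyze
        result = {}
        for part, region in image_regions.items():
            candidates = self.templates.get(part, {})
            if candidates:
                # Pick the best-matching registered type for each composition part.
                result[part] = max(candidates,
                                   key=lambda lbl: similarity(candidates[lbl], region))
        return result  # in the system, sent back for display on the output unit 130
```

Updating only `templates` (or the matching logic) on the server corresponds to the stated advantage that the latest analogy process applies without changing the user terminal.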
FIG. 9 shows a flow chart of the makeup adviser's advice process. The guide display (step S401), the imaging (step S402), the composition part analogy (step S403), the analogy result display (step S404), the makeup reference image display (step S405), the makeup reference image selection (step S406), and the makeup application to a taken image (step S407) of FIG. 9 are the same as the guide display (step S301), the imaging (step S302), the composition part analogy (step S303), the analogy result display (step S304), the makeup reference image display (step S305), the makeup reference image selection (step S306), and the makeup application to a taken image (step S307) of FIG. 6. - In
FIG. 9, the makeup adviser's advice process is performed (step S408) instead of or after the advice process of FIG. 6. The makeup adviser's advice process may be performed in real time through screen sharing or may be performed by E-mail, etc. These cases will be explained below. -
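Purely as an illustration (the function and flag names are assumptions, not elements of the claimed invention), the choice between the two delivery modes of step S408 reduces to a small dispatch rule:

```python
# Hypothetical dispatcher for step S408: advice is delivered either in real
# time via screen sharing (FIG. 10 path) or asynchronously via E-mail, SMS,
# a chat service, SNS, etc. (FIG. 11 path).
def dispatch_advice(adviser_online: bool, user_wants_realtime: bool) -> str:
    if adviser_online and user_wants_realtime:
        return "screen_sharing"
    return "email"  # stands in for any asynchronous channel
```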
FIG. 10 shows a flow chart of the makeup adviser's advice process of the step S408 performed in real time through screen sharing. - First, the
user terminal 100 requests adviser information from the server 300 to receive advice from an adviser (step S501). This is to acquire adviser information if two or more advisers exist. - The
server 300 checks the status of the advisers registered in the adviser information database 332 by using the adviser's status check module 311 in response to the request (step S502). The status of all the advisers registered in the adviser information database 332 may be checked. Alternatively, a weekday and a time at which an adviser can respond may be previously registered in the adviser information database, and only the status of an adviser who can respond at that time may be checked. - The
adviser terminal 200 that has received the check from the server 300 judges whether or not the makeup adviser can respond (step S503). This judgment may be performed based on a response that the makeup adviser inputs to a question from the adviser terminal 200, or based on the adviser's logged-in status, in which a status such as online, away from computer, busy, or offline is previously set in the same way as in application software offering a chat service, etc. - As a result of the judgment, if it is certain whether or not the adviser can respond, the
adviser terminal 200 transmits the adviser's status to the server 300 (step S504). However, if the adviser's status is uncertain because, for example, the makeup adviser does not input any response to the question from the adviser terminal 200, the status is checked at regular intervals until a response is received. - The
server 300 receives the adviser's status from the adviser terminal 200 (step S505). - The adviser
information transmitting module 312 generates adviser information based on the received data and the data in the adviser information database 332 and transmits the generated adviser information to the user terminal 100 (step S506). The adviser information includes the name, profile, face photo, and online or offline status of each adviser, and the makeup reference images that the adviser worked on, which are necessary for the user to select an adviser from the user terminal 100. Moreover, the number of times that the user of the user terminal 100 received advice from each adviser in the past may be added to the adviser information by looking up the user information database 331, etc. - The
user terminal 100 displays the adviser information on the output unit 130 based on the received adviser information (step S507). -
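The status check and adviser information generation (steps S502 to S506) can be sketched as below. This is a hypothetical illustration under stated assumptions: the schedule fields, status strings, and database shapes are invented for the example, since the patent leaves them open.

```python
from datetime import datetime

# Stand-in for the adviser information database 332: advisers may pre-register
# the weekdays and hours at which they can respond (field names are assumptions).
ADVISER_DB = [
    {"name": "A", "weekdays": {0, 1, 2, 3, 4}, "hours": range(9, 18)},   # weekdays
    {"name": "B", "weekdays": {5, 6}, "hours": range(12, 20)},           # weekends
]

def advisers_to_check(now: datetime):
    """Step S502 alternative: only poll advisers whose schedule covers 'now'."""
    return [a["name"] for a in ADVISER_DB
            if now.weekday() in a["weekdays"] and now.hour in a["hours"]]

def build_adviser_info(statuses: dict, past_advice_counts: dict):
    """Step S506: merge statuses reported by adviser terminals with counts
    looked up in the user information database 331."""
    return [{"name": n, "status": s, "past_advice": past_advice_counts.get(n, 0)}
            for n, s in statuses.items()]
```

The resulting list is what the user terminal 100 would render in step S507, optionally re-ordered by priority (selected reference image's adviser, online status, popularity, past advice count) as described for FIG. 17.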
FIG. 17 shows one example of the adviser information displayed on the user terminal 100. In this example, the name, profile, face photo, and online or offline status of each adviser, and the makeup reference images that the adviser worked on are displayed. The order in which advisers are displayed can be appropriately set by the system and the user, for example by prioritizing the adviser who worked on the makeup reference image selected this time, the advisers in the online status, the popularity ranking, the number of times that the user received advice in the past, etc. In FIG. 17, the message 1701 asks which makeup adviser the user wants advice from. The user selects a makeup adviser in reference to the profile 1702 of the adviser A, the profile 1703 of the adviser B, the profile 1704 of the adviser C, etc., and transmits an advice request to the server 300 (step S508). In this example, the user selects the adviser A. - The
server 300 receives this advice request and transmits a user request to the adviser terminal 200 of the adviser A by using the user request transmitting module 313 (step S509). The request data from a user includes a real-time advice request through screen sharing and information on the makeup reference image selected this time. Moreover, the name, gender, age, favorite makeup reference images, etc., of the user may be previously registered in the user information database 331 and transmitted together with the advice request. - The
adviser terminal 200 checks whether or not to respond to the real-time advice request from the user (step S510). - If responding to the user request, the
adviser terminal 200 establishes communication with the user terminal 100 and performs screen sharing (step S511). The communication may be established over a typical public or private line network or through direct peer-to-peer communication. - If not responding to the user request within a certain time, or if rejecting the user request, the
adviser terminal 200 returns the process to the adviser status transmission of the step S504 and notifies the user terminal 100, through the server 300, that the selected adviser cannot respond to the real-time advice request. The server 300 updates and transmits the adviser information to the user terminal 100 to allow the user to select an adviser again. -
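The accept/reject/timeout handling of steps S509 to S511 can be sketched as a polling loop with a fallback, as a hypothetical illustration only (the `poll` callback, timeout values, and return strings are assumptions, not disclosed by the patent):

```python
import time

def request_realtime_advice(poll, timeout_s=30.0, interval_s=5.0,
                            clock=time.monotonic):
    """poll() stands in for querying the adviser terminal; it returns
    "accept", "reject", or None (the adviser has not answered yet)."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        answer = poll()
        if answer == "accept":
            return "establish_screen_sharing"   # step S511
        if answer == "reject":
            return "reselect_adviser"           # back to adviser selection
        time.sleep(interval_s)                  # check again at a regular interval
    # No answer within the certain time: treat the adviser as unavailable so
    # the server can refresh adviser information and the user can pick again.
    return "reselect_adviser"
```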
FIG. 18 shows one example of the display during screen sharing between the user terminal 100 and the adviser terminal 200. The adviser who uses the adviser terminal 200 is denoted by "1802," and the user who uses the user terminal 100 is denoted by "1801." The user terminal image display module 231 displays some or all of the images being output to the output unit 130 of the user terminal 100 on the output unit 230 of the adviser terminal 200. The adviser terminal 200 takes an advice image by using the advice imaging module 211 of the adviser terminal 200 and transmits advice by using the advice transmitting module 261 (step S512). The advice includes methods of selecting and using cosmetics and a specific makeup procedure based on the image that the user 1801 took and the makeup reference image that the user 1801 selected. The advice includes a still image, a moving image, a voice, and a reference URL as data. - The
user terminal 100 receives the advice (step S513) and outputs the advice to the output unit 130 by using the advice module 134. - The image taken by the
imaging module 111 and the image taken by the advice imaging module 211 are output on the user terminal 100 and the adviser terminal 200, together with voices, during screen sharing. This enables the user 1801 to put on makeup according to the received advice. The screen sharing can be ended from any one of the user terminal 100, the adviser terminal 200, and the server 300; the one which ends screen sharing only has to notify the other two. The screen sharing method does not limit the scope of the present invention; any existing technology is applicable. -
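The session-ending rule above — whichever of the three parties ends screen sharing notifies the other two — can be sketched as follows, with the notification transport abstracted away (the party identifiers reuse the reference numerals; everything else is an assumption):

```python
# The three endpoints that may end screen sharing, per the description above.
PARTIES = ("user_terminal_100", "adviser_terminal_200", "server_300")

def end_screen_sharing(ender: str) -> dict:
    """Whichever party ends the session notifies the remaining two parties."""
    if ender not in PARTIES:
        raise ValueError(f"unknown party: {ender}")
    notified = [p for p in PARTIES if p != ender]
    return {"ended_by": ender, "notified": notified}
```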
FIG. 11 shows a flow chart of the makeup adviser's advice process of the step S408 performed by E-mail, etc. - The process from the step S601 to the step S607 in
FIG. 11 is the same as that from the step S501 to the step S507 in FIG. 10. -
FIG. 19 shows a schematic diagram of the makeup adviser's advice process performed by E-mail, etc., between the user terminal 100 and the adviser terminal 200. The user who uses the user terminal 100 is denoted by "1901," and the adviser who uses the adviser terminal 200 is denoted by "1902." - If the
user 1901 selects an offline adviser based on the adviser information, or chooses advice by E-mail, etc., rather than real-time advice through screen sharing, the user terminal 100 transmits the advice request mail 1903 (step S608). The advice request mail 1903 may contain stored still images and moving images, input character strings, voice messages, etc. Advice by E-mail is given here as an example of advice not offered through screen sharing. However, the advice does not have to be offered by E-mail; other communication services such as SMS, a chat service, and SNS are applicable in the same way as E-mail. - The
adviser terminal 200 receives the advice request mail 1903 from the user (step S609) and outputs the request to the output unit 230. - The
adviser terminal 200 takes an advice image in response to the request mail by using the advice imaging module 211 or inputs an advice message, etc., by using the input unit 220. The adviser terminal 200 attaches still images, moving images, input character strings, voice messages, etc., to the advice mail 1904 and transmits it (step S610). - The
user terminal 100 receives the advice mail 1904 (step S611) and outputs the advice mail 1904 to the output unit 130 by using the advice module 134. - The advice request mail and the advice mail can be exchanged two or more times, as required, between the
user terminal 100 and the adviser terminal 200. - To achieve the means and the functions that are described above, a computer (including a CPU, an information processor, and various terminals) reads and executes a predetermined program. For example, the program is provided in a form recorded in a computer-readable medium such as a flexible disk, a CD (e.g., CD-ROM), a DVD (e.g., DVD-ROM, DVD-RAM), or a compact memory. In this case, the computer reads the program from the record medium, forwards and stores the program to an internal or an external storage, and executes it. The program may be previously recorded in, for example, a storage (record medium) such as a magnetic disk, an optical disk, or a magneto-optical disk and provided from the storage to the computer through a communication line.
- The embodiments of the present invention are described above. However, the present invention is not limited to the above-mentioned embodiments. The effects described in the embodiments of the present invention are only the most preferable effects produced by the present invention; the effects of the present invention are not limited to those described in the embodiments.
- 100 User terminal
- 200 Adviser terminal
- 300 Server
- 500 Communication network
Claims (14)
1. A system for image processing, comprising:
a guide display unit that displays a guide to image a plurality of composition parts composing a person's face, the guide including a plurality of guide lines that correspond to the plurality of composition parts, respectively;
an imaging unit that images the face after the plurality of composition parts are fitted to the plurality of guide lines;
an analogy unit that analogizes the composition parts from the image of the face based on the guide lines;
a makeup reference image display unit that displays a list of makeup reference images for each analogized composition part; and
a makeup reference image application unit that selects a makeup reference image from among the list of makeup reference images for each analogized composition part and applies the selected makeup reference image to a corresponding composition part in the image of the face.
2. The system according to claim 1, wherein the guide display unit indicates any one of a size, a direction, or a position of the face by the guide lines.
3. The system according to claim 1, wherein the analogy unit analogizes a type of each of the composition parts by image recognition of the image of the face and adds text information to the analogized composition parts.
4. The system according to claim 1, wherein the analogy unit displays that the face is successfully imaged when analogizing one or more of the composition parts.
5-6. (canceled)
7. The system according to claim 1, further comprising an advice unit that offers advice about how to put on makeup of the selected makeup reference image.
8. The system according to claim 7, wherein the advice unit offers advice in real time through screen sharing between the object and a makeup adviser.
9. A method for image processing, comprising:
displaying a guide to image a plurality of composition parts composing a person's face, the guide including a plurality of guide lines that correspond to the plurality of composition parts, respectively;
imaging the face after the plurality of composition parts are fitted to the plurality of guide lines;
analogizing the composition parts from the image of the face based on the guide lines;
displaying a list of makeup reference images for each of the analogized composition parts; and
selecting a makeup reference image from among the list of makeup reference images for each analogized composition part and applying the selected makeup reference image to a corresponding composition part in the image of the face.
10. The method according to claim 9, wherein the guide indicates any one of a size, a direction, or a position of the face by the guide lines.
11. The method according to claim 9, wherein analogizing the composition parts comprises analogizing a type of each of the composition parts by image recognition of the image of the face and adding text information to the analogized composition parts.
12. The method according to claim 9, further comprising displaying that the face is successfully imaged when analogizing one or more of the composition parts.
13-14. (canceled)
15. The method according to claim 9, further comprising offering advice about how to put on makeup of the selected makeup reference image.
16. The method according to claim 15, wherein offering the advice comprises offering the advice in real time through screen sharing between the object and a makeup adviser.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015250757A JP6200483B2 (en) | 2015-12-23 | 2015-12-23 | Image processing system, image processing method, and image processing program |
JP2015-250757 | 2015-12-23 |
Publications (2)
Publication Number | Publication Date |
---|---|
US9674485B1 (en) | 2017-06-06 |
US20170187988A1 (en) | 2017-06-29 |
Family
ID=58776518
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/065,988 Active US9674485B1 (en) | 2015-12-23 | 2016-03-10 | System and method for image processing |
Country Status (2)
Country | Link |
---|---|
US (1) | US9674485B1 (en) |
JP (1) | JP6200483B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3764633A1 (en) * | 2019-07-12 | 2021-01-13 | Beijing Xiaomi Mobile Software Co., Ltd. | Photographing method, device, storage medium, and computer program |
WO2022001882A1 (en) * | 2020-06-28 | 2022-01-06 | 华为技术有限公司 | Information prompting method and terminal device |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6212533B2 (en) * | 2015-12-28 | 2017-10-11 | 株式会社オプティム | Screen sharing system, screen sharing method, and screen sharing program |
WO2018109796A1 (en) * | 2016-12-12 | 2018-06-21 | 株式会社オプティム | Remote control system, remote control method, and program |
JP7200139B2 (en) | 2017-07-13 | 2023-01-06 | 株式会社 資生堂 | Virtual face makeup removal, fast face detection and landmark tracking |
CN109288233A (en) * | 2017-07-25 | 2019-02-01 | 丽宝大数据股份有限公司 | It is signable to repair the biological information analytical equipment for holding region |
CN108256432A (en) * | 2017-12-20 | 2018-07-06 | 歌尔股份有限公司 | A kind of method and device for instructing makeup |
CN110119868B (en) * | 2018-02-06 | 2023-05-16 | 英属开曼群岛商玩美股份有限公司 | System and method for generating and analyzing user behavior indexes in makeup consultation conference |
US10691932B2 (en) | 2018-02-06 | 2020-06-23 | Perfect Corp. | Systems and methods for generating and analyzing user behavior metrics during makeup consultation sessions |
CN110149301A (en) * | 2018-02-13 | 2019-08-20 | 英属开曼群岛商玩美股份有限公司 | System and method for the color make-up advisory meeting based on event |
CN110570224A (en) * | 2018-06-06 | 2019-12-13 | 英属开曼群岛商玩美股份有限公司 | System, method and storage medium for execution on computing device |
CN110149475B (en) * | 2018-06-21 | 2021-09-21 | 腾讯科技(深圳)有限公司 | Image shooting method and device, electronic device, storage medium and computer equipment |
US10863812B2 (en) * | 2018-07-18 | 2020-12-15 | L'oreal | Makeup compact with eye tracking for guidance of makeup application |
CN110930173B (en) * | 2018-09-19 | 2022-11-25 | 玩美移动股份有限公司 | System and method for execution on computing device |
US11257142B2 (en) | 2018-09-19 | 2022-02-22 | Perfect Mobile Corp. | Systems and methods for virtual application of cosmetic products based on facial identification and corresponding makeup information |
WO2020090458A1 (en) * | 2018-10-29 | 2020-05-07 | ソニー株式会社 | Display device and display control method |
JP2020088557A (en) * | 2018-11-22 | 2020-06-04 | パナソニックIpマネジメント株式会社 | Skin analyzer |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150366328A1 (en) * | 2013-02-01 | 2015-12-24 | Panasonic Intellectual Property Management Co., Ltd. | Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8107672B2 (en) * | 2006-01-17 | 2012-01-31 | Shiseido Company, Ltd. | Makeup simulation system, makeup simulator, makeup simulation method, and makeup simulation program |
US9405995B2 (en) * | 2008-07-14 | 2016-08-02 | Lockheed Martin Corporation | Method and apparatus for facial identification |
JP5738569B2 (en) * | 2010-10-15 | 2015-06-24 | 任天堂株式会社 | Image processing program, apparatus, system and method |
US9013489B2 (en) * | 2011-06-06 | 2015-04-21 | Microsoft Technology Licensing, Llc | Generation of avatar reflecting player appearance |
FR2985064B1 (en) * | 2011-12-23 | 2016-02-26 | Oreal | METHOD FOR DELIVERING COSMETIC ADVICE |
JP5796523B2 (en) * | 2012-03-27 | 2015-10-21 | 富士通株式会社 | Biological information acquisition apparatus, biological information acquisition method, and biological information acquisition control program |
JP6111528B2 (en) | 2012-04-13 | 2017-04-12 | カシオ計算機株式会社 | Image processing apparatus, image processing method, and program |
US9224248B2 (en) * | 2012-07-12 | 2015-12-29 | Ulsee Inc. | Method of virtual makeup achieved by facial tracking |
GB2505239A (en) * | 2012-08-24 | 2014-02-26 | Vodafone Ip Licensing Ltd | A method of authenticating a user using different illumination conditions |
US20140146190A1 (en) * | 2012-11-28 | 2014-05-29 | Fatemeh Mohammadi | Method And System For Automated Or Manual Evaluation To provide Targeted And Individualized Delivery Of Cosmetic Actives In A Mask Or Patch Form |
JP6132232B2 (en) * | 2013-02-01 | 2017-05-24 | パナソニックIpマネジメント株式会社 | Makeup support device, makeup support system, and makeup support method |
JP6128309B2 (en) * | 2013-02-01 | 2017-05-17 | パナソニックIpマネジメント株式会社 | Makeup support device, makeup support method, and makeup support program |
JP6097679B2 (en) * | 2013-02-28 | 2017-03-15 | エルジー アプラス コーポレーション | Inter-terminal function sharing method and terminal |
JP5372276B1 (en) * | 2013-03-22 | 2013-12-18 | パナソニック株式会社 | Makeup support device, makeup support method, and makeup support program |
US20150058129A1 (en) * | 2013-08-23 | 2015-02-26 | Marshall Feature Recognition Llc | System and method for electronic interaction with merchandising venues |
WO2015029372A1 (en) * | 2013-08-30 | 2015-03-05 | パナソニックIpマネジメント株式会社 | Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program |
JP2015197710A (en) | 2014-03-31 | 2015-11-09 | 株式会社メガチップス | Makeup support device, and program |
WO2015161009A1 (en) * | 2014-04-16 | 2015-10-22 | The Procter & Gamble Company | Device and method for applying a cosmetic composition |
US20150356661A1 (en) * | 2014-06-09 | 2015-12-10 | Jillianne Rousay | Cosmetic matching and recommendations |
-
2015
- 2015-12-23 JP JP2015250757A patent/JP6200483B2/en active Active
-
2016
- 2016-03-10 US US15/065,988 patent/US9674485B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
JP2017118282A (en) | 2017-06-29 |
US9674485B1 (en) | 2017-06-06 |
JP6200483B2 (en) | 2017-09-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: OPTIM CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SUGAYA, SHUNJI;REEL/FRAME:038406/0754 Effective date: 20160420 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |