WO2014171142A1 - Image processing method and image processing apparatus - Google Patents
- Publication number
- WO2014171142A1 (PCT/JP2014/002170)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image processing
- feature point
- mode
- frame
- Prior art date
Links
- 238000012545 processing Methods 0.000 title claims abstract description 157
- 238000003672 processing method Methods 0.000 title claims description 49
- 238000000034 method Methods 0.000 claims description 83
- 238000012790 confirmation Methods 0.000 claims description 71
- 230000008569 process Effects 0.000 claims description 68
- 238000001514 detection method Methods 0.000 claims description 40
- 238000010422 painting Methods 0.000 claims description 14
- 238000004364 calculation method Methods 0.000 claims description 10
- 230000008859 change Effects 0.000 claims description 8
- 230000000717 retained effect Effects 0.000 claims description 8
- 238000001914 filtration Methods 0.000 claims description 7
- 238000013507 mapping Methods 0.000 claims description 6
- 238000012795 verification Methods 0.000 abstract 2
- 238000004088 simulation Methods 0.000 description 38
- 239000011159 matrix material Substances 0.000 description 12
- 230000009466 transformation Effects 0.000 description 12
- 238000004590 computer program Methods 0.000 description 11
- 238000006243 chemical reaction Methods 0.000 description 10
- 210000004709 eyebrow Anatomy 0.000 description 10
- 238000003384 imaging method Methods 0.000 description 8
- 239000002537 cosmetic Substances 0.000 description 7
- 230000001815 facial effect Effects 0.000 description 7
- 230000006870 function Effects 0.000 description 7
- 239000013589 supplement Substances 0.000 description 5
- 239000000470 constituent Substances 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 210000000720 eyelash Anatomy 0.000 description 2
- 210000000744 eyelid Anatomy 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 210000000617 arm Anatomy 0.000 description 1
- 230000004397 blinking Effects 0.000 description 1
- 210000000038 chest Anatomy 0.000 description 1
- 239000011248 coating agent Substances 0.000 description 1
- 238000000576 coating method Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 210000004209 hair Anatomy 0.000 description 1
- 210000003128 head Anatomy 0.000 description 1
- 210000001624 hip Anatomy 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 239000003973 paint Substances 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000010187 selection method Methods 0.000 description 1
- 210000002832 shoulder Anatomy 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A45—HAND OR TRAVELLING ARTICLES
- A45D—HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
- A45D44/00—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
- A45D44/005—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms for selecting or displaying personal cosmetic colours or hairstyle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0416—Control or interface arrangements specially adapted for digitisers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- H04N2005/2726—Means for inserting a foreground image in a background image, i.e. inlay, outlay for simulating a person's appearance, e.g. hair style, glasses, clothes
Definitions
- The present invention relates to an image processing method that recognizes an image and performs image processing, and to an apparatus therefor.
- A makeup simulation technique realizes a virtual made-up face by applying makeup processing to a face image through image processing on a computer.
- Makeup simulation uses face recognition technology to extract feature points such as the mouth, eyes, nose, and face outline; based on these feature points, each makeup item such as lipstick is rendered by image processing and composited onto the face image.
- When the simulation result is displayed in real time on the user's face image captured as video, the user can be given a highly realistic makeup simulation, as if actually applying makeup in front of a mirror (for example, Patent Document 1 and Patent Document 2).
- Such a system typically consists of a monitor visible to the user and a camera, mounted on the monitor, for photographing the user's face.
- The present invention solves the above-described conventional problems, and its object is to provide an image processing method for appropriately selecting a processed image during screen operation.
- The image processing method of the present invention is an image processing method in a system that performs image processing on an input moving image and displays the result of the image processing.
- The method includes a determination process that decides whether the current setting is the operation mode, which displays a still image, or the confirmation mode, which displays a moving image, and mode-specific processing according to the result of that determination.
- In the mode-specific processing for the operation mode, a frame image in which the object of image processing appears appropriately is selected from the frame images constituting the moving image and displayed as a still image, and operations related to image processing of that object are accepted.
- The mode-specific processing in the confirmation mode applies the image processing to each frame of the moving image and displays the result for user confirmation; the image processing specified by the operations received in the operation mode is applied to the object as it appears in each frame of the moving image.
- With the image processing method of the present invention, a still image in which the object of image processing appears appropriately can be selected as the processed image during screen operation.
- FIG. 3 is a block diagram of a makeup simulator in the first embodiment of the present invention.
- An external view of the makeup simulator according to Embodiment 1 of the present invention.
- A flowchart showing the operation mode setting procedure of the makeup simulator in Embodiment 1 of the present invention.
- A flowchart showing the operation.
- A flowchart showing the face feature point detection procedure.
- A flowchart showing the recalculation procedure for the average face feature point positions in Embodiment 2 of the present invention.
- A diagram showing the phases according to mode switching.
- (A) shows the feature point group detected in frame image Fx and frame image Fx+m.
- (B) shows a transformation matrix defining the transformation of the feature points between frame image Fx and frame image Fx+m.
- (A) is a diagram showing the plurality of feature points present in frame image Fx and frame image Fx+m.
- (B) is an example of a transformation matrix.
- (C) is a diagram showing a hand-painted image drawn on a still image.
- A diagram showing the process by which the still image serving as the base of the operation mode is decided when the mode is switched from the confirmation mode to the operation mode.
- A flowchart showing the reception of input related to the makeup process in the operation mode.
- << Knowledge obtained by the inventors >> In a makeup simulator that performs real-time processing, an image of the user is captured by a camera directed at the user, and the captured image is subjected to image processing and displayed on a monitor.
- The user operates a touch panel integrated with the monitor, or a separately provided operation panel, to select cosmetics such as lipstick, blusher, and mascara and to perform application operations on the face.
- During such operations, the user's face direction and line of sight move toward the operation panel or the monitor, so an image in which the face squarely faces the camera cannot be acquired.
- As a result, the detection accuracy of the feature points drops; misplaced feature points cause makeup items to be misplaced, and the quality of the makeup simulation may be degraded.
- In addition, a delay that depends on the processing capacity of the hardware may occur. Since the makeup process is performed for each frame of the moving image, a delay of at least one frame occurs. Furthermore, because feature point detection, decorative part deformation, and composition processing are required, the delay may reach 0.1 seconds or more. In that case, when the user tries to touch their own image on the monitor, the on-screen image reproduces the user's movement with a delay; the image therefore appears to move unexpectedly, making it difficult to touch the intended position.
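The delay figure above can be illustrated with simple arithmetic: one frame period plus the per-frame processing stages. This is only a sketch; the individual stage timings below are assumed example values, not numbers from the specification.

```python
def simulation_latency_ms(fps, stage_times_ms):
    """Estimate end-to-end display latency: at least one frame period
    (the makeup process works frame by frame) plus the per-frame
    processing stages (feature point detection, decorative part
    deformation, composition)."""
    frame_period_ms = 1000.0 / fps
    return frame_period_ms + sum(stage_times_ms)

# At 30 fps with assumed stage timings of 40, 20, and 15 ms, the total
# already exceeds the 0.1 s (100 ms) figure mentioned in the text.
latency = simulation_latency_ms(30, [40.0, 20.0, 15.0])
```

Even modest per-stage costs push the total past the perceptual threshold where the mirror image feels "laggy" to the user.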
- For such operations, an image in which the face is frontal and the eyes look toward the camera is suitable, because the feature points are easy to detect, as described above, and the image is easy for the user to operate on.
- However, when a still image is captured by an explicit imaging operation by the user, such an image is not always obtained: the line of sight or the face may turn toward the operation panel to perform the operation, or the posture may break down during the operation.
- From this, the inventors arrived at the following idea: continue capturing the moving image even after the user performs the operation to switch to still image display, detect the feature points of each frame constituting the moving image, and, once a frame suitable for the makeup-related operations described above is obtained, adopt that frame as the processed image for the makeup process. In this way, the simulation can be based on an image suitable for the makeup process, without being restricted to the frame captured immediately after the user's operation.
- FIG. 1 shows a block diagram of a makeup simulator according to the present embodiment.
- the makeup simulator 1 includes a camera 2, a feature point detection unit 11, a frame memory 12, a control unit 13, an image processing unit 14, and a touch panel monitor 3.
- the camera 2 includes an imaging optical system (not shown), an imaging element (not shown), an A / D conversion circuit (not shown), and the like.
- the imaging optical system includes a focus lens that performs focus control, an exposure control unit using a shutter and a diaphragm, and the like.
- a zoom lens that performs a zoom operation may be provided.
- the imaging element is a photoelectric conversion element composed of a CCD sensor or a CMOS sensor, and images a subject image formed by an imaging optical system and outputs a video signal.
- the A / D conversion circuit is a converter that converts a video signal, which is an analog signal output from the image sensor, into a digital signal.
- the digital data output from the A / D conversion circuit becomes a captured image that is the output of the camera 2.
- the camera 2 outputs digital data as a moving image to the feature point detection unit 11 in units of constituent frame images.
- The feature point detection unit 11 performs face detection on each frame image constituting the digital data of the moving image output from the camera 2, then detects positions such as the eye contour, nose contour, mouth contour, and face contour, and outputs the position information to the control unit 13 and the image processing unit 14 as feature points.
- Specifically, the feature point detection unit 11 first converts the moving image data into images of M vertical × N horizontal pixels in units of frames. Next, using the methods disclosed in Patent Document 1, Patent Document 2, and the like, it detects the face region from the eyes, nose, mouth, contour, hair, and so on, and then detects feature points for each of the facial parts: the eyes, nose, mouth, and contour.
- the feature point detection unit 11 outputs the coordinates of the detected feature points to the control unit 13.
- When the feature point detection unit 11 receives an image output instruction from the control unit 13, it transfers the frame image output from the camera 2 to the frame memory.
- the frame memory 12 receives the frame image output from the feature point detection unit 11 and holds it as a processed image.
- the frame memory 12 outputs the retained processed image to the image processing unit 14.
- The frame memory 12 is realized by a storage device such as a semiconductor memory (e.g., DRAM or flash memory) or a magnetic storage device (e.g., an HDD).
- The control unit 13 manages the operation mode of the makeup simulator 1 and receives and holds the contents of image processing from the user. Specifically, it manages whether the current mode is the confirmation mode or the operation mode and receives mode switching instructions from the user. Further, in the operation mode, it manages whether or not the processed image has been determined; if not, it judges from the feature point coordinates output by the feature point detection unit 11 whether the frame image currently being processed is appropriate as the processed image. In the operation mode, once the processed image has been determined, it accepts user input related to the makeup process and reflects it in subsequent image processing of that frame image.
- the image processing unit 14 performs image processing on the processed image held in the frame memory 12 using the image processing instruction output from the control unit 13 and the feature point coordinates output from the feature point detection unit 11. The result is output to the touch panel monitor 3.
- The touch panel monitor 3 is a display device composed of an LCD (liquid crystal display), a PDP (plasma display panel), organic EL, or the like, and is also a position input device that performs position detection with a touch sensor using, for example, a capacitive or infrared method.
- the touch panel monitor 3 displays an image output from the image processing unit 14 on a display device, and inputs the coordinates to the control unit 13 when the user inputs using a touch sensor.
- Each of the feature point detection unit 11, the control unit 13, and the image processing unit 14 described above is realized by, for example, a programmable device such as a processor with software, or by hardware such as an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit).
- The appearance of the makeup simulator 1 is shown in FIG. The user stands in front of the makeup simulator 1, and an image of the user is taken with the camera 2 located on the touch panel monitor 3.
- An example of the image displayed on the touch panel monitor 3 is shown in FIG.
- The content displayed on the touch panel monitor 3 is divided into an image display portion, which shows the result of image processing applied to the user's image captured by the camera 2, and an operation display portion, which shows a guide for accepting user operations via the touch sensor.
- the operation mode of the makeup simulator 1 includes a “confirmation mode” and an “operation mode”.
- In the confirmation mode, the makeup simulator 1 displays the user's image captured by the camera 2 in the image display portion as a moving image in real time. Since the image after image processing, that is, the simulation result, is displayed before the user's eyes, the makeup simulator 1 acts as a digital mirror with a makeup function: the user can check their face after the makeup simulation just as they would check their face reflected in a mirror.
- In the "operation mode", the makeup simulator 1 accepts user operations via the touch sensor of the touch panel monitor 3.
- The user selects a makeup item (lipstick, blusher, etc.) displayed in the operation display portion and operates or selects its color, application shape, application position, and so on.
- the application shape and application position may be specified for the user's own image displayed in the screen display portion.
- User operation / selection information is input to the control unit 13 via the touch panel monitor 3, and the control unit 13 instructs the image processing unit 14 to change the image processing content in accordance with the operation / selection information.
- the makeup performed on the user's own image displayed on the screen display portion is immediately changed according to the user's operation / selection information. Therefore, the user can immediately confirm the makeup result and can easily perform a makeup simulation.
- the makeup simulator 1 displays a GUI for setting the operation mode on the operation display portion of the touch panel monitor 3 (S1).
- the displayed GUI is, for example, a “playback” icon indicating “confirmation mode” and a “pause” icon indicating “operation mode”.
- The makeup simulator 1 starts the operation of step S1 in response to a user instruction, for example, when the user touches a mode switching icon on the touch panel monitor 3.
- The makeup simulator 1 checks whether the user has made an input selecting an operation mode via the operation display portion of the touch panel monitor 3 (S2). At this time, the selection may be indicated to the user by a method such as inverting the color of the selected icon or highlighting the icon frame.
- When the confirmation mode is selected, the makeup simulator 1 sets the operation mode to the confirmation mode (S4). Specifically, information indicating the "confirmation mode" is written in the operation mode register of the control unit 13.
- When the operation mode is selected, the makeup simulator 1 sets the operation mode to the operation mode (S5). Specifically, information indicating the "operation mode" is written in the operation mode register of the control unit 13.
- After step S4 or S5, the processed image determination flag is initialized (S6). Specifically, the contents of the processed image determination register of the control unit 13 are cleared. This prevents a situation in which a new image cannot be written to the frame memory because an old image is being held there. Details will be described later.
- makeup simulator 1 ends the operation mode setting.
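The mode setting flow of steps S1 through S6 can be sketched as follows. This is a minimal sketch; the class name, attribute names, and string mode values are illustrative assumptions, not from the specification.

```python
class ControlUnit:
    """Holds the operation mode register and the processed image
    determination register managed by the control unit 13."""

    def __init__(self):
        self.operation_mode_register = "confirmation"  # current mode
        self.processed_image_register = None           # determination flag

    def set_mode(self, selected):
        # S2-S5: write the selected mode into the operation mode register.
        if selected == "confirmation":
            self.operation_mode_register = "confirmation"  # S4
        else:
            self.operation_mode_register = "operation"     # S5
        # S6: clear the processed image determination register so that
        # new frames can overwrite the frame memory again.
        self.processed_image_register = None

ctrl = ControlUnit()
ctrl.set_mode("operation")
```

Clearing the register on every mode change is what guarantees that a stale processed image never blocks the frame memory after switching modes.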
- the camera 2 of the makeup simulator 1 outputs one frame image (still image) of a moving image shot of the user (S11).
- control unit 13 of the makeup simulator 1 determines the operation mode of the makeup simulator 1 (S12). Specifically, the control unit 13 reads setting information from the operation mode register.
- When the operation mode is the confirmation mode, the makeup simulator 1 overwrites the frame memory 12 with the still image output from the camera 2 (S13). Specifically, the control unit 13 continues to send an image output instruction to the feature point detection unit 11, and the frame memory 12 receives the still image output from the camera 2 via the feature point detection unit 11 and stores it. In this way, the latest frame image captured by the camera 2 is stored in the frame memory as is.
- When the operation mode is the operation mode, the following processing is performed.
- control unit 13 of the makeup simulator 1 determines whether or not a processed image has been determined (S14). Specifically, the control unit 13 confirms whether or not information indicating that the processed image has been determined is stored in the processed image determination register.
- If the processed image has been determined, the makeup simulator 1 ends the processing for that frame.
- In this case, the control unit 13 does not transmit an image output instruction to the feature point detection unit 11, and the determined processed image, a frame image from one or more frames earlier, remains held in the frame memory as is.
- If the processed image has not been determined, the feature point detection unit 11 of the makeup simulator 1 detects facial feature points from the frame image (S15).
- The process of step S15 will be described with reference to FIG. First, the feature point detection unit 11 detects a face from the frame image, identifies its region, and outputs the coordinates as feature points (S151). Next, the feature point detection unit 11 detects each facial part, such as the eyes, nose, mouth, and eyebrows, from the face region and outputs the coordinates specifying their contours as feature points (S152).
- The control unit 13 of the makeup simulator 1 confirms the positions of the detected feature points (S16). Specifically, the control unit 13 determines the orientation and state of the face from the facial feature point positions and uses them as an index of whether the frame image is appropriate as a base image for image processing such as the makeup process.
- An example of the process in step S16 will be described with reference to FIG.
- Here, the open/closed state of the eyes is confirmed.
- First, the feature point coordinates of the inner eye corner A (Xa, Ya), the outer eye corner B (Xb, Yb), the eye upper edge C (Xc, Yc), and the eye lower edge D (Xd, Yd) are acquired.
- Next, the horizontal size of the eye is calculated (S161).
- The horizontal size Lh of the eye can be calculated as the distance between the inner corner A and the outer corner B: Lh = √((Xa - Xb)² + (Ya - Yb)²).
- Next, the vertical size of the eye is calculated (S162).
- The vertical size Lv of the eye can be calculated as the distance between the eye upper edge C and the eye lower edge D: Lv = √((Xc - Xd)² + (Yc - Yd)²).
- The vertical size is then normalized by the horizontal size to obtain Lvn, and the open/closed state of the eye is confirmed based on whether the normalized vertical size Lvn exceeds a predetermined threshold (S164).
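The check in steps S161 through S164 can be written directly from the four feature point coordinates. This is a sketch; the threshold value is an assumed parameter, not a number from the specification.

```python
import math

def eye_is_open(inner, outer, top, bottom, threshold=0.25):
    """S161: horizontal size Lh = distance from inner corner A to outer
    corner B.  S162: vertical size Lv = distance from upper edge C to
    lower edge D.  Lv is normalized by Lh, and S164 compares the
    normalized value Lvn against a predetermined threshold."""
    lh = math.dist(inner, outer)   # S161: horizontal eye size
    lv = math.dist(top, bottom)    # S162: vertical eye size
    lvn = lv / lh                  # normalization by horizontal size
    return lvn > threshold         # S164: open if ratio is large enough

# Wide-open eye: the vertical extent is a large fraction of the width.
open_eye = eye_is_open((0, 0), (30, 0), (15, 12), (15, -12))
```

Normalizing by Lh makes the check independent of the user's distance from the camera, since both sizes scale together with the face.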
- the control unit 13 of the makeup simulator 1 determines whether or not to determine the frame image as a processed image based on the result of the feature point position confirmation in step S16 (S17).
- If the face orientation and state are appropriate, the frame image is determined as the processed image; otherwise, it is not.
- When the frame image is determined as the processed image, the control unit 13 of the makeup simulator 1 sets the processed image determination flag (S18). Specifically, information indicating that the processed image has been determined is stored in the processed image determination register. As a result, the frame memory is not overwritten by subsequent frame images until the operation mode is switched to the confirmation mode and the flag is cleared, so the frame image remains stored in the frame memory as the processed image.
- The frame image is stored in the frame memory as the processed image (S19). Since this frame image becomes the processed image, the user can change the settings of the simulation process using a still image suitable for the operation.
- Otherwise, the makeup simulator 1 overwrites the frame memory 12 with the still image output from the camera 2 (S13). Specifically, the control unit 13 transmits an image output instruction to the feature point detection unit 11, and the frame memory 12 stores the frame image. In this way, the latest frame image captured by the camera 2 is stored in the frame memory as is, and the processed image is selected again from the frame images of the following frames.
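Putting steps S13 through S19 together, the per-frame behavior can be sketched as follows. This is a sketch; the `state` dictionary layout is an illustrative assumption, and `is_suitable` stands in for the feature point position confirmation of step S16.

```python
def handle_frame(frame, state, is_suitable):
    """state holds 'mode', 'determined' (the processed image
    determination flag), and 'frame_memory'.  In the confirmation mode
    the frame memory is simply overwritten each frame (S13).  In the
    operation mode, the current frame becomes the processed image only
    if it passes the feature point confirmation (S15-S17); once the
    flag is set (S18), later frames no longer overwrite the frame
    memory (S14)."""
    if state["mode"] == "confirmation":
        state["frame_memory"] = frame          # S13: always overwrite
    elif not state["determined"]:              # S14: not yet determined
        if is_suitable(frame):                 # S15-S17: confirm points
            state["determined"] = True         # S18: set the flag
            state["frame_memory"] = frame      # S19: keep as processed image
        else:
            state["frame_memory"] = frame      # S13: try the next frame
    # else: processed image already determined; frame memory untouched
    return state
```

Because the flag is only cleared when the mode is set again (step S6), the chosen frame survives as the processed image for the rest of the operation mode.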
- FIG. 8 shows the operation of makeup simulator 1 when makeup simulation is performed.
- Operations similar to those in FIG. 3 are given the same step numbers, and their description is omitted.
- The feature point detection unit 11 reads the processed image from the frame memory and detects the feature points of the facial parts to be processed, such as the eyes, nose, mouth, and eyebrows, as well as the contour feature points (S20).
- The image processing unit 14 performs the makeup process (S21).
- The contents of the makeup process are those received from the user via the operation display portion and the image display portion of the touch panel monitor 3 during the operation mode.
- For example, a makeup process for applying lipstick A to the lips is performed.
- The position of the lips is specified from the mouth feature points detected in step S20, a layer of lipstick A is created, and the layer is composited with the processed image.
- In step S2, the user may be informed of the accepted instruction by a method such as inverting the color of the selected icon or highlighting the icon frame.
- In step S2, the makeup simulator 1 displays the GUI on the touch panel monitor 3 and checks the user's input; however, the instruction from the user in step S2 is not limited to input on the touch panel monitor 3. Input from another input device such as a mouse may be used, or a gesture operation recognizable from the image captured by the camera 2 may be used.
- The operation mode may also be set as follows: for example, a touch operation on the image display portion of the touch panel monitor 3 may be treated as an instruction to change the operation mode, so that a touch on the image display portion switches from the operation mode to the confirmation mode or from the confirmation mode to the operation mode.
- In step S1, the user starts the operation mode setting process by touching the image display portion of the touch panel monitor 3, but, for example, a specific scene may be detected and the operation mode may be set automatically. Specifically, when the environment changes, such as when the user moves suddenly or the image suddenly darkens in shadow, temporarily switching to the operation mode can prevent an image inappropriate for display from being shown. In this case, the simulator may automatically return to the confirmation mode after confirming that the environment has stabilized, or may return to the confirmation mode according to a user instruction.
- In step S16, the horizontal size of the eye is used to normalize the vertical size of the eye, but normalization may be performed using other values, for example, the vertical size of the face.
- In step S16, the normalized vertical size of the eye is used to confirm whether the eye is open or closed, but this confirmation may also be performed as follows. The position of the iris is estimated from the eye feature point positions, and the brightness or color near the location corresponding to the iris is checked. The iris position can be estimated as the intersection of the straight line connecting the inner and outer corners of the eye and the straight line connecting the upper and lower edges of the eye. For the open/closed judgment, if the pixel at the intersection is dark or black, the eye is judged to be open; if it is bright or skin-colored, the eye is judged to be closed.
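This alternative check can be sketched as follows. It is a sketch; the brightness threshold is an assumed parameter, and `brightness_at` stands in for a pixel lookup in the frame image. The iris position is taken as the intersection of the line through the eye corners with the line through the eyelid edges.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through
    p3-p4, solved from the standard two-line formula."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel lines: no unique intersection
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def eye_open_by_iris(corner_a, corner_b, top_c, bottom_d,
                     brightness_at, dark_threshold=80):
    """If the pixel near the estimated iris position is dark, the dark
    iris is visible and the eye is judged open; if it is bright
    (skin-colored eyelid), the eye is judged closed."""
    iris = line_intersection(corner_a, corner_b, top_c, bottom_d)
    if iris is None:
        return False
    return brightness_at(iris) < dark_threshold
```

With the corner line horizontal and the lid line vertical, the intersection falls at the center of the eye opening, where the iris sits when the eye is open and looking at the camera.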
- in steps S16 and S17, the open/closed state of the eyes was used as an index of whether the frame image is appropriate as a base image for image processing such as makeup processing.
- alternatively, the degree of smile may be detected and the smiling state used as an index. In this case, the determination can be made, for example, based on whether the corners of the mouth are raised.
- a plurality of features may be used as an index.
- step S20 the feature point detection unit 11 detects the feature point of the processed image.
- step S20 may be identical to step S15, or only the feature points necessary for the makeup process in step S21 may be detected.
- the result of the feature point detection in step S15 may be retained, and the retained feature points may be used in the makeup process in step S21.
- FIG. 9 shows a flowchart.
- in step S23, the result of step S15 is held, and in step S21 it is used in place of the result of step S20. In this way, while the processed image is held in the frame memory, the feature point detection of step S20 need not be repeated for the same processed image, and the processing amount can be reduced.
- Embodiment 2 >> In the first embodiment, the feature point positions of the processed image used in the operation mode were confirmed per frame, or only once. In this embodiment, a case is described in which the feature point positions are interpolated from a plurality of frame images.
- FIG. 10 is a flowchart showing the operation according to this embodiment. 10, the same operations as those in FIG. 4 are denoted by the same step numbers, and the description thereof is omitted.
- in step S31, the control unit 13 checks whether a preset specified time has elapsed. Specifically, it checks whether "specified time × frame rate" frame images have been processed. For example, when the specified time is 2 seconds and the frame rate is 60 frames per second (60 fps), it checks whether 120 frames have been processed.
- the count of the specified time starts at whichever is later: when the previous specified time elapsed, or when the mode was switched to the operation mode.
- the control unit 13 includes a counter, and the counter is initialized when Step S31 is Yes and at Step S6.
- if No in step S31, the feature point positions of the frame image are held (S32), and the same processing as when the processed image is not determined (No in S17) is performed. On the other hand, if Yes in step S31, the feature point positions are confirmed in step S16, and it is determined in step S17 whether to hold the frame as the processed image.
- in step S34, an average value is calculated for each feature point over the feature point positions held in step S32.
- for example, the average position of the held coordinates of the inner eye corner A is calculated.
- similarly, the average coordinates of the outer eye corner B, the upper end C of the eye, and the lower end D of the eye are calculated.
- if the coordinates of the inner corner A in the n-th frame are (Xa_n, Ya_n) and N frames are held, the average (Xa_ave, Ya_ave) is calculated by Xa_ave = (Xa_1 + Xa_2 + … + Xa_N) / N and Ya_ave = (Ya_1 + Ya_2 + … + Ya_N) / N.
- in this way, the feature point positions are averaged in the time direction, which suppresses the influence of noisy feature points caused by misdetection, feature point drift, and the like, and yields stable feature point positions.
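The temporal averaging above can be sketched in a few lines; the list-of-arrays input format and the function name are assumptions for illustration:

```python
import numpy as np

def average_feature_points(frames):
    """frames: list of (K, 2) arrays, one per frame, holding the (x, y)
    coordinates of K feature points. Returns the per-feature-point mean
    over the time direction, shape (K, 2)."""
    stack = np.stack(frames)   # shape (N, K, 2): N frames, K points
    return stack.mean(axis=0)  # average each point over the N frames
```
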
- FIG. 11 shows a flowchart for performing the makeup simulation.
- the same step numbers are used for the same processes as in FIGS. 4, 8, and 9, and the description thereof is omitted.
- the makeup process in step S21 is performed using the average of the feature point positions calculated in step S34.
- for example, using the averaged coordinates of the inner corner A, outer corner B, upper end C, and lower end D described above as the eye feature point coordinates, the positions of the eyelid and eyelashes are detected, and false eyelashes are added.
- This process stabilizes the feature point position, which makes it possible to stabilize the makeup simulation.
- in step S34, the coordinates are averaged in the time direction, but the filter operation is not limited to this case. For example, as with a median filter, only the median of the temporally preceding and following feature point positions may be selected.
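A median-filter variant of the same idea, again as an illustrative sketch; taking the median independently per coordinate is one plausible reading of "median of the feature point positions":

```python
import numpy as np

def median_feature_points(frames):
    """frames: list of (K, 2) arrays of feature point coordinates.
    Returns the per-coordinate temporal median, which is less sensitive
    to a single badly misdetected frame than the mean."""
    stack = np.stack(frames)        # (N, K, 2)
    return np.median(stack, axis=0) # median over the time axis
```
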
- FIG. 12 shows a flowchart of the frame processing. After calculating the average of the feature point positions in step S34, the average of the feature point positions is recalculated (S35).
- the process of step S35 will be described with reference to the drawing. Loop 1 is performed for each held feature point.
- within loop 1, loop 2 is performed for each frame image within the specified period.
- in loop 2, the average of the feature point calculated in step S34 is compared with that feature point's position in the frame, it is determined whether the difference is greater than or equal to a predetermined value, and if so, that feature point position is excluded (S351). In this way, for each feature point, positions that deviate greatly from the average are removed before the average is recalculated.
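The exclude-then-recompute step can be sketched for a single feature point as follows; the Euclidean-distance criterion and the threshold parameter are illustrative assumptions, as the text only specifies "difference greater than or equal to a predetermined value":

```python
import numpy as np

def robust_average(points, threshold):
    """points: (N, 2) positions of ONE feature point over N frames.
    Excludes positions farther than `threshold` from the initial mean,
    then recomputes the mean over the remaining positions."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)                     # first pass (step S34)
    dist = np.linalg.norm(pts - mean, axis=1)   # deviation from the mean
    kept = pts[dist < threshold]                # exclusion (step S35)
    if len(kept) == 0:
        return mean                             # nothing left: keep first pass
    return kept.mean(axis=0)                    # recomputed average
```
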
- Embodiment 3 In the present embodiment, the operation mode and the confirmation mode will be described in detail.
- FIG. 14 shows three phases according to mode switching.
- the left phase ph1 is the phase immediately after switching to the operation mode. As shown in the first row, this phase starts when the user performs an operation indicating the start of the operation mode on the touch panel monitor 3, and the individual frame images constituting the moving image are displayed on the touch panel monitor 3 in real time. As described above, when a displayed frame image is suitable as the processed image, that frame image is selected as the processed image, and the still image on which the makeup operation is to be performed is determined.
- the middle phase ph2 is a phase for accepting a makeup operation based on a still image as a processed image.
- the first level of phase ph2 indicates that the mode is set to the operation mode.
- the second level shows a situation where a makeup operation is being performed.
- the frame image Fx in the second stage is a processed image as a still image selected in the phase ph1.
- a frame image Fx which is a self-portrait captured as a still image, is displayed on the image display portion of the touch panel monitor 3, and the user performs an operation of tracing the surface of the touch panel monitor 3 with a finger.
- a hand-painted image of eyebrows, a hand-painted image of blusher, and a hand-painted image of lipstick are combined into a still image.
- the rightmost phase ph3 shows the situation where the mode has been switched to the confirmation mode after the hand painting operation in the operation mode.
- the frame image Fx + m means any one frame image among the moving images displayed in the confirmation mode.
- a feature point group including a plurality of feature points is detected from the frame image Fx and the frame image Fx + m in step S20 (or the feature point group detected in step S15 is held in step S23). These feature points define the contour shapes of the parts of the object.
- the makeup simulator 1 associates feature points by searching for corresponding points in the frame image Fx set as a still image and the subsequent frame image Fx + m.
- the corresponding point search between the frame images is performed by calculating, for each pixel, a correlation value based on luminance values or the like, and detecting the pixel with the highest correlation value. When a hand-painting makeup operation has been performed on some part appearing in the still image, the hand-painted image for that operation is mapped onto the part in the subsequent frame image that the corresponding point search has associated with the operated part.
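A minimal sketch of such a correspondence search; the text speaks of maximizing a correlation value, while this illustration uses the closely related formulation of minimizing a sum of squared differences over a small search window (the names, patch size, and search radius are assumptions):

```python
import numpy as np

def best_match(prev, nxt, pt, patch=5, search=10):
    """Find the pixel in `nxt` whose neighborhood best matches the
    neighborhood of `pt` in `prev`, scanning a +/- `search` window.
    Uses sum of squared differences (SSD): lower cost = better match."""
    x, y = pt
    r = patch // 2
    tmpl = prev[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best, best_cost = pt, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cx, cy = x + dx, y + dy
            if cx - r < 0 or cy - r < 0:
                continue  # window fell off the top/left edge
            cand = nxt[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
            if cand.shape != tmpl.shape:
                continue  # window fell off the bottom/right edge
            cost = ((cand - tmpl) ** 2).sum()
            if cost < best_cost:
                best, best_cost = (cx, cy), cost
    return best
```

A production implementation would use an optimized primitive (e.g. normalized cross-correlation) rather than this brute-force scan.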
- FIG. 15A shows a feature point group detected in the frame image Fx and the frame image Fx + m.
- the feature point groups gp1, gp2, gp3, and gp4 in FIG. 15A surround the representative parts (eyebrows, lips, cheeks) of the face image in the frame image Fx, and define the contour shape of the parts. .
- the feature point groups gp11, gp12, gp13, and gp14 in the figure surround a representative part of the face image in the frame image Fx + m and define the contour shape of the part.
- Arrows sr1, sr2, sr3, and sr4 schematically show the process of searching for corresponding points performed between the feature points in the frame image Fx and the feature points in the frame image Fx + m. This correspondence point search defines the correspondence between the feature point group that defines the eyebrow in the frame image Fx and the feature point group that defines the eyebrow in the frame image Fx + m.
- FIG. 15B shows a transformation matrix that defines transformation of feature points between the frame image Fx and the frame image Fx + m.
- H1, H2, H3, and H4 in FIG. 15B indicate transformation matrices that define feature point transformation between corresponding parts.
- the hand-painted image obtained by the hand-painting operation on the frame image Fx is mapped to the frame image Fx + m. Thereby, the hand-painted image is deformed and displayed according to the appearance of the feature points in the frame image Fx + m.
- a plurality of feature points i1, i2, i3, i4, …, i8 on the upper left side of FIG. 16A characterize the shape of the eyebrow in the face image of the frame Fx.
- a plurality of feature points j1, j2, j3, j4, …, j8 on the upper right side of FIG. 16A characterize the shape of the eyebrow in the face image of the frame Fx + m.
- the transformation matrix H in FIG. 16B is a matrix for converting the feature points i1, i2, i3, i4, …, i8 in the frame image Fx into the feature points j1, j2, j3, j4, …, j8, and is composed of 8 × 8 matrix components.
- FIG. 16C shows a situation where a hand-painted image is drawn on a still image.
- a locus trk1 in the figure is a locus obtained by tracing the face drawn on the image display portion of the touch panel monitor 3 with a finger, and this locus is specified as a hand-painted image.
- the middle arrow cv1 schematically shows the conversion of the hand-painted image using the transformation matrix H shown in FIG. 16B.
- a hand-painted image trk2 in the figure is a hand-painted image synthesized with the frame image Fx + m through the conversion using the conversion matrix H.
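As an illustrative stand-in for the transformation matrix H (the text describes an 8 × 8 matrix over eight feature points, whose exact construction is not given here), a least-squares affine fit between corresponding feature points can deform stroke coordinates in the same spirit; all names are invented for this sketch:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays with N >= 3."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    # Solve A @ M ≈ dst for M (3 x 2), i.e. [x y 1] @ M = [x' y'].
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def map_stroke(stroke, M):
    """Apply the fitted transform to hand-painted stroke coordinates,
    deforming the stroke to follow the feature points in the new frame."""
    pts = np.hstack([np.asarray(stroke, float),
                     np.ones((len(stroke), 1))])
    return pts @ M
```
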
- FIG. 17 shows the process, after switching from the confirmation mode to the operation mode, until the still image serving as the base of the operation mode is determined.
- the first level shows the transition of the operation mode setting, the second level shows the frame images input from the camera 2, the third level shows the facial part detection by feature point detection, and the fourth level shows the image stored in the frame memory 12.
- Time t1 is a time when the operation mode is switched from the confirmation mode to the operation mode.
- the frame image Ft1 is the frame image captured at time t1. Since time t1 is the moment when the user operates the operation part of the touch panel monitor 3, the face is not facing the front in the frame image Ft1. Therefore, the feature point position confirmation in step S16 detects that the gaze direction is off, and the processed image is not determined in step S17.
- the frame image Ft5 is a frame image that captures the moment when the user closes his eyes. In the feature point position confirmation in step S16, it can be detected that the user's eyes are closed.
- the frame image Ft9 is a frame image that captures the user facing the front of the camera for the first time after time t1.
- therefore, the frame image Ft9 is held in the frame memory 12, the image display portion of the touch panel monitor 3 switches from real-time moving image display to still image display of the frame image Ft9, and makeup operations are then accepted.
- FIG. 18 is a flowchart showing input reception regarding the makeup process in the operation mode.
- the process proceeds to a loop of step S101 to step S103.
- this loop determines whether a touch on a makeup item has occurred (S101), whether a touch on the face in the still image has occurred (S102), or whether switching to the confirmation mode has occurred (S103).
- if step S101 is Yes, the touched cosmetic item is selected (S104). If a touch on the face image occurs, the touched part is specified (S105), and it is determined whether the finger-painting operation is continuing (S106). If it is continuing, drawing of the hand-painted image continues following the operation (S107). When the operation is completed, the hand-painted image drawn by the preceding operations is held as the makeup simulation setting (S108).
- makeup simulation is then executed for each subsequent frame image in step S21 in accordance with the settings held in step S108. Specifically, facial feature points are extracted from the subsequent frame image (step S20), the mapping site in that frame image is specified according to the feature points, and the hand-painted image is mapped onto it.
- in Embodiments 1 to 3, the makeup simulator 1 has the camera 2 and the touch panel monitor 3, but the present invention is not necessarily limited to this case.
- the makeup simulator 1 may not include the camera 2 and may acquire an image from an external camera or the like.
- for example, instead of the touch panel monitor 3, a monitor having only a display function and an input device separate from the monitor may be provided. In this case, the monitor may display only the screen display portion.
- the makeup-related instruction is a hand-painting instruction, but the present invention is not necessarily limited to this case.
- an instruction to change the color of the cosmetic may be received for the makeup instruction that has already been made.
- the face makeup simulation is performed, but the present invention is not necessarily limited to this case.
- feature points such as shoulders, arms, chests, and waists may be detected from the upper body and the whole image, and a dress-up simulation may be performed.
- the subject is not limited to a person; for example, feature points such as the roof, windshield, and mirrors may be detected from an exterior image of a car, and a mounting simulation of wheels, aero parts, and the like, or a paint-change simulation, may be performed. In this way, a simulation can be performed simply by capturing images while moving around the subject, including its front and sides.
- Each of the above devices is specifically a computer system including a microprocessor, ROM, RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like.
- a computer program is stored in the RAM or hard disk unit.
- Each device achieves its functions by the microprocessor operating according to the computer program.
- the computer program is configured by combining a plurality of instruction codes indicating instructions for the computer in order to achieve a predetermined function.
- the system LSI is an ultra-multifunctional LSI manufactured by integrating a plurality of components on a single chip, and is specifically a computer system including a microprocessor, ROM, RAM, and the like.
- a computer program is stored in the RAM.
- the system LSI achieves its functions by the microprocessor operating according to the computer program.
- each of the above devices may be constituted by an IC card that can be attached to and detached from each device or a single module.
- the IC card or the module is a computer system including a microprocessor, ROM, RAM, and the like.
- the IC card or the module may include the super multifunctional LSI described above.
- the IC card or the module achieves its function by the microprocessor operating according to the computer program. This IC card or this module may have tamper resistance.
- the present invention may be the method described above. Further, the present invention may be a computer program that realizes these methods by a computer, or may be a digital signal composed of the computer program.
- the present invention may also be realized by recording the computer program or the digital signal on a computer-readable recording medium such as a flexible disk, hard disk, CD-ROM, MO, DVD, DVD-ROM, DVD-RAM, BD (Blu-ray (registered trademark) Disc), or semiconductor memory.
- the digital signal may be recorded on these recording media.
- the computer program or the digital signal may be transmitted via an electric communication line, a wireless or wired communication line, a network represented by the Internet, a data broadcast, or the like.
- the present invention may be a computer system including a microprocessor and a memory, the memory storing the computer program, and the microprocessor operating according to the computer program.
- the program or the digital signal may be recorded on the recording medium and transferred, or transferred via the network or the like, and executed by another independent computer system.
- An image processing method according to an aspect of the embodiments is an image processing method in a system that performs image processing on an input moving image and displays the result of the image processing, including a determination process of whether the operation mode setting of the system is an operation mode for displaying a still image or a confirmation mode for displaying a moving image, and mode-specific processing according to the result of the determination process.
- the mode-specific processing in the confirmation mode includes processing that performs image processing on each frame of the moving image and displays it for user confirmation, and the image processing is performed on the object related to the operation accepted in the operation mode, as that object appears in each frame of the moving image.
- An image processing apparatus according to an aspect of the embodiments is an image processing apparatus that performs image processing on an input moving image and displays the result of the image processing.
- the apparatus includes an operation mode determination unit that determines whether the mode setting of the image processing apparatus is an operation mode for displaying a still image or a confirmation mode for displaying a moving image, an image processing unit that performs mode-specific processing according to the determination result of the operation mode determination unit, and a display unit that displays the image processed by the image processing unit.
- in the operation mode, the image processing unit selects, from among the plurality of frame images constituting the moving image, a still image in which the object of image processing appears appropriately, outputs it as the result of image processing, and accepts an operation related to image processing on the object.
- in the confirmation mode, if the operation accepted in the operation mode has been performed on an object appearing in each frame of the moving image, the image processing unit performs image processing on that object as it appears in subsequent frame images and provides the result for user confirmation.
- An image processing program according to an aspect of the embodiments is a program that, in a system that performs image processing on an input moving image and displays the result of the image processing, causes a processor to perform the image processing.
- the image processing includes a determination of whether the mode setting of the system is an operation mode for displaying a still image or a confirmation mode for displaying a moving image, and mode-specific processing according to the determination result. The mode-specific processing in the operation mode includes a process of selecting, from among the plurality of frame images constituting the moving image, a still image in which the object of image processing appears appropriately, displaying it as a still image, and accepting an operation related to image processing on the object.
- the mode-specific processing in the confirmation mode includes a process of performing image processing on each frame of the moving image and displaying it for user confirmation, the image processing being performed on the object related to the operation accepted in the operation mode, as that object appears in each frame of the moving image.
- the system may hold the processed image to be subjected to image processing by storing one frame image constituting the moving image in a frame memory, the frame image stored in the frame memory being the target of the mode-specific processing. When the operation mode setting of the system is the confirmation mode, each time a frame image constituting the moving image is input, the processed image may be updated by overwriting the processed image stored in the frame memory with that frame image. When the operation mode setting of the system is the operation mode, the still image selected as one in which the object appears appropriately may be stored in the frame memory, and the image stored in the frame memory may be left unchanged from when the still image is stored until the mode setting of the system switches to the confirmation mode.
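The frame-memory update rule described above can be sketched as a tiny state machine; the class and method names are invented for illustration:

```python
class FrameMemory:
    """Sketch of the update rule: in confirmation mode every incoming
    frame overwrites the processed image; in operation mode the first
    suitable frame is latched and frozen until the mode switches back."""
    CONFIRM, OPERATE = "confirmation", "operation"

    def __init__(self):
        self.mode = self.CONFIRM
        self.processed = None

    def on_frame(self, frame, appears_ok=True):
        if self.mode == self.CONFIRM:
            self.processed = frame            # always overwrite
        elif self.processed is None and appears_ok:
            self.processed = frame            # latch the first suitable frame
        return self.processed                 # frozen otherwise

    def set_mode(self, mode):
        if mode == self.OPERATE:
            self.processed = None             # wait for a suitable still image
        self.mode = mode
```
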
- when the operation mode setting of the system is the operation mode, the mode-specific processing may further include feature point position detection targeting the input frame image and feature point position confirmation, and whether the object appears appropriately in the currently input frame image may be determined according to the result of the feature point position confirmation.
- the object may be a human face; in the feature point position detection, feature points of the eyes of the face may be detected; in the feature point position confirmation, the open/closed state of the eyes may be confirmed from the detected positions of the eye feature points; when the eyes are open, the currently input frame image may be stored in the frame memory as the still image; and when the eyes are closed, the frame memory may be updated using a subsequent frame image.
- the object may be a human face; in the feature point position detection, the orientation of the face may be detected; in the feature point position confirmation, whether the face is facing the front may be confirmed from the detected face orientation; if the face is facing the front, the currently input frame image may be stored in the frame memory; and if the face is not facing the front, the frame memory may be updated using a subsequent frame image.
- the mode-specific processing corresponding to the operation mode may include feature point position detection for detecting facial feature point positions from the processed image, and the mode-specific processing corresponding to the confirmation mode may include makeup processing that applies makeup to the part of the object appearing in the processed image that exists at the detected feature point positions, and display of the image to which the makeup processing has been applied.
- in the makeup processing, a selection of a sample color may be accepted from the user, and the values of the plurality of pixels constituting the part existing at the feature point positions, among the object shown in the processed image, may be mapped to the color range of the selected sample color.
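A hedged sketch of such a color-range mapping: the per-pixel brightness of the part is kept (so skin shading survives) while its color is moved toward the selected sample color. The exact mapping is not specified at this level of detail, so the luminance-times-tint scheme and the `strength` parameter here are illustrative assumptions:

```python
import numpy as np

def apply_sample_color(region, sample_rgb, strength=1.0):
    """Map the pixels of a face part toward a chosen sample color.
    region: (H, W, 3) float RGB in [0, 1]; sample_rgb: (r, g, b) in [0, 1]."""
    region = np.asarray(region, float)
    # Per-pixel luminance preserves the shading of the original part.
    lum = region.mean(axis=2, keepdims=True)
    tinted = lum * np.asarray(sample_rgb, float)  # brightness * sample color
    # Blend between the original and the tinted result.
    return (1.0 - strength) * region + strength * tinted
```
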
- in the makeup processing, a selection of a sample color and a finger-painting operation on the object shown in the still image may be accepted from the user, and a drawing pattern having the shape of the finger-painting operation, composed of pixels of the selected sample color, may be mapped onto the part existing at the feature point positions among the object shown in the processing target.
- the feature points may define the contour shape of a part of the object, and the makeup processing may associate the feature points by performing a corresponding point search between the frame image set as the processed image and a subsequent frame image; if a hand-painting operation is performed on any part shown in the processed image, the hand-painted image related to that operation may be mapped onto the part in the subsequent frame image that the feature point search has associated with the part related to the hand-painting operation.
- the mode-specific processing corresponding to the operation mode may further include holding the feature point positions, and the part subject to the makeup processing may exist at the held feature point positions.
- with this configuration, the feature points used as an index of whether the object appears appropriately can be reused directly in the makeup processing, reducing the amount of calculation.
- the mode-specific processing corresponding to the operation mode may include holding the feature point positions over a predetermined period and a feature point position filter calculation that performs filtering, for each pixel existing at a feature point position, on the held feature point positions of the specified period; the mode-specific processing corresponding to the confirmation mode may include feature point position detection that takes the processed image as input; and when the mode is the operation mode, the feature point positions resulting from the feature point position filter calculation may be used for the feature point position confirmation in place of the feature point positions detected by the feature point position detection.
- the filtering performed in the feature point filter calculation may be averaging.
- after the filtering, the held feature point positions may be compared with the filtered feature point positions, feature point positions whose difference exceeds a threshold may be excluded from the held feature point positions, and the filtering may then be executed again.
- the method may include makeup processing on the part of the object appearing in the processed image designated by the feature point positions, display of the image to which the makeup processing has been applied, and setting of the mode of the system, and the makeup processing may be performed by accepting a selection of a sample color from the user and mapping the values of the plurality of pixels constituting the part designated by the feature point positions to the color range of the selected sample color.
- the system mode may be set by the user.
- the operation mode can be changed according to the user's needs.
- the mode setting of the system may be performed by the user touching a specific area of a touch panel included in the system.
- the image processing method according to (16) may notify the user of the change in the mode setting when the mode of the system is set.
- the notification may be made by displaying an operation mode.
- the notification may be made by voice output.
- the image processing method according to the present invention provides an appropriate method of selecting the processed image for makeup simulation during screen operation, and is useful for a makeup simulator terminal and the like. It can also be applied to applications such as digital signage with a makeup simulation function.
Abstract
Description
In a makeup simulator that performs real-time processing, an image of the user is captured with a camera facing the user, image processing is applied to the captured image, and the result is displayed on a monitor. Here, in order to perform a makeup simulation, the user must operate a touch panel integrated with the monitor or a separately provided operation panel, select cosmetics such as lipstick, blusher, or mascara, and perform an operation of applying them to the face. During such operations, the user's face orientation and gaze shift toward the operation panel or monitor, so an image of the face looking straight at the camera cannot be obtained. For images in which the face orientation or gaze is turned away from the camera, feature point detection accuracy deteriorates, and misdetected feature points can shift the position of cosmetic items and degrade the quality of the makeup simulation.
Embodiments of the present invention are described below with reference to the drawings.
<Configuration>
FIG. 1 shows a block diagram of the makeup simulator according to this embodiment. The makeup simulator 1 includes a camera 2, a feature point detection unit 11, a frame memory 12, a control unit 13, an image processing unit 14, and a touch panel monitor 3.
An overview of the appearance and operation of the makeup simulator 1 is described below.
The operation mode setting operation is described with reference to FIG. 3.
Next, the frame-by-frame operation is described with reference to FIG. 4. Here, a case is described in which no image processing (makeup processing) is performed and the simulator simply operates as a digital mirror.
FIG. 8 shows the operation of the makeup simulator 1 when performing a makeup simulation. Operations identical to those in FIG. 3 are given the same step numbers, and their description is omitted.
With this configuration, even if the user sets the operation mode at an arbitrary timing, performing the feature point position confirmation S105 makes it possible to obtain an image suitable for the makeup simulation without being affected by blinking or the like. The user can therefore check makeup simulation results casually, reducing the effort of trying actual cosmetics.
(1) In step S2, the content of the instruction is confirmed to the user by methods such as inverting the color of the icon selected by the user or highlighting the icon frame, but the change in operation mode may also be notified by a specific sound.
In the first embodiment, the feature point positions of the processed image used in the operation mode were confirmed per frame, or only once; in this embodiment, a case is described in which the feature point positions are interpolated from a plurality of frame images.
In step S31, the control unit 13 checks whether a preset specified time has elapsed; specifically, whether "specified time × frame rate" frame images have been processed. For example, if the specified time is 2 seconds and the frame rate is 60 frames per second (60 fps), it is checked whether 120 frames have been processed. Here, the count of the specified time starts at whichever is later: when the previous specified time elapsed, or when the mode was switched to the operation mode. Specifically, the control unit 13 includes a counter, which is initialized when step S31 is Yes and at step S6.
FIG. 11 shows a flowchart for the case where a makeup simulation is performed. The same step numbers are used for the same processes as in FIGS. 4, 8, and 9, and their description is omitted.
(1) In step S34, the coordinates are averaged in the time direction, but the filter operation is not limited to this; for example, as with a median filter, only the median of the temporally preceding and following feature point positions may be selected.
In this embodiment, the operation mode and the confirmation mode are described in detail.
With this configuration, the user can check his or her own face with the desired makeup applied through a simple operation: selecting a makeup tool if necessary and then hand-painting on his or her own face image on the touch panel monitor. Various makeups can thus be tried without actually applying any, making it possible to propose the cosmetics best suited to the user.
(1) In Embodiments 1 to 3, the makeup simulator 1 has the camera 2 and the touch panel monitor 3, but the present invention is not necessarily limited to this case. For example, the makeup simulator 1 may not include the camera 2 and may acquire images from an external camera or the like. Also, for example, instead of the touch panel monitor 3, a monitor having only a display function and a separate input device may be provided. In this case, the monitor may display only the screen display portion.
The configurations and effects of the image processing method, image processing apparatus, and image processing program according to the embodiments are described below.
2 Camera
11 Feature point detection unit
12 Frame memory
13 Control unit
14 Image processing unit
3 Touch panel monitor
Claims (21)
- 入力された動画像に対して画像処理を行い、画像処理の結果を表示するシステムにおける画像処理方法であって、
前記システムの動作モード設定が、静止画を表示する操作モード、動画を表示する確認モードの何れであるかの判定処理と、
前記判定処理の結果に応じたモード別処理とを含み、
前記操作モードにおける前記モード別処理は、前記動画像を構成する複数のフレーム画像の中から画像処理の対象物が適正に表れているフレーム画像を選んで静止画として表示し、前記対象物に対する画像処理に係る操作を受け付ける処理を含み、
前記確認モードにおける前記モード別処理は、前記動画像の各フレームに対して画像処理を行ってユーザの確認に供するために表示する処理を含み、前記画像処理は、前記操作モードにおいて受け付けた前記操作に係る対象物であって、前記動画像の各フレームに現れる対象物に行われる
ことを特徴とする画像処理方法。 - 前記システムは、動画像を構成する1のフレーム画像をフレームメモリに格納することで、画像処理の対象となる処理画像を保持し、
前記フレームメモリに格納されているフレーム画像を前記モード別処理の対象とし、
前記システムの動作モード設定が確認モードである場合、動画像を構成するフレーム画像の入力があれば、そのフレーム画像を用いて、前記フレームメモリに格納された処理画像を上書きすることで、処理画像の更新を行い、
前記システムの動作モード設定が操作モードである場合、対象物が適正に表れているとして選ばれた静止画を前記フレームメモリに格納し、前記フレームメモリに前記静止画を格納してから前記システムのモード設定が確認モードに切り替わるまでの間、前記フレームメモリに格納されている画像を更新しない
ことを特徴とする請求項1に記載の画像処理方法。 - 前記システムの動作モード設定が操作モードである場合、
前記モード別処理は、さらに、入力されたフレーム画像を対象とした特徴点位置の検出と、特徴点位置を確認する特徴点位置の確認とを含み、
前記特徴点位置の確認の結果に応じて、現在入力されているフレーム画像に適正に対象物が現れているかどうかを判断する
ことを特徴とする請求項2に記載の画像処理方法。 - 前記対象物は、人の顔であって、
前記特徴点位置の検出において、顔の目の特徴点を検出し、
前記特徴点位置の確認において、検出された前記目の特徴点の位置から目の開閉状態を確認し、
前記目の開閉状態が開いている状態の場合、前記現在入力されているフレーム画像を前記フレームメモリに前記静止画として保持し、
前記目の開閉状態が閉じている状態の場合、後続するフレーム画像を用いて前記フレームメモリを更新する
ことを特徴とする請求項3に記載の画像処理方法。 - 前記対象物は、人の顔であって、
前記特徴点位置の検出において、顔の向きを検出し、
前記特徴点位置の確認において、検出された前記顔の向きから顔が正面向きか否かを確認し、
前記顔の向きが正面向きである場合、前記現在入力されているフレーム画像を前記フレームメモリに前記静止画として保持し、
前記顔の向きが正面向きでない場合、後続するフレーム画像を用いて前記フレームメモリを更新する
ことを特徴とする請求項3に記載の画像処理方法。 - 前記操作モードに対応する前記モード別処理は、前記処理画像から顔の特徴点位置を検出する特徴点位置の検出を含み、
前記確認モードに対応する前記モード別処理は、前記処理画像に現れた対象物のうち、検出された前記特徴点位置に存在する部位に対してメイクアップを施すメイクアップ処理と、前記メイクアップ処理が施された画像の表示とを含む
ことを特徴とする請求項1に記載の画像処理方法。 - 前記メイクアップ処理は、見本色の選択をユーザから受け付けて、前記処理画像に現された対象物のうち、前記特徴点位置に存在する部位を構成する複数の画素の値を、選択された前記見本色のカラーレンジにマッピングすることである
ことを特徴とする請求項6に記載の画像処理方法。 - 前記メイクアップ処理は、見本色の選択と、静止画に現された対象物に対する指塗操作とをユーザから受け付け、指塗操作に係る形状をなす描画パターンであって、選択された前記見本色の画素で構成されるものを、処理対象に現された対象物のうち、前記特徴点位置に存在する部位にマッピングすることである
ことを特徴とする請求項6に記載の画像処理方法。 - 前記特徴点は、対象物における部位の輪郭形状を規定するものであって、
前記メイクアップ処理は、前記処理画像として設定されたフレーム画像と、後続するフレーム画像とで対応点検索を行うことで、特徴点の対応付けを行い、前記処理画像に現されるいずれかの部位に対して手塗操作が行われれば、後続するフレーム画像における部位であって、前記特徴点検索により前記手塗操作に係る部位との対応付けがなされたものに、前記手塗操作に係る手塗イメージをマッピングすることである
ことを特徴とする請求項6に記載の画像処理方法。 - 前記操作モードに対応する前記モード別処理は、前記特徴点位置の保持をさらに含み、
前記メイクアップ処理の対象となる部位は、保持されている前記特徴点位置に存在する
ことを特徴とする請求項6に記載の画像処理方法。 - 前記操作モードに対応する前記モード別処理は、あらかじめ指定された期間における特徴点位置の保持と、保持された指定期間の特徴点位置について、特徴点位置に存在する画素ごとにフィルタ処理を行う特徴点位置フィルタ演算とを含み、
the mode-specific processing corresponding to the confirmation mode includes feature point position detection that takes the processing image as input and detects feature point positions,
and when the operating mode is the operation mode, the feature point position confirmation is performed using the feature point positions that result from the feature point position filter calculation in place of the feature point positions detected by the feature point position detection
— an image processing method according to claim 3 characterized by the foregoing. - The filtering performed in the feature point filter calculation is averaging
— an image processing method according to claim 11 characterized by the foregoing. - In the feature point position filter calculation, after the filtering is performed, the retained feature point positions are compared with the filtered feature point positions, feature point positions whose difference exceeds a threshold are excluded from the retained feature point positions, and the filtering is then executed again
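Claims 11 through 13 together describe averaging the retained feature point positions and re-running the average after dropping outliers. A sketch under those assumptions, for one feature point, taking Euclidean distance as the "difference":

```python
import math

def filtered_position(samples, threshold):
    """samples: (x, y) positions of one feature point retained over the
    designated period; returns the outlier-rejected average position."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    avg = mean(samples)  # first filtering pass: plain average
    # Exclude samples whose difference from the average exceeds the
    # threshold, then run the filter again on the survivors.
    kept = [p for p in samples
            if math.hypot(p[0] - avg[0], p[1] - avg[1]) <= threshold]
    return mean(kept) if kept else avg
```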
— an image processing method according to claim 11 characterized by the foregoing. - The method includes makeup processing for the part of the object appearing in the processing image that is designated by feature point positions,
display of the image to which the makeup processing has been applied,
and setting of the system's mode,
and the makeup processing is performed by accepting a selection of a sample color from the user and mapping the values of the plural pixels constituting the part designated by the feature point positions into the color range of the selected sample color
— an image processing method according to claim 11 characterized by the foregoing. - The setting of the system's mode is made by the user
— an image processing method according to claim 1 characterized by the foregoing. - The setting of the system's mode is made by the user performing a touch operation on a specific region of a touch panel provided in the system
— an image processing method according to claim 15 characterized by the foregoing. - When the setting of the system's mode is made, the user is notified of the change in the mode setting
— an image processing method according to claim 16 characterized by the foregoing. - The notification is made by displaying the operating mode
— an image processing method according to claim 17 characterized by the foregoing. - The notification is made by audio output
— an image processing method according to claim 17 characterized by the foregoing. - An image processing device that performs image processing on a moving-image input and displays the result of the image processing, the device comprising:
an operating mode determination unit that determines whether the device's mode setting is an operation mode, in which a still image is displayed, or a confirmation mode, in which video is displayed;
an image processing unit that performs mode-specific processing according to the determination result of the operating mode determination unit;
and a display unit that displays the image processed by the image processing unit,
wherein, in the operation mode, the image processing unit selects, from among the plural frame images constituting the moving image, a still image in which the object of the image processing properly appears, outputs it as the result of the image processing, and accepts an operation relating to image processing of the object,
and, in the confirmation mode, if the operation accepted in the operation mode has been performed on the object that it relates to and that appears in each frame of the moving image, the image processing unit performs image processing on the object appearing in subsequent frame images and presents the result for the user's confirmation
— an image processing device characterized by the foregoing. - A program that, in a system that performs image processing on a moving-image input and displays the result of the image processing, causes a processor to perform image processing, the image processing including:
determination of whether the system's mode setting is an operation mode, in which a still image is displayed, or a confirmation mode, in which video is displayed;
and mode-specific processing according to the determination result,
wherein the mode-specific processing in the operation mode includes processing that selects, from among the plural frame images constituting the moving image, a still image in which the object of the image processing properly appears, displays it as a still image, and accepts an operation relating to image processing of the object,
and the mode-specific processing in the confirmation mode includes processing that performs image processing on each frame of the moving image and displays the result for the user's confirmation, the image processing being performed on the object that the operation accepted in the operation mode relates to and that appears in each frame of the moving image
— a program characterized by the foregoing.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015512315A JP6264665B2 (ja) | 2013-04-17 | 2014-04-16 | Image processing method and image processing device |
US14/784,743 US9968176B2 (en) | 2013-04-17 | 2014-04-16 | Image processing method and image processing device |
CN201480013933.2A CN105164999B (zh) | 2013-04-17 | 2014-04-16 | 图像处理方法及图像处理装置 |
EP14785844.3A EP2988486B1 (en) | 2013-04-17 | 2014-04-16 | Selection of a suitable image for receiving on-screen control |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013086341 | 2013-04-17 | ||
JP2013-086341 | 2013-04-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014171142A1 (ja) | 2014-10-23 |
Family
ID=51731104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/002170 WO2014171142A1 (ja) | Image processing method and image processing device | 2013-04-17 | 2014-04-16 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9968176B2 (ja) |
EP (1) | EP2988486B1 (ja) |
JP (1) | JP6264665B2 (ja) |
CN (1) | CN105164999B (ja) |
WO (1) | WO2014171142A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016202215A1 * | 2015-06-19 | 2016-12-22 | Alibaba Group Holding Limited | Method and device for previewing dynamic pictures, and method and device for displaying emoticon packages |
CN106709400A * | 2015-11-12 | 2017-05-24 | Alibaba Group Holding Limited | Method, device and client for recognizing the open/closed state of a sensory organ |
CN110050251A * | 2016-12-06 | 2019-07-23 | Koninklijke Philips N.V. | Displaying a guidance indicator to a user |
JP2021530031A (ja) * | 2018-07-27 | 2021-11-04 | Beijing Microlive Vision Technology Co., Ltd | Face-based special effect generation method, apparatus and electronic device |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10007845B2 (en) * | 2015-07-06 | 2018-06-26 | Pixart Imaging Inc. | Eye state detecting method and eye state detecting system |
DE112015007219T5 * | 2015-12-23 | 2021-09-09 | Intel Corporation | Touch gesture recognition evaluation |
GB201603495D0 (en) * | 2016-02-29 | 2016-04-13 | Virtual Beautician Ltd | Image processing system and method |
CN109310196B * | 2016-07-14 | 2021-08-10 | Panasonic Intellectual Property Management Co., Ltd. | Makeup assistance device and makeup assistance method |
CN107665350A * | 2016-07-29 | 2018-02-06 | 广州康昕瑞基因健康科技有限公司 | Image recognition method and system, and autofocus control method and system |
US20180092595A1 (en) * | 2016-10-04 | 2018-04-05 | Mundipharma Laboratories Gmbh | System and method for training and monitoring administration of inhaler medication |
CN108804975A | 2017-04-27 | 2018-11-13 | 丽宝大数据股份有限公司 | Lip gloss guiding device and method |
JP7013786B2 * | 2017-10-16 | 2022-02-01 | Fujifilm Business Innovation Corp. | Information processing device, program, and control method |
CN107888845B | 2017-11-14 | 2022-10-21 | Tencent Digital (Tianjin) Co., Ltd. | Video image processing method, apparatus, and terminal |
JP2019109813A * | 2017-12-20 | 2019-07-04 | Kyocera Document Solutions Inc. | Image processing device, image processing method, image forming device, and image processing program |
DK201870351A1 (en) | 2018-05-07 | 2020-01-13 | Apple Inc. | Devices and Methods for Measuring Using Augmented Reality |
CN109034110A * | 2018-08-17 | 2018-12-18 | Pan Xiaoliang | Computer classification method for gunfight films |
US10789746B2 (en) * | 2018-08-20 | 2020-09-29 | The Lash Lounge Franchise, Llc | Systems and methods for creating custom lash design |
US10785413B2 (en) * | 2018-09-29 | 2020-09-22 | Apple Inc. | Devices, methods, and graphical user interfaces for depth-based annotation |
CN111324274A * | 2018-12-13 | 2020-06-23 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Virtual makeup try-on method, apparatus, device, and storage medium |
US11227446B2 (en) | 2019-09-27 | 2022-01-18 | Apple Inc. | Systems, methods, and graphical user interfaces for modeling, measuring, and drawing using augmented reality |
US11138771B2 (en) | 2020-02-03 | 2021-10-05 | Apple Inc. | Systems, methods, and graphical user interfaces for annotating, measuring, and modeling environments |
US11727650B2 (en) | 2020-03-17 | 2023-08-15 | Apple Inc. | Systems, methods, and graphical user interfaces for displaying and manipulating virtual objects in augmented reality environments |
US11690435B2 (en) | 2020-07-07 | 2023-07-04 | Perfect Mobile Corp. | System and method for navigating user interfaces using a hybrid touchless control mechanism |
US20220232951A1 (en) * | 2021-01-28 | 2022-07-28 | Gil Joseph Laks | Interactive storage system, apparatus, and related methods |
US11941764B2 (en) | 2021-04-18 | 2024-03-26 | Apple Inc. | Systems, methods, and graphical user interfaces for adding effects in augmented reality environments |
CN114581291A * | 2022-03-04 | 2022-06-03 | 合众新能源汽车有限公司 | Method and system for presenting facial makeup images in a vehicle cabin |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001357404A * | 2000-06-14 | 2001-12-26 | Minolta Co Ltd | Image extraction device |
JP2005216131A * | 2004-01-30 | 2005-08-11 | Digital Fashion Ltd | Makeup simulation device, makeup simulation method, and makeup simulation program |
JP2006313223A * | 2005-05-09 | 2006-11-16 | Konica Minolta Photo Imaging Inc | Imaging device |
JP2007049371A | 2005-08-09 | 2007-02-22 | Fujifilm Holdings Corp | Digital camera and captured-image display control method |
JP3984191B2 | 2002-07-08 | 2007-10-03 | Toshiba Corporation | Virtual makeup device and method |
JP2011259243A * | 2010-06-09 | 2011-12-22 | Nintendo Co Ltd | Image processing program, image processing device, image processing system, and image processing method |
JP5191665B2 | 2006-01-17 | 2013-05-08 | Shiseido Co., Ltd. | Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4539585A (en) * | 1981-07-10 | 1985-09-03 | Spackova Daniela S | Previewer |
US6392710B1 (en) * | 1998-04-03 | 2002-05-21 | Avid Technology, Inc. | Graphical user interface for field-based definition of special effects in a video editing system |
US6649925B2 (en) * | 1999-11-26 | 2003-11-18 | Amos Talmi | Methods of calibrating a position measurement device |
US6449019B1 (en) | 2000-04-07 | 2002-09-10 | Avid Technology, Inc. | Real-time key frame effects using tracking information |
WO2005006772A1 (ja) * | 2003-07-09 | 2005-01-20 | Matsushita Electric Industrial Co., Ltd. | Image display device and image display method |
EP1710746A1 (en) * | 2004-01-30 | 2006-10-11 | Digital Fashion Ltd. | Makeup simulation program, makeup simulation device, and makeup simulation method |
US7450268B2 (en) * | 2004-07-02 | 2008-11-11 | Hewlett-Packard Development Company, L.P. | Image reproduction |
US8403852B2 (en) * | 2004-10-20 | 2013-03-26 | Kabushiki Kaisha Toshiba | Ultrasonic diagnostic apparatus and control method thereof |
CN101371272B | 2006-01-17 | 2012-07-18 | Shiseido Co., Ltd. | Makeup simulation system, makeup simulation device, and makeup simulation method |
US7634108B2 (en) * | 2006-02-14 | 2009-12-15 | Microsoft Corp. | Automated face enhancement |
KR100813978B1 (ko) * | 2006-02-22 | 2008-03-17 | 삼성전자주식회사 | 멀티미디어 데이터를 기록 및 재생하는 방법 및 장치 |
US20080024389A1 (en) * | 2006-07-27 | 2008-01-31 | O'brien-Strain Eamonn | Generation, transmission, and display of sub-frames |
JP2009064423A (ja) * | 2007-08-10 | 2009-03-26 | Shiseido Co Ltd | Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program |
US20090087114A1 (en) * | 2007-09-28 | 2009-04-02 | Advanced Micro Devices | Response Time Compression Using a Complexity Value of Image Information |
SG178569A1 (en) * | 2009-08-24 | 2012-03-29 | Singapore Health Serv Pte Ltd | A method and system of determining a grade of nuclear cataract |
US8773470B2 (en) * | 2010-05-07 | 2014-07-08 | Apple Inc. | Systems and methods for displaying visual information on a device |
WO2012008229A1 (ja) * | 2010-07-16 | 2012-01-19 | Fujifilm Corporation | Radiographic imaging device, radiographic imaging system, radiographic imaging method, and program |
US8717381B2 * | 2011-01-11 | 2014-05-06 | Apple Inc. | Gesture mapping for image filter input parameters |
KR101223046B1 * | 2011-02-08 | 2013-01-17 | Kyungpook National University Industry-Academic Cooperation Foundation | Image segmentation device and method based on consecutive frame images of a static scene |
WO2013008305A1 (ja) * | 2011-07-11 | 2013-01-17 | Toyota Motor Corporation | Eyelid detection device |
DE102012107954A1 * | 2011-09-02 | 2013-03-07 | Samsung Electronics Co. Ltd. | Display driver, method of operating the same, host for controlling the display driver, and system including the display driver and the host |
SG11201400446WA (en) * | 2011-09-08 | 2014-09-26 | Apn Health Llc | Automatically determining 3d catheter location and orientation using 2d fluoroscopy only |
KR20140099319A * | 2011-12-04 | 2014-08-11 | Digital Makeup Ltd | Digital makeup |
US9118876B2 (en) * | 2012-03-30 | 2015-08-25 | Verizon Patent And Licensing Inc. | Automatic skin tone calibration for camera images |
US20150212694A1 (en) * | 2012-05-02 | 2015-07-30 | Google Inc. | Internet browser zooming |
CN102799367B * | 2012-06-29 | 2015-05-13 | Hongfujin Precision Industry (Shenzhen) Co., Ltd. | Electronic device and touch control method thereof |
US20140153832A1 (en) * | 2012-12-04 | 2014-06-05 | Vivek Kwatra | Facial expression editing in images based on collections of images |
US11228805B2 (en) * | 2013-03-15 | 2022-01-18 | Dish Technologies Llc | Customized commercial metrics and presentation via integrated virtual environment devices |
- 2014-04-16 EP EP14785844.3A patent/EP2988486B1/en active Active
- 2014-04-16 WO PCT/JP2014/002170 patent/WO2014171142A1/ja active Application Filing
- 2014-04-16 US US14/784,743 patent/US9968176B2/en active Active
- 2014-04-16 CN CN201480013933.2A patent/CN105164999B/zh not_active Expired - Fee Related
- 2014-04-16 JP JP2015512315A patent/JP6264665B2/ja active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001357404A * | 2000-06-14 | 2001-12-26 | Minolta Co Ltd | Image extraction device |
JP3984191B2 | 2002-07-08 | 2007-10-03 | Toshiba Corporation | Virtual makeup device and method |
JP2005216131A * | 2004-01-30 | 2005-08-11 | Digital Fashion Ltd | Makeup simulation device, makeup simulation method, and makeup simulation program |
JP2006313223A * | 2005-05-09 | 2006-11-16 | Konica Minolta Photo Imaging Inc | Imaging device |
JP2007049371A | 2005-08-09 | 2007-02-22 | Fujifilm Holdings Corp | Digital camera and captured-image display control method |
JP5191665B2 | 2006-01-17 | 2013-05-08 | Shiseido Co., Ltd. | Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program |
JP2011259243A * | 2010-06-09 | 2011-12-22 | Nintendo Co Ltd | Image processing program, image processing device, image processing system, and image processing method |
Non-Patent Citations (1)
Title |
---|
See also references of EP2988486A4 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016202215A1 (zh) * | 2015-06-19 | 2016-12-22 | Alibaba Group Holding Limited | Method and device for previewing dynamic pictures, and method and device for displaying emoticon packages |
US10650486B2 | 2015-06-19 | 2020-05-12 | Alibaba Group Holding Limited | Previewing dynamic images and expressions |
CN106709400A (zh) * | 2015-11-12 | 2017-05-24 | Alibaba Group Holding Limited | Method, device and client for recognizing the open/closed state of a sensory organ |
CN110050251A (zh) * | 2016-12-06 | 2019-07-23 | Koninklijke Philips N.V. | Displaying a guidance indicator to a user |
JP2020516984A (ja) * | 2016-12-06 | 2020-06-11 | Koninklijke Philips N.V. | Display of a guidance indicator to a user |
JP7121034 | Display of a guidance indicator to a user |
CN110050251B * | 2016-12-06 | 2023-10-03 | Koninklijke Philips N.V. | Displaying a guidance indicator to a user |
JP2021530031A (ja) * | 2018-07-27 | 2021-11-04 | Beijing Microlive Vision Technology Co., Ltd | Face-based special effect generation method, apparatus and electronic device |
US11354825B2 | 2018-07-27 | 2022-06-07 | Beijing Microlive Vision Technology Co., Ltd | Method, apparatus for generating special effect based on face, and electronic device |
JP7286684 | Face-based special effect generation method, apparatus and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN105164999A (zh) | 2015-12-16 |
EP2988486A1 (en) | 2016-02-24 |
EP2988486A4 (en) | 2016-04-20 |
CN105164999B (zh) | 2018-08-10 |
JPWO2014171142A1 (ja) | 2017-02-16 |
US9968176B2 (en) | 2018-05-15 |
EP2988486B1 (en) | 2020-03-11 |
US20160058158A1 (en) | 2016-03-03 |
JP6264665B2 (ja) | 2018-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6264665B2 (ja) | Image processing method and image processing device | |
US9838597B2 (en) | Imaging device, imaging method, and program | |
US20230093612A1 (en) | Touchless photo capture in response to detected hand gestures | |
RU2668408C2 (ru) | Devices, systems, and methods for mirror virtualization | |
KR102407190B1 (ko) | Image capturing apparatus and method of operating the same | |
CN106104650A (zh) | Remote device control via gaze detection | |
JP2006201531A (ja) | Imaging device | |
US11477433B2 (en) | Information processor, information processing method, and program | |
JP2012244196A (ja) | Image processing apparatus and method | |
JP2009086703A (ja) | Image display device, image display method, and image display program | |
US11579693B2 (en) | Systems, methods, and graphical user interfaces for updating display of a device relative to a user's body | |
US20230171484A1 (en) | Devices, methods, and graphical user interfaces for generating and displaying a representation of a user | |
US20150172553A1 (en) | Display device, display method, and computer-readable recording medium | |
JP2014023127A (ja) | Information display device, information display method, control program, and recording medium | |
JP2011152593A (ja) | Robot operating device | |
US9088722B2 (en) | Image processing method, computer-readable recording medium, and image processing apparatus | |
US20240236474A1 (en) | Systems and methods for obtaining a smart panoramic image | |
JP2022120681A (ja) | Image processing device and image processing method | |
US20210400234A1 (en) | Information processing apparatus, information processing method, and program | |
KR20140090538A (ko) | Display apparatus and control method | |
JP6087615B2 (ja) | Image processing device and control method therefor, imaging device, and display device | |
KR20180017897A (ko) | Object extraction method for sticker images and apparatus therefor | |
CN111031250A (zh) | Eye-tracking-based refocusing method and device | |
JP6211139B2 (ja) | Image composition device | |
CN112558767A (zh) | Method and system for processing multiple functional interfaces and AR glasses therefor | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 201480013933.2; Country of ref document: CN |
| DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14785844; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015512315; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 14784743; Country of ref document: US; Ref document number: 2014785844; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |