TW200805175A - Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program - Google Patents

Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program Download PDF

Info

Publication number
TW200805175A
TW200805175A TW96101598A TW96101598A TW200805175A TW 200805175 A TW200805175 A TW 200805175A TW 96101598 A TW96101598 A TW 96101598A TW 96101598 A TW96101598 A TW 96101598A TW 200805175 A TW200805175 A TW 200805175A
Authority
TW
Taiwan
Prior art keywords
image
makeup
face
user
processing
Prior art date
Application number
TW96101598A
Other languages
Chinese (zh)
Other versions
TWI421781B (en
Inventor
Yasuo Goto
Original Assignee
Shiseido Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shiseido Co Ltd filed Critical Shiseido Co Ltd
Publication of TW200805175A publication Critical patent/TW200805175A/en
Application granted granted Critical
Publication of TWI421781B publication Critical patent/TWI421781B/en

Links

Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

To provide a makeup simulation system, a makeup simulation device, a makeup simulation method and a makeup simulation program, accurately achieving makeup for the face of a user included in a dynamic image with small processing load. This makeup simulation system includes: a photographing means 2 for outputting a photographed dynamic image; a control means 8 for image-processing the output dynamic image and outputting the same; and a display means 7 for displaying the dynamic image output from the control means 8, wherein the control means 8 includes: a face recognition processing means for recognizing the face of a user from the dynamic image based on a predetermined tracking point; and a makeup processing means for conducting predetermined makeup for the face of a user included in the dynamic image based on the tracking point and outputting the same to the display means.

Description

200805175 九、發明說明: 【發明所屬之技術領域】 技術領域 本發明係有關於化妝模擬系統、化妝模擬裝置、化妝 模擬方法及化妝模擬程式,且特別是有關於可對動態影像 所含有的使用者臉部進行化妝之化妝模擬系統、化妝模擬 裝置、化妝模擬方法及化妝模擬程式。 I:先前技術2 背景技術 10 過去,為了化妝用的商品販賣等,已知有不須實際進 行化妝而是在電腦上模擬化妝後的臉部之技術(例如參照 專利文獻1)。然而,專利文獻1係以靜止晝面顯示模擬結 果,故,無法輕易地確認化妝後的使用者表情變化時之臉 部。因此,已開發出在可捕捉使用者表情變化之動態影像 15内楔擬化妝之技術(例如參照專利文獻2)。 然而,專利文獻2所揭示之化妝模擬裝置係在使用者表 情變化與口部及兩眼對應之像素領域進行特定,並藉由模 板匹配來追蹤該像素領域,而算出進行臉部化妝之化妝領 域(例如參照段落〔 0028〕)。如此,在使用者表情變化與口 2〇邛及兩眼對應之像素領域進行追縱,會造成電腦處理上的 負擔大、難以於閉目時等情形進行正確的對應處理之問題。 【專利文獻1】日本專利公開公報第20(^-346627號 【專利文獻2】日本專利公開公報第2〇〇3_44837號 【明内】 5 200805175 、 發明欲解決之問題 本發明係有鑒於前述問題點而作成者,且其目的在於 提供可於處理負擔小的情形下,正確地對動態影像所含有 的使用者臉部進行化妝之化妝模擬系統、化妝模擬裝置、 5化妝模擬方法及化妝模擬程式。 解決問題之手段 用以解決前述問題,本發明係提供一種可對使用者臉 部拍攝後的動態影像進行化妝之化妝模擬系統,包含有: 攝影機構,係可拍攝使用者臉部並輸出動態影像者;控制 10機構,係可接收從前述攝影機構輸出之動態影像,並對前 述動態影像進行影像處理且輸出者;及顯示機構,係可顯 示從前述控制機構輸出之動態影像者,又,前述控制機構 更包3有·臉部辨識處理機構,係根據預定追蹤點從前述 動態影像辨識使用者臉部者;及化妝處理機構,係根據前 I5述追蹤點對前述動態影像所含有的使用者臉部進行預定化 妝,並輸出至前述顯示機構者。 本發明係設有:臉部辨識處理機構,係根據預定追蹤 點從動態影像辨識使用者臉部者;及化妝處理機構,係根 據追蹤點對動態影像所含有的使用者臉部進行預定化妝, 20並輪出至顯示機構者,藉此,可於處理負擔小的情形下, 根據預定追蹤點從動態影像辨識使用者臉部,並根據該追 蹤點正確地對動態影像所含有的使用者臉部進行化妝。 又,將本發明之構成要件、表現或構成要件之任意组 合適用於方法、裝置、系統、電腦程式、記錄媒體、諸 6 200805175 構造等當中者亦可作為本發明的有效型態。 發明之效果 根據本發明,可提供可於處理負擔小的情形下,正確 地對動態影像所含有的使用者臉部進行化妝之化妝模擬系 5統、化妝模擬裝置、化妝模擬方法及化妝模擬程式。 圖式之簡單說明 第1圖係本發明化妝模擬裝置的第一實施例之外觀圖。 第2圖係化妝模擬裝置的第一實施例之截面圖。 第3圖係本發明化妝模擬裝置的第二實施例之外觀圖。 10 第4圖係化妝模擬裝置的第二實施例之截面圖。 第5圖係本發明化妝模擬裝置的第三實施例之外觀圖。 第6圖係化妝模擬裝置的第三實施例之截面圖。 第7圖係本發明化妝模擬襞置的第四實施例之外觀圖。 第8圖係化妝模擬裝置的第四實施例之截面圖。 15 第9圖係本發明化妝模擬裝置的第五實施例之外觀圖。 第10圖係化妝模擬裝置的第五實施例之截面圖。 第11圖係化妝模擬裝置的第五實施例之硬體構造圖。 第12圖係顯示化妝模擬裝置所進行的處理概要之流程 圖。 20 f 13圖係顯示於顯示器之主晝面及顯示於操作面板之 操作晝面的一例之影像圖。 第14圖係顯示化妝模擬裝置進行化妝模擬以外的處理 之影像圖。 第15圖係本發明化妝模擬系統的其中一實施例之系統 7 200805175 構造圖。 第16圖係顯示模擬主應用所進行的處理之晝面影像 圖。 第17圖係化妝前影像所含有的使用者臉部之影像晝 5 面。 “ 第18圖係顯示追蹤點的一例之影像圖。 第19圖係化妝處理參數檔的一例之構成圖。 第20圖係顯示口紅處理所參照的追蹤點之影像圖。 馨 第21圖係顯示口紅處理的一例之流程圖。 10 第2 2圖係顯示輪廓取得處理的一例之影像圖。 第23圖係輪廓取得用影像600之唇部8點及鼻子3點之 追蹤點,與以唇部8點及鼻子3點之追蹤點為基礎進行再搜 尋而取得的輪廓取得用影像601之點的比較晝面。 第24圖係顯示從唇部8點及鼻子3點之追蹤點求得預設 15 點的處理之一例之影像圖。 第25圖係顯示根據點或預設點完成輪廓之處理之影像 ⑩ 圖。 第26圖係顯示上色圖產生處理的一例之影像圖。 ^ 第27圖係顯示根據上色圖的上色處理之一例之影像 V 20 圖。 第28圖係顯示眼影處理所參照的追蹤點之影像圖。 第29圖係顯示眼影處理的一例之流程圖。 第30圖係顯示基本輪廓產生處理的一例之影像圖。 第31圖係顯示上色輪廓產生處理的一例之影像圖。 8 200805175 處理之係顯7F未進行淡化處理之上色W '及進行淡化 ^色圖的一例之影像圖。 弟 3 3 圖 二 D. # “.、、、員不臉頰處理所參照的追蹤點之影像圖。 =34圖係顯示臉頰處理的一例之流程圖。 ^35圖係顯不上色輪廓的-例之影像圖。 :36圖係顯示眉處理所參照的追蹤點之影像圖。 :37圖係顯示眉處理的一例之流程圖。200805175 IX. DESCRIPTION OF THE INVENTION: TECHNICAL FIELD The present invention relates to a makeup simulation system, a makeup simulation device, a makeup simulation method, and a makeup simulation program, and more particularly to a user who can be included in a motion picture. Make-up makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program for the face. I. Prior Art 2 Background Art In the past, a technique for simulating a face after makeup on a computer without actually applying makeup has been known (for example, see Patent Document 1). However, in Patent Document 1, the simulation results are displayed on a stationary side, and therefore, it is not possible to easily confirm the face when the user's expression after makeup is changed. Therefore, techniques for wedge-preparing in a moving image 15 capable of capturing changes in user expression have been developed (for example, refer to Patent Document 2). However, the makeup simulation device disclosed in Patent Document 2 is specified in the pixel field corresponding to the user's expression change and the mouth portion and the two eyes, and the pixel field is tracked by template matching, and the makeup field for performing face makeup is calculated. (For example, refer to paragraph [0028]). In this way, when the user's expression changes and the pixel area corresponding to the two eyes and the two eyes are tracked, there is a problem that the computer processing burden is large and it is difficult to perform correct correspondence processing when the eyes are closed. [Patent Document 1] Japanese Patent Laid-Open Publication No. Hei. No. Hei. No. Hei. No. Hei. No. Hei. No. Hei. The purpose of the present invention is to provide a makeup simulation system, a makeup simulation device, a makeup simulation method, and a makeup simulation program that can accurately apply makeup to a user's face contained in a moving image when the processing load is small. Means for Solving the Problems In order to solve the foregoing problems, the present invention provides a makeup simulation system capable of applying makeup to a moving image after a user's face is photographed, including: a photographing mechanism that can photograph a user's face and output dynamics. The image controller; the control 10 mechanism, which can receive the dynamic image outputted from the camera mechanism, and perform image processing on the motion image and output the image; and the display mechanism can display the motion image outputted from the control mechanism, and The foregoing control mechanism further includes a face recognition processing mechanism, which is based on the predetermined tracking point from the foregoing dynamic And a makeup processing mechanism that performs predetermined makeup on the face of the user included in the moving image according to the tracking point in the above I5, and outputs the result to the display mechanism. The present invention is provided with: a face The part identification processing mechanism identifies the user's face from the motion image according to the predetermined tracking point; and the makeup processing mechanism performs predetermined makeup on the user's face contained in the motion image according to the tracking point, 20 and rotates to the display mechanism Therefore, in a case where the processing load is small, the user's face is recognized from the moving image based on the predetermined tracking point, and the user's face included in the motion image is correctly applied according to the tracking point. Any combination of the constituent elements, performances, or constituent elements of the present invention can be applied to a method, an apparatus, a system, a computer program, a recording medium, and the like. The structure of the present invention can also be used as an effective form of the present invention. It can provide the makeup of the user's face contained in the motion image correctly under the condition that the processing load is small. The makeup simulation system 5, the makeup simulation device, the makeup simulation method, and the makeup simulation program. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is an external view of a first embodiment of the makeup simulation device of the present invention. Fig. 3 is an external view of a second embodiment of the makeup simulation device of the present invention. Fig. 4 is a cross-sectional view showing a second embodiment of the makeup simulation device. Fig. 5 is a makeup simulation of the present invention. Fig. 6 is a cross-sectional view showing a third embodiment of the makeup simulation device. Fig. 7 is an external view showing a fourth embodiment of the makeup simulation device of the present invention. A cross-sectional view of a fourth embodiment of the simulation apparatus. Fig. 9 is an external view of a fifth embodiment of the makeup simulation apparatus of the present invention. Fig. 10 is a cross-sectional view showing a fifth embodiment of the makeup simulation apparatus. Fig. 11 is a view showing a hardware configuration of a fifth embodiment of the makeup simulation device. Fig. 12 is a flow chart showing an outline of processing performed by the makeup simulation device. The 20 f 13 image is an image of an example of the main surface of the display and an operation panel displayed on the operation panel. Fig. 14 is a view showing an image of a process other than makeup simulation performed by the makeup simulation device. Figure 15 is a system 7 of one embodiment of the makeup simulation system of the present invention. Figure 16 shows a topographical image of the processing performed by the simulated main application. Figure 17 shows the image of the user's face contained in the pre-makeup image. Fig. 18 is an image diagram showing an example of a tracking point. Fig. 19 is a configuration diagram of an example of a makeup processing parameter file. Fig. 20 is an image diagram showing a tracking point referred to by lipstick processing. Flowchart of an example of the lipstick processing. 10 Fig. 2 is an image diagram showing an example of the contour acquisition processing. Fig. 23 is a tracking point of the lip portion 8 point and the nose 3 point of the contour acquisition image 600, and the lip portion The comparison of the points of the contour acquisition image 601 obtained by re-searching at 8 o'clock and the nose 3 o'clock tracking point. Fig. 24 shows the preset from the tracking point of 8 points of the lip and 3 points of the nose. The image of one of the 15 points is processed. Fig. 25 is an image showing the image of the contour processed according to the point or the preset point. Fig. 26 is an image showing an example of the processing of the coloring map. ^ Figure 27 An image V 20 image showing an example of coloring processing according to a color map is shown. Fig. 28 is a view showing an image of a tracking point referred to by eye shadow processing. Fig. 29 is a flowchart showing an example of eye shadow processing. Display basic contour generation processing Fig. 31 is an image diagram showing an example of the process of generating a color outline. 8 200805175 The image of the process is shown in Fig. 7F, which is not subjected to the desalination process, and an image of an example of the faded color map. 3 3 Figure 2 D. # ".,,, and the image of the tracking point that the staff does not refer to. The =34 figure shows a flow chart of an example of cheek processing. The ^35 image shows an image of the outline that is not colored. The :36 figure shows the image of the tracking point referenced by the eyebrow processing. The :37 figure shows a flow chart of an example of eyebrow processing.

10 15 18圖係顯示眉輪廓取得處理的一例之影像圖。 弟39圖係顯示與變化指定對應之指示曲線製成處理的 一例之影像圖。 f40圖係顯示粉底處理所參照的追蹤點之影像圖。 第1圖係顯示粉底處理的一例之流程圖。 第42圖係顯示輪廓的一例之影像畫面。 皆Λ 圖係顯示淡化處理前的原本影像、及淡化處理後 的原本影像之-例之影像圖。10 15 18 The image shows an image of an example of the eyebrow contour acquisition process. The figure 39 shows an image of an example of the process of creating an indication curve corresponding to the change designation. The f40 image shows an image of the tracking point referenced by the foundation treatment. Fig. 1 is a flow chart showing an example of foundation treatment. Fig. 42 is an image screen showing an example of a contour. The image shows the original image before the desalination process and the image of the original image after the desalination process.

【貧施方式】 實施發明之最佳型態 接著,根據以下實施例並參照圖式逐一說明用以實施 本發明之最佳型態。第1圖係本發明化妝模擬裝置的第一實 20施例之外觀圖。化妝模擬裝置1包含有照相機2、操作面板 4、列印機5、照明6及顯示器7。 照相機2可拍攝站在化妝模擬裝置丨前方之使用者並輸 出動態影像。操作面板4可顯示操作影像,並在接收使用者 之操作後輸出操作資訊。列印機5可印刷顯示於顯示器之影 9 200805175 像(如化妝後的影像畫面等)及資訊(如用以化妝成影像晝面 之商品資訊等)。照明6可在如模擬開始之後調整光度。 第2圖係化妝模擬裝置的第一實施例之截面圖。化妝模 擬衣置1包含有照相機2、操作面板4、列印機5、顯示器7及 5 控制部8。 照相機2可將拍攝的動態影像輸出至控制部8。操作面 板4可顯示從控制部8輸出的操作晝面,並在接收使用者之 #作後將操作資訊輸出至控制部8。歹,j印機5可印刷從控制 部8輸出之影像及資訊。顯示器7可顯示從控制部8輸 出之動 態影像(主影像)。控制部8可接收從照相機2輸出之動態影 像,並藉由對該動態影像進行後述之影像處理 ,來對動態 影像所含有的使用者臉部進行化妝並輸出至顯示器7。 第1圖及第2圖之化妝模擬裳置1係以化妝時不可或缺 的項目之鏡部為概念,而具有與使用者間的互動之機能。 15即,化_«置丨具有可讓制者錢宛如照鏡子般地自 然化妝之特徵。 化妝杈擬裝置1係藉控制部8對照相機2輸出之動態影 像等進行衫像處理,且對動態影像所含有的使用者臉部進 订化妝H於作為數位式鏡部之顯示器7。化妝模擬裝 2〇置1可將各種商品資訊、美容資訊及使用者臉部進行化妝後 的影像晝面顯示於顯示器7。 又第3圖係、本發明化妝模擬裝i的第二實施例之外觀 圖又相同於第1圖之部分係附上相同的元件符號。 第3圖之化妝板擬裝置w含有操作面板4、列印機5、 200805175BEST MODE FOR CARRYING OUT THE INVENTION Next, the best mode for carrying out the invention will be described one by one according to the following examples and with reference to the drawings. Fig. 1 is an external view showing a first embodiment of the makeup simulation device of the present invention. The makeup simulation device 1 includes a camera 2, an operation panel 4, a printer 5, an illumination 6, and a display 7. The camera 2 can photograph a user standing in front of the makeup simulation device and output a moving image. The operation panel 4 can display an operation image and output operation information after receiving the user's operation. The printer 5 can print the image displayed on the display. 9 200805175 Image (such as image after makeup) and information (such as product information used to make up the image). Illumination 6 can adjust the luminosity as soon as the simulation begins. Fig. 2 is a cross-sectional view showing a first embodiment of the makeup simulation device. The makeup set 1 includes a camera 2, an operation panel 4, a printer 5, displays 7, and a control unit 8. The camera 2 can output the captured motion picture to the control unit 8. The operation panel 4 can display the operation surface outputted from the control unit 8, and output the operation information to the control unit 8 after receiving the user's operation. Then, the printer 5 can print images and information output from the control unit 8. The display 7 can display a dynamic image (main image) output from the control unit 8. The control unit 8 can receive the moving image output from the camera 2, and perform image processing to be described later on the moving image to apply makeup to the user's face included in the moving image and output it to the display 7. The makeup simulation set 1 of Fig. 1 and Fig. 2 is a concept of a mirror portion of an item that is indispensable for makeup, and has a function of interaction with a user. 15 means that the _« 丨 has the characteristics that the maker's money is naturally dressed like a mirror. In the makeup simulation device 1, the control unit 8 performs a shirt image processing on a moving image or the like output from the camera 2, and applies a makeup H to the user's face included in the moving image to the display 7 as a digital mirror unit. The makeup simulation device 2 can display various product information, beauty information, and the image of the user's face after makeup on the display 7. Further, the third embodiment of the second embodiment of the present invention is the same as that of the first embodiment. The makeup board w device of FIG. 3 includes an operation panel 4, a printer 5, 200805175

侧之光,並透射過來自化妝模擬裝置丨内侧之光。The side light is transmitted through the light from the inside of the makeup simulation device.

。透明板9可騎過來自化妝模《置伙 半鏡(半透明鏡)3可反射騎光,並透射過照射光的一 部份。照相機2係設置在可透過半鏡3、透明板9拍攝站在化 _ ㈣莫擬裝置1前方的使用者之位置上,且照相機2設在使用 10者之視線高度。照相機2可透過半鏡3、透明板9拍攝站在化 妝模擬裝置1前方的使用者,並輸出動態影像。 顯示器7係設置在可透過半鏡3、透明板9看見站在化妝 模擬裝置1前方的使用者之位置上。從顯示器7輪出之光係 藉由半鏡3反射,並透過透明板9輸出至化妝模擬裝置^^卜 U側。因此,使用者可從化妝模擬裝置〇卜側看見顯雜顯示 器7之動態影像。 * 第351及第4圖之化妝模擬裝置1係以化妝時不可或缺 的項目之鏡部為概念,而具有與使用者間的互動之機能。 即,化妝模擬裝置1具有可讓使用者感覺宛如照鏡子般地自 20 然化妝之特徵。 化妝模擬裝置1係將照相機2設在視線高度之位置, 故,比起實施例1之照相機2位置,可更加自然地拍攝站在 化妝模擬裝置1前方的使用者之臉部。 化妝模擬裝置1係藉控制部8對照相機2輪出之動態影 11 200805175 像等進行影像處理,且對動態影像所含有的使用者臉部進 行化妝,並齡於作為數位式鏡部之齡器7。化妝模擬裝 置1可將各種商品㈣、美容資訊及使用者臉部進行化妝後 的影像畫面顯示於顯示器7。 5 又,第5圖係本發明化妝模擬裝置的第三實施例之外觀 圖。又,相同於前圖之部分係附上相同的元件符號。第5圖 之化妝模擬裝置1包含有照相機2、半鏡3、操作面板4、列 印機5及照明6。 半鏡3可透射過來自光亮處的光之一部份(如5〇%),且 10可將其餘者(如50%)反射至其正前方。由於黑暗處無光,故 半鏡3既不會透射也不會反射來自黑暗處之光。 照明6係在化妝模擬開始之後,調整光度使半鏡3的顯 不器7侧變亮。因此,設在顯示器7的顯示方向側之半鏡3係 在化妝模擬開始之前,反射來自使用者側(化妝模擬裝置1 15 外侧)之光而發揮作為鏡部之機能。 半鏡3係在化妝模擬開始之後,透射過來自顯示器7側 (化妝模擬裝置1内側)之光而發揮作為玻璃之機能。因此, 使用者可透過半鏡3看見顯示於顯示器7之動態影像。 第6圖係化妝模擬裝置的第三實施例之截面圖。第6圖 2〇之化妝模擬裝置1包含有照相機2'半鏡3、操作面板4、列 印機5、顯不器7、控制部8。又,相同於前圖之部分係附上 相同的元件符號。顯示於顯示器7之動態影像係透射過發揮 作為玻璃機能之半鏡3,且使用者可從化妝模擬裝 置1外侧 看見顯示於顯示器7之動態影像。 12 200805175 第5圖及第6圖之化妝模擬裝置丨係以化妝時不可或缺 的項目之鏡部為概念,而具有與使用者間的互動之機能。 即’化妝模擬裝置丨具有可讓使用者感覺宛如照鏡子般地自 然化妝之特徵。 5 化妝模擬裝置1係藉控制部8對照相機2輸出之動態影 像等進行影像處理,且對動態影像所含有的使用者臉部進 行化妝,並顯示於作為數位式鏡部之顯示器7。化妝模擬裝 置1可將各種商品資訊、美容資訊及使用者臉部進行化妝後 的影像晝面顯示於顯示器7。 1〇 又,第7圖係本發明化妝模擬裝置的第四實施例之外觀 圖。第8圖係化妝模擬裝置的第四實施例之截面圖。又,相 同於前圖之部分係附上相同的元件符號。化妝模擬裝置!包 含有照相機2、列印機5、照明6、觸碰式面板顯示器15。 第7圖及第8圖之化妝模擬裝置1係在以觸碰式面板顯 15示器15代替操作面板4及顯示器7之方面,與第1圖及第2圖 之化妝模擬裝置1不同。觸碰式面板顯示器15可發揮作為操 作面板4及顯示器7之機能。 觸碰式面板顯示器15可顯示操作影像,並在接收使用 者之操作後輸出操作資訊。觸碰式面板顯示器15可顯示從 20控制部8輸出之操作影像,並在接收使用者之操作後將操作 資訊輸出至控制部8。觸碰式面板顯示器15可顯示從控制部 8輸出之動態影像(主影像)。控制部8可接收從照相機2輸出 之動態影像,並藉由對該動態影像進行後述之影像處理, 來對動態影像所含有的使用者臉部進行化妝並輸出至觸碰 13 200805175 式面板顯示器15。 第7圖及第8圖之化妝模擬裝置g以化妝時不可或缺 的項目之鏡部為概念,而具有與使用者間的互動之機能。 即’化妝輪裝置1財可讓使用者錢宛如照鏡子般地自 5 然化妝之特徵。 …化妝杈擬裝置1係藉控制部8對照相機2輸出之動態影 像等進仃影像處理,且對動態影像所含有的使用者臉部進 灯化妝,並顯示於作為數位式鏡部之觸碰式面板顯示器 15。化妝模擬裝置1可將各種商品資訊、美容資訊及使用者 10臉部進行化妝後的影像畫面顯示於觸碰式面板顯示器15。 如第9圖及第10圖所示,第四實施例之化妝模擬裝置丄 亦可設有如陳列有供使用者試用的測試用商品(以下,稱為 測試品)之陳列箱,並作為自售用框體利用。第9圖係本發 明化妝模擬裝置的第五實施例之外觀圖。第1〇圖係本發明 15化妝模擬農置的第五實施例之截面圖。又,與前圖相同之 部分係附上相同的元件符號。 化妝模擬裝置1包含有照相機2、列印機5、照明6、觸 碰式面板顯示器15、陳列箱16及1C標籤讀寫器17。第9圖及 第10圖之化妝模擬裝置1係在含有陳列箱16及1C標籤讀寫 20 器17方面,與第7圖及第8圖之化妝模擬裝置1不同。 陳列箱16係陳列有多數測試品者。又,測試品附有 IC(RFID)標籤。測試品所添附的1C標籤儲存有可識別測試 品之識別資訊。藉由使用者從陳列箱16取出測試品並使之 接近1C標籤讀寫器17,1C標籤讀寫器17即可根據1C標籤讀 14 200805175 取測試品之識別資訊。 1C標籤讀寫器17可將讀取自1(:標籤之測試品的識別資 訊發送至控制部8。控制部8可接收從照相機2輪出之動態影 像,並藉由對該動態影像進行後述之影像處理,且使用與 5項取自標籤的識別資訊對應之測試品,將動態影像所含 有的使用者臉部進行化妝後之影像晝面輸出至觸碰式面板 顯示器15。 又’使測試品與識別資訊對應之對應表可設置於化妝 模擬裝置1,亦可設置於可透過網路從化妝模擬裝置i取得 10 之其他裝置。 又弟9圖及第10圖之化妝模擬裝置1係顯示在識別測 試品方面使用1C標籤之例,但亦可使用條碼、二元編碼、 標號(label)等。再者,第9圖及第1〇圖之化妝模擬裝置i亦可 事先於陳列箱16設置可辨識取出的測試品之結構(如利用 15光感測器等感測器之位置辨識等),涵陳列箱16通知控制 部8。 第9圖及第1〇圖之化妝模擬裝置i係以化妝時不可或缺 的項目之鏡部為概念,而具有與使用者間的互動之機能。 即’化妝模擬裝置1具有可讓使用者感覺宛如照鏡子般地自 20 然化妝之特徵。 化妝模擬裝置丨係藉控制部8對從照相機2輸出之動態 〜像等進行影像處理,並將使用者使用從陳列箱16選擇的 測忒πα,於動悲影像所含有的使用者臉部進行化妝之影像 晝面’顯不於作為數位式鏡部之觸碰式面板顯示器15。化 15 200805175 妝模擬裝置1可將各種商品資訊、美容資訊及使用者臉部進 行化妝後的影像晝面顯示於觸碰式面板顯示器15。藉由使 用者取得從陳列箱16選擇的測試品之標籤,化妝模擬裝置1 即可取得使用者的嗜好等資料。 5 又,若是設有可陳列如測試品加上商品的架子之化妝 模擬裝置1的話,則可藉由事先於架子陳列顯示於觸碰式面 板顯示1§ 15之商品,來有效地作為自售用框體利用。 以下,以第一實施例之化妝模擬裝置丨為例進行說明。 第11圖係化妝模擬裝置的其中一實施例之硬體構造圖。 10又,相同於第1圖及第2圖之部分係附上相同的元件符號。 第11圖之化妝模擬裝置1包含有分別透過匯流排B相互 連接之照相機2、操作面板4、列印機5、顯示器7、運算處 理裝置10、記憶裝置11、驅動裝置12及輔助記憶裝置13。 又,第11圖之運算處理裝置10、記憶裝置11、驅動裝置12 15及輔助記憶裝置13係構成第2圖之控制部8。 本發明之化妝模擬程式係可控制化妝模擬裝置丨的各 種程式之其中至少一部份。化妝模擬程式係藉由如記錄媒 體14之發送而提供者。 又’ 錄有化妝模擬程式之記錄媒體可使用如 20 CD-ROM、軟碟、光碟等可光學性、電性或磁性記錄資訊 之記錄媒體;及如ROM、快閃記憶體等可電性記錄資訊之 半導體記憶體等各類記錄媒體。 又,當記錄有化妝模擬程式之記錄媒體14設置於驅動 裝置12時,化妝模擬程式即可透過驅動襞置12從記錄媒體 16 200805175 i4安裝至_記憶裝—。卿記職置i何儲存安裝後 的化妝模擬程式,並儲存所需播案、資料等。記憶裝置η 可在啟動時從輔助記憶裝置13讀出化妝模擬程式並且儲 存。且,運算處理裝置10可依照儲存於記憶裝置η之化妝 5模擬程式,實現後述各種處理。 第12圖係顯示化妝模擬裝置所進行的處理概要之其中 一實施例之流程圖。第13圖係顯示於顯示器之主晝面及顯 示於^木作面板之操作畫面的一例之影像圖。第13圖之晝面 衫像1〇〇〜ill係顯示於顯示器7之主畫面。畫面影像2〇〇〜幻〇 10係顯示於操作面板4之操作畫面。 控制部8可持續接收照相機2所拍攝的動態影像。此 時,控制部8於顯示器、7顯示畫面影像1〇〇,並於操作面板4 顯示晝面影像200。畫面影像100及2〇〇係顯示螢幕保護器之 例。 15 進行步驟81 ’控制部8繼續判定所接收的動態影像是否 含有使用者臉部,並在控制部8辨識動態影像含有使用者臉 部之前重複進行步驟S1之處理(在S1為No)。 當辨識動態影像含有使用者臉部時(在81為丫以),控制 部8繼續步驟S2,並啟動含有化妝模擬進行用的化妝模擬程 2〇式之軟體。此時,控制部8於顯示器7顯示畫面影像101,並 於操作面板4顯示晝面影像201。 晝面影像101係顯示以照相機2拍攝使用者臉部的動態 象之例。晝面影像201係顯示橫向移動的歡迎註解之例。 繼續步驟S3,控制部8根據在步驟S2啟動的軟體開始進 17 200805175 行化妝模擬。此時,控制部8於顯示器7顯示畫面影像1〇2, 並於操作面板4顯示晝面影像202。畫面影像1〇2係顯示於照 相機2拍攝使用者臉部後的動態影像進行魔法鏡演出之 例。畫面影像202係與晝面影像201同樣顯示橫向移動的歡 5 迎註解之例。 繼續步驟S4,控制部8如後所述地進行化妝模擬。此 %,控制部8依序於頒示器7顯示畫面影像1Q3〜ι〇6,並於操 作面板4顯示晝面影像203〜206。 晝面影像103〜106係顯示藉由化妝模擬進行4種化妝類 1〇型(影像)的使用者臉部之例。畫面影像203〜206係顯示此時 顯現於顯示器7的晝面影像103〜106之化妝類型内容(如名 稱等)之例。在經過預定時間、或使用者碰觸操作面板4之 刚,控制部8會於顯示器7依序顯示畫面影像1〇3〜1〇6,並於 操作面板4顯示畫面影像203〜206。 15 當經過預定時間、或使用者碰觸操作面板4時,控制部 8便繼續步驟S5,並於顯示器7顯示畫面影像1〇7、於操作面 板4顯示晝面影像207。畫面影像1〇7係與晝面影像1〇1同樣 "員示以知相機2拍攝使用者臉部的動態影像之例。畫面影像 207係顯示可從4種化妝類型(影像)中選擇丨種影像之影像選 擇旦面之例。使用者可藉由操作該操作面板4,從影像選擇 畫面中選擇1種影像。 繼續步驟S6,控制部8在從影像選擇畫面中選擇〗種影 像之前重複進行步驟S6之處理(在弘為^^)。當使用者從影 像選擇晝面中選擇1種影像時,操作面板4即接收使用者之 18 200805175 才呆作並將操作資訊輸出至控制部8。 當判定使用者從影像選擇晝面中選擇1種影像時(在S6 為Yes),控制部8即繼續步驟S7,並將所選擇的影像之影像 畫面顯示於顯示器7及操作面板4。此時,控制部8係於顯系 5裔7鮮員不晝面影像108,並於操作面板4顯示晝面影像。 晝面影像108係根據從影像選擇畫面中選擇的1種影 像’依序顯不進行不同色彩類型化妝後的使用者臉部之影 像畫面、及用以化妝成影像晝面之商品資訊之例。畫面影 像208係_碰影像選擇畫面中選擇的丨種影像内容、及用 !〇以化妝成此時顯示於顯示器7的晝面影像1〇8之商品資訊之 例。 又,使用者亦可藉由操作該操作面板4而指示印出。當 接收到使用者之印出指不時,控制部8即於顯示器7顯示畫 面衫像109,亚於钿作面板4顯示晝面影像2〇9。晝面影像 15係顯示印出的影像畫面之例。晝面影像2〇9係顯示印出者中 的讀之例㈣8可控制列印機5並將顯示於顯示器7之 影像晝面印出。 又,使用者亦可藉由操作該操作面板4 ,指示由化妝前 及化妝後的影像畫面構成之比較畫面的顯示及印出。當接 20收到使用者之比車乂旦面的顯示指示時,控制部8即於顯示器 7顯示畫面影像U0。晝面影像㈣係顯示由化妝前及化妝後 的影像畫面構成之比較畫面之例。在顯示比較畫面之狀態 下接收使用者之印出指示時,控制部8即控制列印機5並將 顯示於顯示器7之比較晝面印出。 19 200805175 田使用者之化妝模擬結束時,控制部8即於顯示器7及 刼作面板4顯示作為螢幕保護器之晝面影像111及210,並結 束處理。 又,第12圖及第13圖之例係顯示藉由化妝模擬之4種影 5像,但亦可顯示4種影像以外之影像。又,化妝模擬裝t 係顯示從照相機2輸出的動態影像所含有的使用者臉部進 仃化妝之例,但亦可將事先拍攝的動態影像作為動態影像 播預先記憶於輔助記憶裝置13等,再對該動態影像樓所含 有的使用者臉部進行化妝。 1〇 化妝模擬裝置1亦可將照相機2輸出的動態影像利用在 化妝模擬以外之方面。第14圖係顯示化妝模擬裝置進行化 妝模擬以外的處理之影像圖。 晝面影像300係顯示照相機2拍攝的使用者臉部之動態 影像之例。化妝模擬裳置1可辨識動態影像所含有的使用者 15臉部,並切出使用者臉部之靜止畫面3(U。 化妝模擬裝置1可藉由臉縣析邏輯及肌膚分析邏輯 來進行靜止晝面3〇1之臉型診斷及膚色診斷,並將顯示該結 果之畫面影像302顯示於顯示器7。又,藉由臉型分析邏輯 ,肌膚分析邏輯來進行靜止畫面3G1之臉型診斷及膚色診 20斷之處理,得、已揭示於如日本專利公開公報第麵以㈣ 號之周知技術。 又,化妝模擬裝置1亦可於顯示器7顯示如晝面影像3〇3 之課程選擇畫面303,並讓使用者選擇各種課程(如潮流、 基礎、自由)。化妝模擬襄置!係根據使用者選擇之課程將 20 200805175 畫面影像304〜309顯示於顯示器7,並進行模擬及建議。 畫面影像304〜306係顯示各課程的模擬畫面之例。畫面 影像307〜309係顯示各課程的建議畫面之例。 例如,基礎課程表示根據臉型診斷及膚色診斷結果, 5模擬及建議最適當之化妝技術。又,潮流課程表示最新潮 流化妝之模擬及建議。此外,自由化妝課程表示與眼睛、 嘴巴、臉頰、眉毛各個部位對應之項目的模擬及建議。 在顯示模擬晝面或建議畫面之狀態下接收到使用者之 印出指示時,控制部8亦可控制列印機5將顯示於顯示器7之 10 模擬畫面或建議晝面印出。 接著’詳細說明用以實現前述化妝模擬裝置丨之化妝模 擬系統。第15圖係本發明化妝模擬系統之其中一實施例之 系統構造圖。 第15圖之化妝模擬系統2〇包含有類比照相機21、υ$Β 15擷取元件22、動態影像檔23、化妝照相機24、靜止畫面系 統25、動悲影像檔26、共用記憶體27、模擬器主應用程式 28及介面應用程式29。 類比照相機21係經由USB擷取元件22輸出拍攝的例如 NTSC方式之動態影像。從USB擷取元件22輸出之動態影像 20係利用如作為API(應用程式介面)一例之胸㈣i而輸入 至模擬态主應用程式28。又,動態影像檔23亦可利用 DirectX31而輸入至模擬器主應用程式。 化妝照相機24係經由作為串列介面的一例之mEE1394 而輸出拍攝的動態影像。從化妝照相機24輸出之動態影像 21 200805175 係利用專用API32而輸入至模擬器主應用程式28,且模擬器 主應用程式28可從利料用APm輸人之㈣影像得到動 態影像用解析度之原本影像及靜止畫面用解析度之靜止影 5 &擬$主應帛程^ 28細彻Dii^ctX31輸人之動態影 像與動態影像槽23、及得自於利用專用Api32輸入的動態影 像之動態影像用解析度的動態影像作為原本影像,並對該 原本影像進行修整(trimming)及縮小處理。 模擬為主應用程式28係對原本影像進行修整並得到化 10妝丽影像。此外,模擬器主應用程式28係對原本影像進行 縮小處理並得到臉部辨識處理用影像。臉部辨識處理部% 係藉由FFT(咼速傅立葉變換)而從臉部辨識處理用影像得 到後述用以辨識使用者臉部之追蹤點34。 化妝處理部35係根據追蹤點34而得到化妝前影像所含 15有的使用者臉部進行粉底、眉毛、眼影、口紅,、臉頰等化 妝後之化妝後影像。化妝處理部35包含有粉底處理部36、 眉處理部37、眼影處理部38、口紅處理部39、臉頰處理部 40 ° 又,化妝處理部35可使化妝後影像含有用以化妝成化 20妝後影像之商品資訊41。動態影像伺服器42可將化妝前影 像及化妝後影像寫入共用記憶體27,且亦可將化妝前影像 及化妝後影像作為動態影像檔26輸出。 介面應用程式29係利用ActiveX控制器50及ActiveX viewer51,且包含有動態影像控制物件52、動態影像顯示物 22 200805175 件53及其他控制器54。又,介面應用程式29與模擬器主應 用程式28係藉由ActiveX&Socket而達成連結。 介面應用程式29可利用ActiveX viewer51,將寫入共用 記憶體27之化妝前影像及化妝後影像顯示於前述顯示器7 5 中。. The transparent plate 9 can be worn over a portion of the illuminating light from the makeup mold "semi-mirror (transparent mirror) 3 to reflect the ride light. The camera 2 is disposed at a position where the user can be photographed in front of the _ (4) imaginary device 1 through the half mirror 3 and the transparent plate 9, and the camera 2 is set at a line of sight height of 10 members. The camera 2 can photograph a user standing in front of the makeup simulation device 1 through the half mirror 3 and the transparent plate 9, and output a moving image. The display 7 is disposed at a position where the user standing in front of the makeup simulation device 1 can be seen through the half mirror 3 and the transparent plate 9. The light that is emitted from the display 7 is reflected by the half mirror 3 and is output through the transparent plate 9 to the side of the makeup simulation device. Therefore, the user can see the motion picture of the display display 7 from the side of the makeup simulation device. * The makeup simulation device 1 of Figs. 351 and 4 is a concept of a mirror portion of an item that is indispensable for makeup, and has a function of interaction with a user. That is, the makeup simulation device 1 has a feature that allows the user to feel like a mirror-like makeup. The makeup simulation device 1 sets the camera 2 at the position of the line of sight height, so that the face of the user standing in front of the makeup simulation device 1 can be more naturally photographed than the position of the camera 2 of the first embodiment. The makeup simulation device 1 performs image processing on the moving image 11 200805175 image that is rotated by the camera 2 by the control unit 8, and applies makeup to the user's face included in the moving image, and is aged as a digital mirror. 7. The makeup simulation device 1 can display various products (4), beauty information, and an image screen after makeup on the user's face on the display 7. Further, Fig. 5 is an external view showing a third embodiment of the makeup simulation device of the present invention. Also, the same components are attached to the same parts as in the previous figures. The makeup simulation device 1 of Fig. 5 includes a camera 2, a half mirror 3, an operation panel 4, a printer 5, and an illumination 6. The half mirror 3 can transmit a portion of the light from the light (e.g., 5 〇%), and 10 can reflect the rest (e.g., 50%) to the front thereof. Since there is no light in the dark, the half mirror 3 neither transmits nor reflects light from the dark. After the start of the makeup simulation, the illumination 6 adjusts the illuminance to brighten the display 7 side of the half mirror 3. Therefore, the half mirror 3 provided on the display direction side of the display 7 reflects the light from the user side (outside of the makeup simulation device 1 15) before the makeup simulation starts, and functions as a mirror portion. The half mirror 3 transmits the light from the side of the display 7 (inside the makeup simulation device 1) after the start of the makeup simulation, and functions as a glass. Therefore, the user can see the moving image displayed on the display 7 through the half mirror 3. Fig. 6 is a cross-sectional view showing a third embodiment of the makeup simulation device. Fig. 6 shows a makeup simulation device 1 including a camera 2' half mirror 3, an operation panel 4, a printer 5, a display 7, and a control unit 8. Also, the same components are attached to the same parts as the previous figures. The moving image displayed on the display 7 is transmitted as a half mirror 3 of the glass function, and the user can see the moving image displayed on the display 7 from the outside of the makeup emulation device 1. 12 200805175 The makeup simulation device of Figures 5 and 6 is based on the concept of a mirror that is indispensable for makeup, and has the function of interaction with the user. That is, the 'makeup simulation device' has a feature that allows the user to feel like a mirror-like natural makeup. The makeup simulation device 1 performs image processing on the moving image or the like output from the camera 2 by the control unit 8, and applies makeup to the face of the user included in the moving image, and displays it on the display 7 as a digital mirror. The makeup simulation device 1 can display various product information, beauty information, and an image of the user's face after makeup on the display 7. Further, Fig. 7 is an external view of a fourth embodiment of the makeup simulation device of the present invention. Fig. 8 is a cross-sectional view showing a fourth embodiment of the makeup simulation device. Also, the same component symbols are attached to the same parts as the previous figures. Makeup simulation device! The package includes a camera 2, a printer 5, an illumination 6, and a touch panel display 15. The makeup simulation device 1 of Figs. 7 and 8 is different from the makeup simulation device 1 of Figs. 1 and 2 in that the touch panel display 15 is used instead of the operation panel 4 and the display 7. The touch panel display 15 functions as the operation panel 4 and the display 7. The touch panel display 15 can display an operation image and output operation information after receiving the user's operation. The touch panel display 15 can display an operation image output from the control unit 8 and output the operation information to the control unit 8 after receiving the user's operation. The touch panel display 15 can display a moving image (main image) output from the control unit 8. The control unit 8 can receive the moving image output from the camera 2, and perform image processing on the moving image to perform makeup on the user's face included in the moving image and output it to the touch. 13 200805175 Panel display 15 . The makeup simulation device g of Fig. 7 and Fig. 8 has a function as a mirror portion of an item which is indispensable for makeup, and has a function of interaction with a user. That is to say, the makeup wheel device can make the user's money look like a mirror. The makeup simulation device 1 processes the moving image output from the camera 2 by the control unit 8, and applies the face of the user included in the moving image to the touch and displays it as a touch of the digital mirror. Panel display 15. The makeup simulation device 1 can display various product information, beauty information, and an image screen after makeup of the user's 10 face on the touch panel display 15. As shown in FIG. 9 and FIG. 10, the makeup simulation device of the fourth embodiment may be provided with a display case in which a test product (hereinafter referred to as a test article) for display by a user is displayed, and is sold as a self-sale. Use with the frame. Fig. 9 is an external view showing a fifth embodiment of the makeup simulation device of the present invention. Fig. 1 is a cross-sectional view showing a fifth embodiment of the present invention. Also, the same components as the previous figures are attached with the same component symbols. The makeup simulation device 1 includes a camera 2, a printer 5, an illumination 6, a touch panel display 15, a display case 16, and a 1C tag reader/writer 17. The makeup simulation device 1 of Figs. 9 and 10 differs from the makeup simulation device 1 of Figs. 7 and 8 in that the display case 16 and the 1C tag reading/writing device 17 are included. The display case 16 is a display of most test items. Also, the test article is accompanied by an IC (RFID) tag. The 1C label attached to the test item stores identification information identifying the test item. By the user taking the test article from the display box 16 and bringing it closer to the 1C tag reader/writer 17, the 1C tag reader/writer 17 can read the identification information of the test article according to the 1C tag reading 14 200805175. The 1C tag reader/writer 17 can transmit the identification information of the test article read from the tag to the control unit 8. The control unit 8 can receive the motion picture rotated from the camera 2, and the motion picture will be described later. Image processing, and using the test item corresponding to the identification information of the five items taken from the label, the image of the user's face contained in the motion image is outputted to the touch panel display 15 again. The correspondence table corresponding to the identification information may be provided in the makeup simulation device 1, or may be provided in another device that can obtain 10 from the makeup simulation device i through the network. The makeup simulation device 1 of the figure 9 and the figure 10 is displayed. An example of using a 1C tag for identifying a test article, but a bar code, a binary code, a label, etc. may also be used. Furthermore, the makeup simulation device i of FIG. 9 and FIG. 1 may also be in the display case 16 in advance. The structure of the test article that can be recognized is set (for example, the position recognition of a sensor such as a 15-photo sensor), and the display box 16 is notified to the control unit 8. The makeup simulation device of the figure 9 and the first figure is Indispensable when making makeup The mirror part of the project is a concept, and has the function of interaction with the user. That is, the makeup simulation device 1 has a feature that allows the user to feel like a mirror-like makeup from the 20th. The unit 8 performs image processing on the dynamic image to be output from the camera 2, and the user uses the measurement πα selected from the display box 16 to perform makeup on the face of the user included in the motion picture. It is not suitable for the touch panel display 15 as a digital mirror. 15 200505175 The makeup simulation device 1 can display various product information, beauty information, and a user's face after makeup on the touch panel display 15 By the user obtaining the label of the test article selected from the display box 16, the makeup simulation device 1 can acquire information such as the user's preference. 5 Further, if there is a makeup simulation capable of displaying a shelf such as a test article plus a product. In the case of the device 1, the product can be effectively used as a self-selling frame by displaying the product displayed on the touch panel in a frame 1 § 15 in advance. The makeup simulation device of the example is described as an example. Fig. 11 is a hardware structure diagram of one embodiment of the makeup simulation device. 10 Further, the same component symbols are attached to the same portions as those of the first and second drawings. The makeup simulation device 1 of FIG. 11 includes a camera 2, an operation panel 4, a printer 5, a display 7, an arithmetic processing device 10, a memory device 11, a drive device 12, and an auxiliary memory device which are mutually connected via a bus bar B. 13. The arithmetic processing device 10, the memory device 11, the drive device 1215, and the auxiliary memory device 13 of Fig. 11 constitute the control unit 8 of Fig. 2. The makeup simulation program of the present invention can control the makeup simulation device. At least a portion of the various programs. The makeup simulation program is provided by a transmission such as the recording medium 14. Also, the recording medium on which the makeup simulation program is recorded can use a recording medium such as a 20 CD-ROM, a floppy disk, or an optical disk that can record information optically, electrically, or magnetically; and an electrical record such as a ROM or a flash memory. Various types of recording media such as semiconductor memory of information. Further, when the recording medium 14 on which the makeup simulation program is recorded is set on the drive unit 12, the makeup simulation program can be mounted from the recording medium 16 200805175 i4 to the _memory device via the drive unit 12. I remember how to store the installed makeup simulation program and store the required broadcasts, materials, etc. The memory device η can read the makeup simulation program from the auxiliary memory device 13 at the time of startup and store it. Further, the arithmetic processing unit 10 can realize various processes to be described later in accordance with the makeup 5 simulation program stored in the memory device η. Fig. 12 is a flow chart showing one embodiment of the outline of the processing performed by the makeup simulation device. Fig. 13 is an image view showing an example of an operation screen displayed on the main surface of the display and the operation panel of the wood panel. Fig. 13 is a top view of the shirt. The 1 to ill is displayed on the main screen of the display 7. The screen image 2〇〇~幻〇10 is displayed on the operation screen of the operation panel 4. The control unit 8 can continuously receive the moving image captured by the camera 2. At this time, the control unit 8 displays the screen image 1 on the display and 7 and displays the face image 200 on the operation panel 4. Screen images 100 and 2 are examples of screen protectors. 15 Step 81 The control unit 8 continues to determine whether or not the received moving image contains the user's face, and repeats the processing of step S1 (No at S1) until the control unit 8 recognizes that the moving image contains the user's face. When the recognized motion picture contains the user's face (at 81), the control unit 8 proceeds to step S2 and activates the makeup simulation program including the makeup simulation. At this time, the control unit 8 displays the screen image 101 on the display 7, and displays the face image 201 on the operation panel 4. The facet image 101 is an example in which the camera 2 captures a moving image of the user's face. The facet image 201 is an example of a welcome annotation showing lateral movement. Proceeding to step S3, the control unit 8 starts the makeup simulation based on the software started in step S2. At this time, the control unit 8 displays the screen image 1〇2 on the display 7, and displays the face image 202 on the operation panel 4. The screen image 1〇2 is an example of a magic mirror performed after the camera 2 captures a moving image of the user's face. The screen image 202 is an example in which the horizontal image is displayed in the same manner as the face image 201. Proceeding to step S4, the control unit 8 performs makeup simulation as will be described later. In this case, the control unit 8 displays the screen images 1Q3 to ι6 in the order of the announcer 7, and displays the mask images 203 to 206 on the operation panel 4. The face images 103 to 106 are examples in which a user's face of four types of makeup type 1 (image) is displayed by makeup simulation. The screen images 203 to 206 are examples in which the makeup type contents (e.g., names, etc.) appearing on the face images 103 to 106 of the display 7 at this time are displayed. The control unit 8 sequentially displays the screen images 1〇3 to 1〇6 on the display 7 after the predetermined time elapses or when the user touches the operation panel 4, and displays the screen images 203 to 206 on the operation panel 4. When the predetermined time elapses or the user touches the operation panel 4, the control unit 8 proceeds to step S5, displays the screen image 1〇7 on the display 7, and displays the face image 207 on the operation panel 4. The screen image 1〇7 is the same as the facet image 1〇1. The member shows that the camera 2 captures a moving image of the user's face. Screen image The 207 series shows an example of selecting an image for selecting an image of the four types of makeup types (images). The user can select one type of image from the image selection screen by operating the operation panel 4. Proceeding to step S6, the control unit 8 repeats the processing of step S6 (in Hong to ^^) before selecting the image from the image selection screen. When the user selects one type of image from the image selection screen, the operation panel 4 receives the user's 18 200805175 and then outputs the operation information to the control unit 8. When it is determined that the user selects one type of image from the image selection screen (Yes in S6), the control unit 8 proceeds to step S7 and displays the image screen of the selected image on the display 7 and the operation panel 4. At this time, the control unit 8 is connected to the display image 108 of the display system, and displays the face image on the operation panel 4. The facet image 108 is an example of sequentially displaying an image image of a user's face after makeup of a different color type based on one image selected from the image selection screen, and product information for making up the image. The screen image 208 is an example of the image information selected in the image selection screen and the product information displayed on the side image 1〇8 of the display 7 at this time. Further, the user can also instruct printing by operating the operation panel 4. When the user's printout is received, the control unit 8 displays the figure image 109 on the display 7, and displays the face image 2〇9 on the display panel 4. Kneading image 15 shows an example of a printed image. The facet image 2〇9 shows an example of reading in the printer (4) 8 which can control the printer 5 and print out the image displayed on the display 7. Further, by operating the operation panel 4, the user can also instruct display and printing of a comparison screen composed of image frames before and after makeup. When the receiving 20 receives the display instruction of the user's vehicle, the control unit 8 displays the screen image U0 on the display 7. The facet image (4) is an example of a comparison screen composed of image frames before and after makeup. When the user's print instruction is received while the comparison screen is displayed, the control unit 8 controls the printer 5 and prints the comparison displayed on the display 7. 19 200805175 When the makeup simulation of the field user is completed, the control unit 8 displays the face images 111 and 210 as screen protectors on the display 7 and the operation panel 4, and ends the process. Further, the examples of Fig. 12 and Fig. 13 show four types of shadow images by makeup simulation, but images of four types of images can be displayed. Further, the makeup simulation device t displays an example in which the user's face is included in the moving image output from the camera 2, but the moving image captured in advance may be recorded in the auxiliary memory device 13 as a moving video. The user's face contained in the motion picture building is then made up. 1) The makeup simulation device 1 can also use the motion image output from the camera 2 in addition to the makeup simulation. Fig. 14 is a view showing an image of a process other than the makeup simulation by the makeup simulation device. The facet image 300 is an example of displaying a moving image of a user's face photographed by the camera 2. The makeup simulation set 1 can recognize the face of the user 15 contained in the motion image, and cut out the still picture 3 of the user's face (U. The makeup simulation device 1 can be stationary by the face analysis logic and the skin analysis logic. The face diagnosis and skin color diagnosis of the face 3〇1, and the screen image 302 showing the result is displayed on the display 7. Further, the face analysis logic and the skin analysis logic perform the face diagnosis and the skin color diagnosis of the still image 3G1. For example, the makeup simulation device 1 can also display a course selection screen 303 such as a face image 3〇3 on the display 7 and use it. Choose a variety of courses (such as trend, foundation, freedom). Make-up simulation device! Display 20 200805175 screen images 304~309 according to the user-selected course on the display 7, and simulate and suggest. Screen images 304~306 An example of a simulation screen for each course is displayed. Screen images 307 to 309 are examples of suggested screens for each course. For example, the basic course indicates diagnosis based on face type and skin color. As a result, 5 simulates and suggests the most appropriate makeup technique. In addition, the trend course represents the simulation and suggestion of the latest trend makeup. In addition, the free makeup course represents simulations and suggestions for items corresponding to the eyes, mouth, cheeks, and eyebrows. When receiving the printing instruction of the user in the state of displaying the simulated face or the suggestion screen, the control unit 8 can also control the printer 5 to print the 10 simulation screen or the recommended face displayed on the display 7. Next, the detailed description Fig. 15 is a system configuration diagram of one embodiment of the makeup simulation system of the present invention. The makeup simulation system 2 of Fig. 15 includes an analog camera 21, υ$Β 15 capture component 22, motion picture file 23, makeup camera 24, still picture system 25, motion picture file 26, shared memory 27, simulator main application 28 and interface application 29. Analog camera 21 is via USB port The component 22 outputs a captured moving image such as an NTSC mode. The dynamic image 20 output from the USB capturing component 22 is utilized as an API (should be The program interface) is an example of the chest (4) i and is input to the analog main application 28. Further, the motion picture file 23 can also be input to the simulator main application using DirectX 31. The makeup camera 24 is via mEE1394 as an example of the serial interface. The captured moving image is output. The moving image 21 output from the makeup camera 24 is input to the simulator main application 28 by using the dedicated API 32, and the simulator main application 28 can obtain the dynamic from the (4) image of the APm input. Image resolution and original image and still image resolution static image 5 & The moving image of the input moving image uses the resolution moving image as the original image, and trimming and reducing the original image. The analog main application 28 system trims the original image and obtains the image. Further, the simulator main application 28 performs reduction processing on the original image and obtains an image for face recognition processing. The face recognition processing unit % obtains the tracking point 34 for recognizing the face of the user, which will be described later, from the face recognition processing image by FFT (Idle Fourier Transform). The makeup processing unit 35 obtains a post-makeup image of the user's face contained in the pre-makeup image, which is subjected to makeup, eyebrows, eye shadow, lipstick, and cheek, based on the tracking point 34. The makeup treatment unit 35 includes a foundation treatment unit 36, an eye treatment unit 37, an eye shadow treatment unit 38, a lipstick treatment unit 39, and a cheek treatment unit 40°, and the makeup treatment unit 35 can make makeup images for makeup. Product information of the post-image 41. The motion picture server 42 can write the pre-makeup image and the post-makeup image to the shared memory 27, and can also output the pre-makeup image and the post-makeup image as the motion image file 26. The interface application 29 utilizes the ActiveX controller 50 and the ActiveX viewer 51 and includes a motion picture control object 52, a motion picture display object 22, 200505175, and other controllers 54. Further, the interface application 29 and the emulator main application 28 are linked by ActiveX & Socket. The interface application 29 can display the pre-makeup image and the post-makeup image written in the shared memory 27 on the display 75 by the ActiveX viewer 51.

10 1510 15

20 模擬态主應用程式28係對於從利用專用APJ32輸入之 動態影像得到的靜止晝面用解析度之靜止影像進行修整及 、、宿=處理。模擬态主應用程式28係對於靜止畫面用解析度 之靜止影像進行修整。再者,模㈣主應用程式Μ係對於 靜止旦面用解析度之靜止影像進行縮小處理以得到臉部辨 識處理用影像。臉部辨識處理部43係從臉部辨識處理用影 像得到後述用以辨識使用者臉部之追蹤點44。 如 杈擬裔主應用程式28係根據追蹤點44,從修整後的動 ㈣像所含有之仙者臉部取得詳細部位,絲據該詳细 部位得到用於臉型診斷及膚色診斷之類型、膚色等追加資 訊’且將於追縱點44附加追加資訊後之追縱點衫 白 於利用專用AH32輸入的動態影像之靜止晝面用解析:的 靜止影像輪iB ·至靜止畫面系統25。 又、 藉由前述臉型分析 靜止畫面系統25可使用追蹤點45, 邏輯及肌膚分析邏輯進行靜止影像3gi之臉型 二 診斷,並將顯示該結果之晝面影像3_示於顯㈣H 外,靜止晝面系統25亦可將畫面影脚〜309顯示於顯示器 弟Μ圖係顯示模擬主應用所進行的處理之晝面參像 23 200805175 圖。畫面影像400係顯示化妝前影像之例。畫面影像4〇1係 於化妝前影像重疊顯示得自於臉部辨識處理用影像之追縱 點34之例。又,畫面影像402係根據追蹤點34顯示化妝前影 像所含有的使用者臉部化妝後之化妝後影像之例。 5 以下,參照圖式依序詳細說明模擬器主應用程式28進 行的處理當中之臉部辨識處理、化妝處理。又,本實施例 係說明包含有粉底處理、眉處理、眼影處理、口紅處理及 臉頰處理之化妝處理,作為化妝處理之一例,但亦可為其 他組合者。 1〇 (臉部辨識處理) 第17圖係化妝前影像所含有的使用者臉部之影像書 面。第18圖係顯示追蹤點的一例之影像圖。模擬器主應用 程式28之臉部辨識處理部33係從第17圖之影像晝面得到第 18圖之用以辨識使用者臉部的45點追縱點34。第18圖之追 15蹤點34係其中一例,且亦可因應控制部8的處理能力或顯示 器7的精細度進行調整。 如此,藉由從化妝前影像所含有的使用者臉部得到追 蹤點34,化妝處理部35即可與追蹤點34對應地將化妝進行 方式或顏色預设成如第19圖之化妝處理參數標。 20 第19圖係化妝處理參數標的一例之構成圖。第19圖之 化妝處理參數檔係依照眼睛、嘴巴、臉頰等與追蹤點34對 應地設定化妝進行方式或顏色者。又,化妝處理參數檔係 依照母種化妝類型(影像)而設定者。第19圖之化妝處理參數 檔係顯示優雅等級之例。 24 200805175 (口紅處理) 如第20圖所示,模擬器主應用程式28所含有的口紅處 理部39係參照嘴唇8點及鼻子3點之追蹤點34,進行口紅處 理。第20圖係顯示口紅處理所參照的追蹤點之影像圖。 5 第21圖係顯示口紅處理的一例之流程圖。口紅處理可 大致區分為步驟Sl〇之前準備處理及步驟S2〇之上色處理。 前準備處理包含有步驟S11之切出&旋轉處理、步驟S12之 輪廓取得用影像之製成處理、步驟S13之輪廓取得處理、步 驟S14之樣條曲線產生處理、步驟S15之轉回處理、步驟si6 10之上色圖產生處理。上色處理包含有根據步驟S2i之根據上 色圖的上色處理、步驟S22之除錯(debug)&設計用繪製處 理。 繼續步驟sii之切出&旋轉處理,且口紅處理部39可從 臉4辨識處理用影像切出含有使用者唇部之部分影像 15 500 ’並將該部分影像500旋轉成處理用姿勢以得到部分影 像 501 〇 繼續步驟S12之輪廓取得用影像之製成處理,且口紅處 理部39根據部分影像501製作輪廓取得用影像。繼續步驟 S13之輪廓取得處理,且口紅處理部39如部分影像5〇2所示 20地利用點線從輪廓取得用影像取得唇部輪廓。 第22圖係顯示輪廓取得處理的一例之影像圖。輪廓取 得用影像600係重疊顯示唇部8點及鼻子3點之追蹤點。口紅 處理部39係以唇部8點及鼻子3點之追縱點為基礎進行再搜 尋並取得輪廓取得用影像6(Π之點。 25 200805175 第23圖係輪庵取得用影像600之唇部8點及鼻子3點之 ^ k "、έ與以唇部8點及鼻子3點之追蹤點為基礎進行再搜 尋而取彳于的輪廓取得用影像601之點的比較晝面。又,當前 述再技尋不川員暢時,則採用根據輪靡取得用影像刪的唇部 •及"Γ +3點之追蹤點而算出的第24圖之預設點(default point) 〇 I、、第24®係顯*從唇部S點及鼻子3點之追蹤點求得預設 ’占的處理之一例之影像圖。口紅處理部39係參照鼻子3點之 追縱=,求得1㈣上唇5點之預設點。 10、繼續步驟S14之樣條曲線產生處理,且口紅處理部39 次樣條内插輪廓取得用影像6〇1之點或第24圖之預設 點丄亚完成第25圖所示輪廓。第25圖係顯示根據點或預設 ”、、$成輪廓的處理之影像圖。在如部分影像5⑽地完成唇部 •作才口紅處理部39即繼續步驟S15,並進行回復在步驟 U的切出&旋轉處理所進行的旋轉之轉回處理。 钱之上色圖產生處理,且口紅處理部%根據 部分影像700之亮度及彩度產生決定上色強度之上色圖。第 圖係4不上色圖產生處理的_例之影像圖。具體而言, 扣2紅處理物係根據亮度及彩度產生顯示上色強度之灰度 衫像701。口紅處理部39僅將由步驟S14的樣條曲線產生處 =完成的唇部㈣所包圍之部分影像搬,作為上色圖504 從灰度影像701中切出。 、1只步驟S21之根據上色圖的上色處理,且口紅處理部 %根據步驟S16的上色圖產生處理所產生之上色圖烟、及 26 200805175 設定於如第19圖的化妝處理參數檔之化妝進行方式或指定 色’對化妝前影像進行著色處理。 第27圖係顯示根據上色圖的上色處理之_例之影像 圖。口紅處理部39係根據設定於如第19圖的化妝處理參數 5檔801之化妝進行方式或指定色、及步驟S16的上色圖產生 處理所產生之上色圖802,對化妝前影像803進行著色處 理,並得到化妝後影像8〇4。 又,繼續步驟S22之除錯&設計用繪製處理,且口紅處 理部39在進行除錯&設計用繪製處理後,結束口紅處理。 10 (眼影處理) 如第28圖所示,模擬器主應用程式28所含有的眼影處 理部38係參照左右分開的眼睛3點及眉1點之追蹤點34,進 行眼影處理。第28圖係顯示眼影處理所參照的追蹤點之影 像圖。 15 第29圖係顯示眼影處理的一例之流程圖。眼影處理可 大致區分為步驟S30之前準備處理及步驟S40之上色處理。 雨準備處理係由步驟S31之基本輪廓產生處理所構成者。 上色處理係依上色類型而重複進行,且包含有步驟S41 之上色輪廓產生處理、步驟S42之上色中心產生處理、步驟 2〇 S43之上色圖產生處理、步驟S44之根據上色圖的上色處 理、步驟S45之除錯&設計用繪製處理。 繼續步驟S31之基本輪廓產生處理,且眼影處理部38 係根據臉部辨識處理用影像,如部分影像9〇〇所示地得到使 用者臉部之眼睛形狀。第30圖係顯示基本輪廓產生處理的 27 200805175 一例之影像圖。 如部分影像1001所示,眼影處理部38係從眼睛中心開 始搜尋上下方向並辨識2點眼睛輪廓(上邊界&下邊界),以 產生作為基本上色輪廓之輪廓。眼影處理部38係於辨識的 5眼睛輪廓2點、眼尾及眼頭4點上追加利用樣條内插產生之4 點,以產生如部分影像1〇〇2之合計8點的多角形。 繼續步驟S41之上色輪廓產生處理,且眼影處理部% 如部分影像901地產生上色輪廓。第31圖係顯示上色輪靡產 生處理的一例之影像圖。眼影處理部38係為了產生上色輪 1〇廓,而如部分影像1101所示地以眼尾及眼頭為基準產生上 色輪廓又關於輪廊擴充、了頁點位移等亦可利用GUI進行 參數指定。 繼續步驟S42之上色t心產生處理,且眼影處理部38 如部分影像902的「鲁」地產生上色中心位置。繼續步驟⑷ 15之上色圖產生處理,且眼影處理部38如部分影像9〇3地產生 決定上色強度之上色圖。 具體而言,上色圖產生處理係與上色中心到多角形邊 緣之=離對應地決定上色強度。例如,眼影處理部%係隨 著越罪近邊緣越減弱上色強度地決定上色圖。又,上色圖 20產生處理係以從上色輪廓去掉基本輪廓之部分為對象。 如第32圖所示,眼影處理部38係藉由對產生的上色圖 進行淡化處理來產生更滑順的漸層感。第32圖係顯示未進 行淡化處理之上色圖、及進行淡化處理之上色圖的一例之 影像圖。 28 200805175 繼續步驟S44之根據上色圖的上色處理,且眼影處理部 38根據步驟S43的上色圖產生處理所產生之上色圖、及 設定於如第19圖的化妝處理參數檔之化妝進行方式或指定 色,對化妝前影像進行著色處理,並得到化妝後影像。 接著,繼續進行步驟S45之除錯&設計用繪製處理,且 眼影處理部38在進行除錯&設計用繪製處理後,結束眼影處 理。又,眼影處理部38亦可藉由依上色類型重複進行步驟 S41〜S45之處理,來實現多色上色。 (臉頰處理) 如第33圖所示,模擬器主應用程式28所含有的臉頰處 理部40係參照眼尾&嘴角(左右分開)、眼間&鼻中心(穩定:) 之追蹤點34,進行賴處理。幻3_係齡_處理所表 照的追蹤點之影像圖。 4 1520 The analog main application 28 performs trimming, and sinking = processing on the still image with the resolution of the still surface obtained from the moving image input by the dedicated APJ32. The analog main application 28 trims the still image with the resolution of the still picture. Further, the modulo (4) main application 缩小 reduces the still image with the resolution of the still surface to obtain the image for the face recognition processing. The face recognition processing unit 43 obtains the tracking point 44 for recognizing the face of the user, which will be described later, from the image for face recognition processing. If the main application 28 is based on the tracking point 44, the detailed part is obtained from the face of the fairy contained in the modified (4) image, and the type of face diagnosis, skin color diagnosis, skin color, etc. are obtained according to the detailed part. The additional information is added, and the additional information is added to the tracking point 44, and the tracking image is displayed on the still image of the moving image input by the dedicated AH32: the still image wheel iB to the still picture system 25. Further, the face analysis still picture system 25 can perform the face type 2 diagnosis of the still image 3gi using the tracking point 45, the logic and the skin analysis logic, and display the face image 3_ displaying the result outside the display (4) H, and still. The surface system 25 can also display the screen shadows ~ 309 on the display screen of the display system to display the image of the simulation of the main application 23 200805175. The screen image 400 is an example of displaying a pre-makeup image. The screen image 4〇1 is an example in which the image before the makeup is superimposed and displayed from the tracking point 34 of the face recognition processing image. Further, the screen image 402 is an example of displaying a post-makeup image of the user's face makeup contained in the pre-makeup image based on the tracking point 34. 5 Hereinafter, the face recognition processing and the makeup processing in the processing performed by the simulator main application 28 will be described in detail with reference to the drawings. Further, in the present embodiment, a makeup treatment including foundation treatment, eyebrow treatment, eye shadow treatment, lipstick treatment, and cheek treatment is described as an example of makeup treatment, but it may be other combinations. 1〇 (Face recognition processing) Figure 17 shows the video surface of the user's face contained in the pre-makeup image. Fig. 18 is an image diagram showing an example of a tracking point. The face recognition processing unit 33 of the simulator main application 28 obtains the 45-point tracking point 34 for recognizing the user's face in Fig. 18 from the image plane of Fig. 17. The tracking point 34 of Fig. 18 is one example, and may be adjusted in accordance with the processing capability of the control unit 8 or the fineness of the display 7. In this way, by obtaining the tracking point 34 from the face of the user included in the pre-makeup image, the makeup processing unit 35 can preset the makeup making mode or color to the makeup processing parameter as shown in FIG. 19 corresponding to the tracking point 34. . 20 Fig. 19 is a block diagram showing an example of a cosmetic processing parameter. The makeup processing parameter file of Fig. 19 sets the makeup mode or color in accordance with the eye, the mouth, the cheek, and the like in accordance with the tracking point 34. Further, the makeup processing parameter file is set in accordance with the type of makeup (image) of the parent type. The makeup processing parameters of Fig. 19 show examples of elegant levels. 24 200805175 (Brush treatment) As shown in Fig. 20, the lipstick processing unit 39 included in the simulator main application 28 performs lipstick treatment with reference to the tracking point 34 of 8 points of the lips and 3 points of the nose. Figure 20 shows an image of the tracking point to which the lipstick treatment is referred. 5 Fig. 21 is a flow chart showing an example of lipstick treatment. The lipstick treatment can be roughly classified into the preparation processing before the step S1 and the color processing at the step S2. The pre-preparation processing includes the cutting out & rotation processing in step S11, the contour obtaining image forming process in step S12, the contour obtaining processing in step S13, the spline curve generating processing in step S14, the turning back processing in step S15, The color map generation processing of step si6 10 is performed. The coloring process includes a coloring process according to the color chart according to step S2i, a debugging process of step S22, and a design drawing process. Continuing the cutting & rotation processing of step sii, the lipstick processing unit 39 can cut out a partial image 15 500 ' containing the user's lip from the face 4 recognition processing image and rotate the partial image 500 into a processing posture to obtain The partial image 501 continues to be processed for the contour acquisition image in step S12, and the lipstick processing unit 39 creates a contour acquisition image based on the partial image 501. The contour acquisition processing of step S13 is continued, and the lipstick processing unit 39 obtains the lip contour from the contour acquisition image by the dotted line as indicated by the partial image 5〇2. Fig. 22 is a view showing an image of an example of the contour acquisition processing. The contour is obtained by superimposing the tracking point of 8 points of the lip and 3 points of the nose by the image 600. The lipstick processing unit 39 re-searches and obtains the contour acquisition image 6 based on the 8 points of the lip and the 3 points of the nose (the point of the image is obtained. 25 200805175 Fig. 23 shows the lip of the image 600 for the rim acquisition 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 When the above-mentioned re-seeking is not smooth, the default point of the 24th figure calculated based on the rim of the image and the tracking point of the + Γ +3 points is used. And the 24th system display* obtains an image of one of the preset 'occupation processing' from the tracking point of the lip S point and the nose 3 point. The lipstick processing unit 39 refers to the tracking of the nose 3 points, and obtains 1 (4) The preset point of 5 points on the upper lip. 10. Continue the processing of the spline curve in step S14, and the lipstick processing part 39 times the spline interpolation contour to obtain the point of the image 6〇1 or the preset point of the 24th figure. Complete the outline shown in Figure 25. Figure 25 shows the image of the processing according to the point or preset "," contour. (10) The completion of the lip-making lipstick processing unit 39 proceeds to step S15, and performs a rotation return process that is performed by the cut-out & rotation processing in step U. The money coloring map generation processing and the lipstick processing The part % generates a color map for determining the color intensity according to the brightness and chroma of the partial image 700. The figure 4 is an image view of the example of the non-color map generation process. Specifically, the buckle 2 red processing system is based on the brightness. And the chromaticity produces a gradation shirt image 701 showing the color intensity. The lipstick processing unit 39 only moves a part of the image surrounded by the lip portion (four) where the spline curve generation in step S14 is completed, as the color map 504 from the gradation The image 701 is cut out. One step S21 is based on the coloring process of the coloring image, and the lipstick processing unit % is based on the coloring pattern generated by the coloring pattern generating process of step S16, and 26 200805175 is set as in the first The makeup processing method or the designated color of the makeup processing parameter file of Fig. 19 is used to color the image before makeup. Fig. 27 is a view showing an image of the coloring process according to the coloring map. The lipstick processing unit 39 is based on the setting. As shown in Figure 19 The makeup processing method or the designated color and the color map 802 generated by the color map generation processing in step S16 of the makeup processing parameter are subjected to coloring processing on the pre-makeup image 803, and the post-makeup image 8〇4 is obtained. The processing of the debugging & designing of the step S22 is continued, and the lipstick processing unit 39 ends the lipstick processing after performing the debugging processing for the debugging & design. (Eyeshadow processing) As shown in Fig. 28, the simulator main The eye shadow processing unit 38 included in the application 28 performs eye shadow processing by referring to the eye point 3 points of the left and right eyes and the tracking point 34 of the eye point 1 point. Fig. 28 is a view showing an image of a tracking point to which eye shadow processing is referred. 15 Fig. 29 is a flow chart showing an example of eye shadow processing. The eye shadow processing can be roughly classified into the preparation processing before step S30 and the color processing in step S40. The rain preparation processing is constituted by the basic contour generation processing of step S31. The coloring process is repeated according to the coloring type, and includes the coloring contour generating process of step S41, the coloring center generating process of step S42, the coloring image generating process of step 2〇S43, and the coloring according to step S44. The coloring process of the figure, the debugging of step S45, and the design drawing process. The basic contour generation processing of step S31 is continued, and the eye shadow processing unit 38 obtains the eye shape of the user's face as shown by the partial image 9〇〇 based on the face recognition processing image. Figure 30 is an image diagram showing an example of the basic contour generation process. As shown in the partial image 1001, the eye shadow processing unit 38 searches for the up and down direction from the center of the eye and recognizes the two-point eye contour (upper boundary & lower boundary) to generate a contour as a basic color contour. The eye shadow processing unit 38 adds four points generated by spline interpolation to two points of the recognized 5 eye contours, the end of the eye, and the four points of the eye to generate a polygon of 8 points as a total of the partial images 1〇〇2. The color contour generation processing of step S41 is continued, and the eye shadow processing portion % generates a color contour as a partial image 901. Fig. 31 is a view showing an image of an example of the processing of the coloring rim. The eye shadow processing unit 38 generates a color contour based on the end of the eye and the eye as shown in the partial image 1101, and expands the wheel gallery, shifts the page point, etc., as shown in the partial image 1101. Parameter specified. The color centering processing of step S42 is continued, and the eye shadow processing unit 38 generates a color center position as "lurp" of the partial image 902. The color map generation processing of the step (4) 15 is continued, and the eye shadow processing unit 38 generates a color map for determining the color intensity as a part of the image 9〇3. Specifically, the color map generation processing system determines the color intensity corresponding to the color center to the polygon edge. For example, the eye shadow processing unit % determines the color map as the sinful near edge increases the color intensity. Further, the color map 20 generates a processing system to remove a portion of the basic outline from the colored outline as an object. As shown in Fig. 32, the eye shadow processing unit 38 produces a smoother gradation by smoothing the generated color map. Fig. 32 is a view showing an image of a color map which has not been subjected to the desalination process and a color map which has been subjected to the desalination process. 28 200805175 Continues the coloring process according to the coloring map of step S44, and the eye shadow processing unit 38 performs the coloring map generated by the coloring map generation processing of step S43 and the makeup set by the makeup processing parameter file set as in Fig. 19 Perform a coloring process on the pre-makeup image and obtain a post-makeup image. Next, the debugging processing for the debugging & designing of step S45 is continued, and the eye shadow processing unit 38 ends the eye shadow processing after performing the debugging processing for the debugging & design. Further, the eye shadow processing unit 38 can also perform multicolor coloring by repeating the processing of steps S41 to S45 in accordance with the color type. (Cheek treatment) As shown in Fig. 33, the cheek processing unit 40 included in the simulator main application 28 refers to the tracking point 34 of the eye tail & mouth angle (left and right separation), eye & nose center (stability:). , Lai processing. The illusion 3_ _ age _ processing image of the tracking point. 4 15

20 第34圖係顯示臉頰處理的一例之流程圖。臉賴處理包 s有步驟S50之上色輪廓產生處理、步驟S51之上色處理、 乂驟852之除錯&設計用繪製處理。 …繼够驟S5〇之上色輪廓產生處理,且臉頰處理部仙 係為了產生上色輪廓,而以眼尾及嘴角為基準產生輪廊多 角形作為上色輪廓。第35圖係顯示上色輪廓的_例之二 圖。又’輪廓多角形之點數 '大小、形狀或位置等 用GUI進行參數指定。 、麄,步驟S51之上色處理,且臉頰處理部4〇係與上色中 心到輪摩多角形邊緣之距離對應地決定上色強度。、又在 上色強度決定方面耗費過多處理成本時,亦可降低解析度 29 200805175 (利用GUI進行參數指定)來決定上色強度(減少成馬賽克 狀)。臉頰處理部40係根據決定的上色強度、及設定於如第 19圖的化妝處理參數檔之化妝進行方式或指定色,對化妝 前影像進行著色處理,並得到化妝後影像。 接著,繼續進行步驟M2之除錯&設計用繪製處理,且 臉頰處理部40於進行除錯&設計用繪製處理後,結束臉頰處 理。 (眉處理) 10 15 20 如第36圖所示’模擬器主應用程式28所含有的眉處理 437係芩恥左右分開的眼尾、眼睛中心及眉^點之追蹤點 34進行眉處理。第36圖係顯示眉處理所參照的追蹤點之 影像圖。 消去處理、步驟S80之眉 附處理 。、第37圖係顯不眉處理的一例之流程圖。眉處理可大致 、品…為乂驟860之眉輪靡取得處理、步驟S70之眉原本領域 形變化處理、步驟S90之變化眉貼 i眉原本領域消錢理係由步 驟S71之領域膨脹處理、步 驟S72之眉消去汐 ^ 所構成者。眉形變化處理係由與步驟 的變化指定 尸吝4 ♦ 、…之指示曲線製成處理、步驟S82之變化 眉產生處理所構成者。 繼續進行呆 根據臉部_:86^眉輪廓轉處理,且眉處理部37 用者臉部之眉处理用影像,如部分影像2001所示地得到使 ,目形。第38圖係顯示眉輪廓取得處理的一例之 衫傢圖。如部分旦 衫像2101所示,眉處理部37係以眉尾(實際 30 200805175 上在眉中心附近)為中心地搜尋左右眉部,以取得眉輪廊。 如部分影像2102所示,眉處理部37係根據搜尋結果辨識眉 輪廓形狀。 ^ 繼續進行步驟S71之領域膨脹處理,且眉處理部37使顯 5示辨識的眉輪廓形狀之領域膨脹。繼續進行步驟S72之眉、、肖 ’ 去處理,且眉處理部37使用附近膚色塗滿膨脹領域,並藉 此消去眉。又,眉處理部37於膨脹領域之邊界部分進行融 合處理。 • 繼續進行步驟S81之與變化指定對應的指示曲線製成 10處理,且眉處理部37依照指定使顯示辨識的眉輪廓形狀之 領域(輪廓)變化。第39圖係顯示與變化指定對應之指示曲線 製成處理的一例之影像圖。 如部分影像2201所示,眉處理部37係將顯示辨識的眉 輪廓形狀之領域,替換成由橫向軸線及多數縱向矩形構成 15之輪廓2202,並藉由改變軸線形狀及矩形高度,進行如輪 廓2203之變化處理。 • 繼續進行步驟S82之變化眉產生處理,且眉處理部37 從輪廓2203產生變化眉。繼續進行步驟S9〇之變化眉貼附處 * 理,且眉處理部37對化妝前影像貼上變化眉,並得到化2 ^ 20 後影像。 (粉底處理) 如第40圖所示,模擬器主應用程式28所含有的粉底處 理部36係參照左右分開的眉尾、眉頭及眉以、眼間及鼻中 心的追賴34 ’進行粉絲理。第侧係顯雜底處理所 31 200805175 參照的追蹤點之影像圖。 第41圖係顯示粉底處理的一例之流程圖。粉底處理包 含有步驟S101之輪廓產生處理、步驟§1〇2之對於對象影像 的淡化處理、步驟S103之影像貼附處理。又,步驟sl〇1〜sl〇3 5 之處理係依對象區域重複進行。 繼續進行步驟S101之輪廓產生處理,且粉底處理部% 如第42圖之影像晝面地產生額頭、鼻子、臉頰(左右)等增 (4處)輪廓。第42圖係顯示輪廓的一例之影像晝面。又,各 瞻輪廓大小、位置等亦可利用進行參數指定。 10 繼續進行步驟S102之對於對象影像的淡化處理,且粉 底處理部36如第43圖所示地對於與產生輪廊對應之對象影 像進仃淡化處理。第43圖係顯示淡化處理前的原本影像、 及淡化處理後的原本影像之一例之影像圖。粉底處理部% 叮藉由對於對象影像之淡化處理,使肌膚之粗細等平滑化。 15 繼續進行步騾S103之影像貼附處理,且粉底處理部36 於化妝前影像的額頭、鼻子、臉頰(左右)等3種(4處)輪廓貼 • 上淡化處理後之對象影像,並得到錄後影像。 以上,根據本發明,可於處理負擔小的情形下,正確 地對動態影像所含有的使用者臉部進行化妝。又,專利申 20請範圍所揭示之攝影機構相當於照相機2、控制機構相當於 控制部8、|頁示機構相當於顯示器7、臉部辨識處理機構相 當於臉部辨識處理部33、化妝處理機構相當於化妝處理部 35、操作機構相當於操作面板4、半鏡機構相當於半鏡3、 印刷機構相當於列印機5。 32 200805175 又,本發明並不限定於具體揭示之實施例’且可在不 脫離專利申請範圍的情形下,進行各種變化或變更。 本七明之化純擬裝置丨屬於即時模擬,故與利用靜 止畫面者或習知化妝模擬裝置!不同,可瞬間辨識臉部之追 5蹤點,並以該追縱點為基礎進行模擬,因此,可做到過去 無法完成之下述事項。 本^月之化妝核擬裝置!可進行即時模擬,且本發明之 化妝模擬裝置i可如過去般從臉部正面進行模擬,亦可從側 面進行模擬等,因此,可輕易地做到臉紅等之模擬效果或 10 技術上的確認。 本發明之化妝模擬農置丨係藉由㈣模擬,而可由過往 的平面表現到亦可表現以立體捕捉臉部之分析、立體感或 質感。 又,本發明之化妝模擬裝置同時辨識多數人之臉 部,故可同時進行多數人之即時模擬。本發明之化妝模擬 裝置W於臉部辨識機能優良,故可配合各種臉型 '或男 性、女性地自動分類,並進行配合臉型或分類之化妝。例 如,本發明之化妝模擬裝置!可兩人同時各自進行化妝模 擬。 20 本國際申請案係基於2006年1月Π日申請的日本專利 公開公報第2006-9268號而主張優先權者,且本國際申請案 援用該2006-9268號的所有内容。 、 【圓式簡單說明】 第1圖係本發明化妝模擬裝置的第—實施例之外觀圖。 33 200805175 又,本發明並不限定於具體揭示之實施例,且可在不 脫離專利申請範圍的情形下,進行各種變化或變更。 本發明之化妝模擬裝置1屬於即時模擬,故,與利用靜 止晝面者或習知化妝模擬裝置以同,可瞬間辨識臉部之追 5蹤點,並以該追縱點為基礎進行模擬,因此,可做到過去 無法完成之下述事項。 10 技術上的確認。 本發明之化妝模縣置1可進行即日«擬,且本發明之 化妝模擬裝置丨可如過去般從臉部正面進行紐,亦可從側 面進行模擬等,因此,可輕易地做到臉紅等之模擬效果或 質感 本發明之化妝模難置m藉由㈣模擬,而可由過往 的平面表關亦可表現以立_捉臉部之分析、立體感或 又,本發明之化妝模擬裝置!可同時辨識多數人之臉 15部,故可同時進行多數人之即時模擬。本發明之 裝置!由於臉部辨識機能優良,故可配合各種臉型、或男 性、女性地自動分類,並進行配合臉型或分類之化妝。例 如,本發明之化妝模擬裝置!可兩人同時各自進行化妝模 擬。 曰申請的日本專利 ’且本國際申請案 20 本國際申請案係基於2006年1月17 公開公報第2006-9268號而主張優先權者 援用該2006-9268號的所有内容。20 Figure 34 is a flow chart showing an example of cheek processing. The face processing package s has the color outline generation processing of step S50, the coloring processing of step S51, the debugging of step 852, and the design drawing processing. The color contour is generated by the step S5, and the cheek processing portion is used to generate the color contour, and the vertex polygon is generated as the color contour based on the eye end and the corner of the mouth. Figure 35 shows a second example of a colored outline. Further, the number of points of the outline polygon 'size, shape or position, etc. The parameters are specified by the GUI. Then, the coloring process of step S51 is performed, and the color of the color is determined in accordance with the distance from the center of the coloring to the edge of the polygon of the wheel. In addition, when the processing cost is too high, the resolution can be reduced. 29 200805175 (Specification of parameters by GUI) determines the color intensity (reduced into a mosaic). The cheek processing unit 40 performs coloring processing on the pre-makeup image based on the determined coloring intensity and the makeup performing method or the designated color set in the makeup processing parameter file of Fig. 19, and obtains a post-makeup image. Next, the debugging processing of the step M2 is continued, and the cheek processing unit 40 ends the cheek processing after performing the debugging processing for the debugging & design. (Browse processing) 10 15 20 As shown in Fig. 36, the eyebrow processing 437 included in the simulator main application 28 is an eyebrow processing for the eye tail, the eye center, and the tracking point 34 of the eyebrow point. Figure 36 shows an image of the tracking point referenced by the eyebrow processing. The elimination process, the eyebrow of step S80, is attached. Fig. 37 is a flow chart showing an example of the processing of the eyebrows. The eyebrow processing can be roughly, the product is the process of obtaining the eyebrow rim of step 860, the process of changing the shape of the eyebrow of step S70, the change of step S90, the eyebrow sticking of the eyebrow, the field of money elimination by the field of step S71, The eyebrow of step S72 eliminates the constituents of 汐^. The eyebrow change processing is performed by the process of specifying the indication curve of the corpse 4 ♦ , ... with the change of the step, and the change of the step S82. The image is processed according to the face _: 86^ eyebrow contour, and the eyebrow processing unit 37 uses the image for the eyebrow processing of the face, as shown in the partial image 2001. Fig. 38 is a diagram showing an example of the eyebrow contour acquisition processing. If the partial shirt is shown as 2101, the eyebrow processing unit 37 searches for the left and right eyebrows centering on the eyebrow tail (actually near the center of the eyebrow on 200805175) to obtain the eyebrow porch. As shown in the partial image 2102, the eyebrow processing unit 37 recognizes the contour shape of the eyebrow based on the search result. ^ The field expansion processing of step S71 is continued, and the eyebrow processing section 37 expands the field of the recognized eyebrow contour shape. The eyebrows of the step S72 are continued, and the eyebrow processing section 37 fills the expansion field with the nearby skin color, thereby eliminating the eyebrow. Further, the eyebrow processing unit 37 performs a fusion process at the boundary portion of the expansion region. • The instruction curve corresponding to the change designation in step S81 is continued to be processed 10, and the eyebrow processing unit 37 changes the field (contour) of the eyebrow contour shape of the display recognition in accordance with the designation. Fig. 39 is a view showing an image of an example of the processing of the indication curve corresponding to the change designation. As shown in the partial image 2201, the eyebrow processing unit 37 replaces the field of the recognized eyebrow contour shape with the contour 2202 of the horizontal axis and the plurality of longitudinal rectangles 15 and performs the contouring by changing the axis shape and the rectangular height. 2203 change processing. • The change eyebrow generation processing of step S82 is continued, and the eyebrow processing unit 37 generates a change eyebrow from the outline 2203. The process of step S9 is continued, and the eyebrow processing unit 37 attaches a change eyebrow to the image before makeup, and obtains the image after 2^20. (Powder Treatment) As shown in Fig. 40, the foundation treatment unit 36 included in the simulator main application 28 refers to the left and right eyebrows, the brows and the eyebrows, and the eye and the center of the nose. . The first side is a miscellaneous treatment station. 31 200805175 The image of the tracking point referenced. Fig. 41 is a flow chart showing an example of foundation treatment. The foundation treatment package includes the contour generation processing of step S101, the desalination processing for the target image of step §1〇2, and the image attachment processing of step S103. Further, the processing of steps sl1 to sl3 is repeated in accordance with the target area. The contour generation processing of step S101 is continued, and the foundation treatment unit % produces an outline of the forehead, nose, cheek (left and right), etc., as shown in Fig. 42. Fig. 42 is an image showing an example of a contour. In addition, the outline size, position, and the like can also be specified by parameters. (10) The desalination process for the target image in step S102 is continued, and the foundation processing unit 36 performs the desalination process on the target image corresponding to the generated carousel as shown in Fig. 43. Fig. 43 is a view showing an image of an original image before the desalination process and an original image after the desalination process. The foundation treatment unit% 平滑 smoothes the thickness of the skin by the desalination treatment of the target image. 15 The image sticking process of step S103 is continued, and the foundation processing unit 36 fades the processed target image on the forehead, nose, and cheek (left and right) of the image before makeup, and obtains the image after the image is processed. Recorded images. As described above, according to the present invention, it is possible to accurately apply makeup to the face of the user included in the moving image when the processing load is small. Further, the photographing means disclosed in the scope of the patent application 20 corresponds to the camera 2, the control means corresponds to the control unit 8, the page display means corresponds to the display 7, the face recognition processing means corresponds to the face recognition processing section 33, and the makeup processing The mechanism corresponds to the makeup processing unit 35, the operation mechanism corresponds to the operation panel 4, the half mirror mechanism corresponds to the half mirror 3, and the printing mechanism corresponds to the printer 5. Further, the present invention is not limited to the specifically disclosed embodiments, and various changes and modifications may be made without departing from the scope of the patent application. This Qiming's purely pseudo-device is a real-time simulation, so it is used with a static screen or a custom makeup simulator! Differently, the face tracking point of the face can be instantly recognized, and the simulation is performed based on the tracking point. Therefore, the following items that could not be completed in the past can be achieved. This ^ month's makeup verification device! Instant simulation can be performed, and the makeup simulation device i of the present invention can be simulated from the front of the face as in the past, and can be simulated from the side, so that the simulation effect such as blush or the technical confirmation can be easily performed. . The makeup simulation agricultural system of the present invention can be simulated by the past plane, and can be expressed by the past plane to capture the stereoscopic analysis, stereoscopic effect or texture. Further, the makeup simulation device of the present invention simultaneously recognizes the faces of a majority of people, so that simultaneous simulation of most people can be performed at the same time. The makeup simulation device of the present invention has excellent facial recognition function, so that it can be automatically classified with various face types or males and females, and can be combined with face type or classified makeup. For example, the makeup simulation device of the present invention! Both of them can perform cosmetic simulations at the same time. The international application is based on the Japanese Patent Laid-Open No. 2006-9268, filed on the date of January 1, 2006, the entire disclosure of which is incorporated herein by reference. [Circular Simple Description] Fig. 1 is an external view of a first embodiment of the makeup simulation device of the present invention. Further, the present invention is not limited to the specific disclosed embodiments, and various changes and modifications may be made without departing from the scope of the patent application. The makeup simulation device 1 of the present invention belongs to the instant simulation, so that it can instantly recognize the 5 trace points of the face and simulate it based on the tracking point, similarly to the use of the stationary face masker or the conventional makeup simulation device. Therefore, the following items that could not be completed in the past can be achieved. 10 Technical confirmation. The makeup model of the present invention can be carried out on the same day, and the makeup simulation device of the present invention can be performed from the front of the face as in the past, and can be simulated from the side, so that blushing can be easily performed. The simulation effect or the texture of the present invention is difficult to set by the (4) simulation, and can be expressed by the past planes. The analysis of the face, the stereoscopic effect or the makeup simulation device of the present invention! It can recognize 15 faces of most people at the same time, so it can simultaneously simulate most people. The device of the present invention is excellent in facial recognition function, and can be automatically classified with various face types, or males and females, and can be combined with face type or classified makeup. For example, the makeup simulation device of the present invention! Both of them can perform cosmetic simulations at the same time.曰Applied Japanese Patent ‘and this International Application 20 This International Application is based on the January 17, 2006 publication No. 2006-9268 and claims the priority to all of the contents of the 2006-9268.

【圖式簡單明;J 實施例之外觀圖。 弟1圖係本發明化妝模擬裝置的第_ 33 200805175 第2圖係化妝模擬裝置的第一實施例之截面圖。 第3圖係本發明化妝模擬裝置的第二實施例之外觀圖。 第4圖係化妝模擬裝置的第二實施例之截面圖。 第5圖係本發明化妝模擬裝置的第三實施例之外觀圖。 5 第6圖係化妝模擬裝置的第三實施例之截面圖。 第7圖係本發明化妝模擬裝置的第四實施例之外觀圖。 第8圖係化妝模擬裝置的第四實施例之截面圖。 第9圖係本發明化妝模擬裝置的第五實施例之外觀圖。 第10圖係化妝模擬裝置的第五實施例之截面圖。 10 第11圖係化妝模擬裝置的第五實施例之硬體構造圖。 第12圖係顯示化妝模擬裝置所進行的處理概要之流程 圖。 第13圖係顯示於顯示器之主畫面及顯示於操作面板之 操作晝面的一例之影像圖。 15 第14圖係顯示化妝模擬裝置進行化妝模擬以外的處理 之影像圖。 第15圖係本發明化妝模擬系統的其中一實施例之系統 構造圖。 第16圖係顯示模擬主應用所進行的處理之晝面影像 20 圖。 第17圖係化妝前影像所含有的使用者臉部之影像畫 面0 第18圖係顯示追蹤點的一例之影像圖。 第19圖係化妝處理參數檔的一例之構成圖。 34 200805175 第20圖係顯示口紅處理所參照的追蹤點之影像圖。 第21圖係顯示口紅處理的一例之流程圖。 第22圖係顯示輪廓取得處理的一例之影像圖。 第23圖係輪廓取得用影像600之唇部8點及鼻子3點之 5 追蹤點,與以唇部8點及鼻子3點之追蹤點為基礎進行再搜 尋而取得的輪廓取得用影像601之點的比較晝面。 第24圖係顯示從唇部8點及鼻子3點之追蹤點求得預設 點的處理之一例之影像圖。 藝第25圖係顯示根據點或預設點完成輪廓之處理之影像 10 圖。 第26圖係顯示上色圖產生處理的一例之影像圖。 第27圖係顯示根據上色圖的上色處理之一例之影像 圖。 第28圖係顯示眼影處理所參照的追蹤點之影像圖。 15 第29圖係顯示眼影處理的一例之流程圖。 第30圖係顯示基本輪廓產生處理的一例之影像圖。 • 第31圖係顧示上色輪廓產生處理的一例之影像圖。 第32圖係顯示未進行淡化處理之上色圖、及進行淡化 處理之上色圖的一例之影像圖。 20 第33圖係顯示臉頰處理所參照的追蹤點之影像圖。 第34圖係顯示臉頰處理的一例之流程圖。 第35圖係顯示上色輪廓的一例之影像圖。 第36圖係顯示眉處理所參照的追蹤點之影像圖。 第37圖係顯示眉處理的一例之流程圖。 35 200805175 第38圖係顯示眉輪廓取得處理的一例之影像圖。 第39圖係顯示與變化指定對應之指示曲線製成處理的 一例之影像圖。 第40圖係顯示粉底處理所參照的追蹤點之影像圖。 5 第41圖係顯示粉底處理的一例之流程圖。 第42圖係顯示輪廓的一例之影像晝面。 第43圖係顯示淡化處理前的原本影像、及淡化處理後 的原本影像之一例之影像圖。 φ 【主要元件符號說明】 1...化妝模擬裝置 16...陳列箱 2...照相機 17…1C標籤讀寫器 3...半鏡 20...化妝模擬系統 4...操作面板 21...類比照相機 5...列印機 22... USB擷取元件 6...照明 23...動態影像檔 7...顯示器 24...化妝照相機 8...控制部 25...靜止畫面系統 9...透明板 26...動態影像檔 10…運算處理裝置 27...共用記憶體 11…顯示裝置 28...模擬器主應用程式 12....驅動裝置 29...介面應用程式 13...輔助記憶裝置 31... DirectX 14…記錄媒體 32.··專用 API 15...觸碰式面板顯示器 33...臉部辨識處理部 36 200805175 34...追蹤點 52...動態影像控制物件 35…化妝處理部 5 3...動態影像顯示物件 36...粉底處理部 54...其他控制器 37...眉處理部 100〜in、200〜210、300〜309、 38...眼影處理部 400〜402、500〜502、700〜702、 39... 口紅處理部 900〜903、1001、1002、1ΗΠ、 40...臉頰處理部 200卜 210卜 2102···影像 41...商品資訊 504...上色圖 42...動態影像伺服器 600〜601…輪廓取得用影像 43...臉部辨識處理部 801…化妝處理參數檔 44、45...追蹤點 803…化妝前影像 5 0…ActiveX控制器 804…化妝後影像 51.. .ActiveX viewer 2202、2203···輪廓 37[The drawing is simple and clear; the appearance of the J embodiment. 1 is a cross-sectional view of a first embodiment of a makeup simulation device according to a cosmetic simulation device of the present invention. Fig. 3 is an external view showing a second embodiment of the makeup simulation device of the present invention. Fig. 4 is a cross-sectional view showing a second embodiment of the makeup simulation device. Fig. 5 is an external view showing a third embodiment of the makeup simulation device of the present invention. 5 Fig. 6 is a cross-sectional view showing a third embodiment of the makeup simulation device. Fig. 7 is an external view showing a fourth embodiment of the makeup simulation device of the present invention. Fig. 8 is a cross-sectional view showing a fourth embodiment of the makeup simulation device. Fig. 9 is an external view showing a fifth embodiment of the makeup simulation device of the present invention. Fig. 10 is a cross-sectional view showing a fifth embodiment of the makeup simulation device. 10 Fig. 11 is a hardware configuration diagram of a fifth embodiment of the makeup simulation device. Fig. 12 is a flow chart showing an outline of processing performed by the makeup simulation device. Fig. 13 is an image view showing an example of the main screen of the display and the operation panel displayed on the operation panel. 15 Fig. 14 is a view showing an image of a makeup simulation device performing processing other than makeup simulation. Fig. 15 is a system configuration diagram showing an embodiment of the makeup simulation system of the present invention. Figure 16 is a diagram showing a top view image of the processing performed by the main application. Fig. 17 is an image of a user's face contained in the image before makeup. Fig. 18 is an image showing an example of a tracking point. Fig. 19 is a configuration diagram showing an example of a makeup processing parameter file. 34 200805175 Figure 20 shows an image of the tracking point referenced by the lipstick treatment. Fig. 21 is a flow chart showing an example of lipstick treatment. Fig. 22 is a view showing an image of an example of the contour acquisition processing. Fig. 23 is a contour acquisition image 601 obtained by re-searching based on the 8 points of the lip of the contour acquisition image 600 and the 3 points of the nose 3 points and the tracking point of 8 points of the lip and 3 points of the nose. The comparison of points. Fig. 24 is an image diagram showing an example of a process of obtaining a preset point from a tracking point of 8 points of the lip and 3 points of the nose. Figure 25 shows the image of the contour processed according to the point or preset point. Fig. 26 is a view showing an image of an example of the generation process of the color map. Fig. 27 is a view showing an image of an example of coloring processing according to a color map. Figure 28 shows an image of the tracking point to which the eye shadow processing is referred. 15 Fig. 29 is a flow chart showing an example of eye shadow processing. Fig. 30 is a view showing an image of an example of basic contour generation processing. • Fig. 31 is an image diagram showing an example of the color contour generation processing. Fig. 32 is a view showing an image of a color map which has not been subjected to the desalination process and a color map which has been subjected to the desalination process. 20 Figure 33 shows an image of the tracking point referenced by the cheek processing. Figure 34 is a flow chart showing an example of cheek processing. Fig. 35 is an image diagram showing an example of a colored outline. Figure 36 shows an image of the tracking point referenced by the eyebrow processing. Fig. 37 is a flow chart showing an example of the eyebrow processing. 35 200805175 Figure 38 is an image showing an example of the process of obtaining the eyebrow contour. Fig. 39 is a view showing an image of an example of the processing of the indication curve corresponding to the change designation. Figure 40 shows an image of the tracking point referenced by the foundation treatment. 5 Fig. 41 is a flow chart showing an example of foundation treatment. Fig. 42 is an image showing an example of a contour. Fig. 43 is a view showing an image of an original image before the desalination process and an original image after the desalination process. Φ [Description of main component symbols] 1... Makeup simulation device 16... Showcase 2... Camera 17... 1C tag reader/writer 3... Half mirror 20... Makeup simulation system 4... Operation Panel 21... analog camera 5... printer 22... USB capture component 6... illumination 23... motion image file 7... display 24... makeup camera 8... control Part 25: still picture system 9...transparent board 26...motion picture file 10...calculation processing unit 27...shared memory 11...display device 28...simulator main application 12... Drive device 29...Interface application program 13...Auxiliary memory device 31... DirectX 14...Recording medium 32.·Special API 15...Touch panel display 33...Face recognition processing unit 36 200805175 34...Tracking point 52... Motion picture control object 35... Makeup processing unit 5 3... Motion picture display object 36... Foundation processing unit 54... Other controller 37... Eyebrow processing Parts 100 to in, 200 to 210, 300 to 309, 38...eye shadow processing units 400 to 402, 500 to 502, 700 to 702, 39... lipstick processing units 900 to 903, 1001, 1002, 1 and 40 ... cheek processing department 200 卜 210 2102···Image 41...Product information 504...Coloring map 42...Motion video server 600 to 601...Outline acquisition image 43...Face recognition processing unit 801...Cosmetic processing parameter file 44 , 45... Tracking point 803...Pre-makeup image 5 0...ActiveX controller 804...After makeup image 51.. ActiveX viewer 2202, 2203···Profile 37

Claims (1)

200805175 、申謗專利範圓·· 種化妝贿纟統,# 像進行㈣者,包含有·· 者臉部拍攝後的動態影 部並輸出動態影像 者 攝影機構,係可拍攝使用者臉 10 控制機構,係可接收從前述 & 像,A 攝办機構輸出之動悲影 像亚對則迷動態影像進行影像處 像者, _不機構’射顯*從前述控制 理且輸出者;及 機構輸出之動態影 又 ,前述控制機構更包含有·· 臉部辨識處理機構,係根 a、 影像辨識㈣錢部者;及、錢雜從前述動 化妝處理機構,係根據前述追縱 所含有的使用者臉部進行 機構者。 、 ,並輪出至前述顯 2·如申請專利範圍第1項之化妝模擬系 態 前述動態影像 15 20 統 處理機構至少包含 …u ’其中前述化妝 包含有: 福構中的1者以上,且該等機構 粉底處理機構,係 所含有的❹㈣部進㈣對㈣動態影像 所含 眉處理機構,係相 —' 含有的使用者臉部進;二=縱點對前述動態影像所 38 200805175 臉頰處理機構,係根據前述追縱點對前述動態 所含有的使用者臉部進行臉頰處理者;〗 〜象 眼影處理機構,係根據前述追縱點對前述動離 所含有的使用者臉部進行眼影處理者。 —像 5200805175 、申谤专利范圆·· Kind of makeup bribe system, #像进行(四), including the dynamic image of the face after shooting, and outputting the motion imager camera, which can capture the user's face 10 control The mechanism can receive the image of the moving image from the aforementioned & image, A camera, and the image of the moving image, _ not the mechanism 'imaging* from the aforementioned control and output; and the mechanism output In addition, the control mechanism further includes a face recognition processing mechanism, a root a, an image recognition (4) money department, and a miscellaneous makeup processing mechanism based on the use of the tracking. The person who performs the face on the face. And the rotation of the makeup simulation system according to the first aspect of the patent application is as follows: the motion processing mechanism includes at least one of the following components: wherein the makeup includes one or more of the structures, and The foundation treatment mechanism of these institutions is the (4) part of the (4) pair (4) moving image containing eyebrow processing mechanism, the phase - 'containing the user's face into the body; the second = the vertical point to the aforementioned dynamic imagery 38 200805175 cheek treatment The mechanism is to perform a cheek processing on the face of the user included in the dynamics according to the tracking point; and the eye shadow processing mechanism performs eye shadow processing on the face of the user included in the moving away according to the tracking point. By. - like 5 10 1510 15 20 3.:申,1圍第1項之化妝模擬系統,其中前述化妝 ί理機構可產生根據前述追蹤點對前述動態影像所含 有的^者臉部進行預定化妝之輪廓,並根據該輪廟進 4·如申請專賴圍第1項之化妝模㈣統,其中前述化妝 處理機構係每隔預定時間依序對前述動態影像所含有 的使用者臉部進行多種類之化妝,並將化妝後的影像晝 面輸出至前述顯示機構。 — 5.如申請專利範圍第4項之化妝模擬系統,更包含有儲存 機構,且贿存機射依照每種化_存心化妝成前 述影像畫面之商品資訊, 又噹述化妝處理機構可從前述儲存機構取得用以 化妝成前述影像畫面之商品資訊,並輪出至前述顯示機 構。 6·如申請專利範圍第i項之化妝模擬系統,其中前述化妝 處理機構可將比較晝面輸出至前述顯示機構,且該比較 畫面係由前述動態影像所含有的使用者臉部之化妝前 及化妝後的影像晝面所構成者。 7·如申請專利範圍第1項之化妝模擬系統,更包含有: 讀取機構,係可讀取使用者所選擇的物品之識別資 39 2〇〇8〇5l75 訊者;及 對應機構’係可使前述物品與前述識別資訊對應 者, 又’ W述化妝處理機構可根據前述對應機構特定與 巧取自前述讀取機構之前述識別資訊對應之前述物 ’並將使用該物品對前述動態影像所含有的使用者臉 $進行預^化妝後之影像晝面,輸出至前賴示機構。 種化妝杈擬裝置,係可對使用者臉部拍攝後的動態影 像進行化妝者,包含有: 攝影機構,係可拍攝使用者臉部並輸出動態影像 者; 控制機構’係可接收從前賴影機構輸出之動態影 像’並對前述動態影像進行影像處理且輸出者;及 顯示機構,係可顯示從前述㈣機構輸出之 像者, # μ 又’前述控制機構更包含有·· /臉部辨識處理機構,係根據預定追縱點從前述動態 影像辨識使用者臉部者;及 20 機構者 化妝處理機構’係根據前述追蹤點對前述動態影像 所含有的使用者臉部進行預定㈣,並輸Μ前也顯示 =申請專利_第8項之化妝模㈣置’更包含有設在 4述顯示機構的顯示方向側之半 可在 爾述顯示機構顯示動態影像時,使光從 鏡機構’且該半鏡機構 前述顯示機 40 200805175 構侧透射至使用者側,並在前述顯示機構並未顯示動態 衫像日寸’反射來自前述使用者側之光。 ίο.如申請專利範圍第8項之化妝模擬裝置,更包含有設在 兩述顯示機構的顯示方向側之半鏡機構, 5 又,前述攝影機構係配置於可在光透射過前述半鏡 機構時,拍攝前述使用者臉部之位置上, •而前述半鏡機構可使光從前述顯示機構侧透射至 % $述使用者側,並使光從前述使用者側透射至前述攝影 機構側。 1.如申请專利範圍第8項之化妝模擬裝置,更包含有印刷 機構,且該印刷機構可印刷顯示於前述顯示機構之使用 者臉部化妝後的影像畫面。 12·如申請專利範圍第8項之化妝模擬裝置,更包含有陳列 15 機構,且該陳列機構可陳列用以化妝成顯示於前述顯示 機構之使用者臉部化妝後的影像晝面之商品。 • I3·如申請專利範圍第8項之化妝模擬裝置,更包含有: 讀取機構,係可讀取使用者所選擇的物品之識別資 訊者;及 2 對應機構,係可使前述物品與前述識別資訊對應 者, 又,前述化妝處理機構可根據前述對應機構特定與 5買取自前述讀取機構之前述識別資訊對應之前述物 品,並將使用該物品對前述動態影像所含有的使用者臉 部進行預定化妝後之影像畫面,輸出至前述顯示機構。 41 200805175 10 14. -種化妝触方法,錢料可對使用者 動態影像進行化妝之化妝模擬系統中, 臉部拍攝後的 包含有下列步 :從攝影機構拍攝的動態影像辨識使用者 臉^亚_崎化妝模擬; 耵述控制機構根據預定追蹤點 所含有的使用者臉部;及 像 月』述&制機構根據前料㈣ 含有的使用錢 n〜像所 接預疋化妝,並輸出至前述顯示 構 機 15· —種化妝模擬程式 裝置及輪出裝置之電『設有運算處理裝置、記憶 後的動態讀奸^尋行㈣使用者臉部拍攝 下列步驟:了化妝之程式’且該化妝模擬程式執行 15 20 )述運”處理裝置從攝影 識使用者臉部,並開始進行化妝模擬;動〜像辨 根據預定追縱點辨‘ 者臉部;及 辨識㈣動態影像所含有的使用 根據前述追蹤點對 臉部刪者 4220: The application of the makeup simulation system of the first item of the first aspect, wherein the makeup mechanism can generate a contour of the predetermined makeup on the face of the moving image according to the tracking point, and according to the round temple Into the application of the makeup model (4) of the first item, wherein the makeup processing mechanism sequentially performs various types of makeup on the face of the user contained in the motion picture every predetermined time, and after makeup The image is output to the aforementioned display mechanism. - 5. The makeup simulation system of claim 4 of the patent application scope further includes a storage mechanism, and the bribe storage machine is configured according to the product information of each of the images, and the makeup processing mechanism can be described above. The storage unit obtains product information for making up the image of the image and rotates to the display unit. 6. The makeup simulation system of claim i, wherein the makeup processing mechanism outputs a comparison face to the display mechanism, and the comparison picture is before the makeup of the user's face contained in the motion picture and The image of the image after makeup. 7. The makeup simulation system of claim 1 of the patent scope further includes: a reading mechanism that can read the identification of the item selected by the user 39 2〇〇8〇5l75; and the corresponding institution The article may be associated with the identification information, and the makeup processing mechanism may specify the object corresponding to the identification information that is manually taken from the reading mechanism according to the corresponding mechanism, and use the article to the motion image. The user's face contained in the user's face is pre-painted and outputted to the front display mechanism. The makeup simulation device can perform makeup on the moving image after the user's face is photographed, including: a photographing mechanism, which can photograph the user's face and output a dynamic image; the control mechanism can receive the former image The dynamic image output by the mechanism 'and the image processing and output of the moving image; and the display mechanism can display the image output from the (4) mechanism, # μ and the above control mechanism further includes ···face recognition The processing mechanism identifies the user's face from the moving image according to the predetermined tracking point; and 20 the body makeup processing mechanism 'predetermines (4) the user's face included in the moving image according to the tracking point, and loses Also displayed in front of the = 申请 申请 申请 申请 申请 申请 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 第 且 且 且The half mirror mechanism of the display machine 40 200805175 is transmitted to the user side, and the display mechanism does not display the dynamic shirt image The light from the user side. Ίο. The makeup simulation device of claim 8 further includes a half mirror mechanism disposed on a display direction side of the two display mechanisms, and wherein the camera mechanism is disposed to transmit light through the half mirror mechanism At the time of photographing the face of the user, the half mirror mechanism transmits light from the display mechanism side to the user side, and transmits light from the user side to the photographing mechanism side. 1. The makeup simulation device of claim 8, further comprising a printing mechanism, wherein the printing mechanism prints an image of the image displayed on the face of the user of the display device. 12. The makeup simulation device of claim 8 further comprising a display 15 mechanism, wherein the display mechanism can display a product for making up the image of the face of the user's face after being displayed on the display device. • I3. The makeup simulation device of claim 8 further includes: a reading mechanism that reads an identification information of an item selected by the user; and 2 a corresponding mechanism that enables the aforementioned item to be In addition, the makeup processing means may specify the item corresponding to the identification information of the reading means in accordance with the corresponding mechanism, and use the item to face the user's face included in the moving image. The image screen after the predetermined makeup is performed is output to the display means. 41 200805175 10 14. - A kind of makeup touch method, in which the money material can be used to make a make-up simulation system for the user's motion image. After the face shot, the following steps are included: the user's face is recognized from the motion picture captured by the camera mechanism. _Saki makeup simulation; narration control mechanism according to the user's face contained in the scheduled tracking point; and the month-by-month description & The display machine 15 is a type of makeup simulation program device and a rotation device. "The arithmetic processing device is provided, and the dynamic reading after the memory is searched. (4) The user's face is photographed in the following steps: a makeup program' and The makeup simulation program executes 15 20) The processing device recognizes the user's face from the photography and starts the makeup simulation; the motion recognition determines the face of the person according to the predetermined tracking point; and recognizes (4) the use of the motion image. According to the aforementioned tracking point, the face is deleted 42
TW96101598A 2006-01-17 2007-01-16 Make-up simulation system, make-up simulation method, make-up simulation method and make-up simulation program TWI421781B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006009268 2006-01-17
JP2007005098A JP5191665B2 (en) 2006-01-17 2007-01-12 Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program

Publications (2)

Publication Number Publication Date
TW200805175A true TW200805175A (en) 2008-01-16
TWI421781B TWI421781B (en) 2014-01-01

Family

ID=38493846

Family Applications (1)

Application Number Title Priority Date Filing Date
TW96101598A TWI421781B (en) 2006-01-17 2007-01-16 Make-up simulation system, make-up simulation method, make-up simulation method and make-up simulation program

Country Status (2)

Country Link
JP (1) JP5191665B2 (en)
TW (1) TWI421781B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI554951B (en) * 2014-06-17 2016-10-21 北京京東尚科信息技術有限公司 Apparatus and method for rendering virtual try-on
CN108875462A (en) * 2017-05-16 2018-11-23 丽宝大数据股份有限公司 Eyebrow moulding guidance device and its method

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4924376B2 (en) * 2007-11-19 2012-04-25 大日本印刷株式会社 Makeup recording distribution system
CN103885461B (en) * 2012-12-21 2017-03-01 宗经投资股份有限公司 Automatically the moving method of the color make-up instrument of color make-up machine
JP6008323B2 (en) 2013-02-01 2016-10-19 パナソニックIpマネジメント株式会社 Makeup support device, makeup support method, and makeup support program
WO2014171142A1 (en) 2013-04-17 2014-10-23 パナソニックIpマネジメント株式会社 Image processing method and image processing device
US9953462B2 (en) 2014-01-31 2018-04-24 Empire Technology Development Llc Augmented reality skin manager
JP6334715B2 (en) * 2014-01-31 2018-05-30 エンパイア テクノロジー ディベロップメント エルエルシー Augmented Reality Skin Evaluation
WO2015116186A1 (en) 2014-01-31 2015-08-06 Empire Technology Development, Llc Evaluation of augmented reality skins
JP6205498B2 (en) 2014-01-31 2017-09-27 エンパイア テクノロジー ディベロップメント エルエルシー Target person-selectable augmented reality skin
WO2017149778A1 (en) * 2016-03-04 2017-09-08 株式会社オプティム Mirror, image display method, and program
CN109583261A (en) * 2017-09-28 2019-04-05 丽宝大数据股份有限公司 Biological information analytical equipment and its auxiliary ratio are to eyebrow type method
TWI708164B (en) * 2019-03-13 2020-10-21 麗寶大數據股份有限公司 Virtual make-up system and virtual make-up coloring method
WO2023089816A1 (en) * 2021-11-22 2023-05-25 日本電気株式会社 Information processing device, information processing system, information processing method, and non-transitory computer-readable medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4539585A (en) * 1981-07-10 1985-09-03 Spackova Daniela S Previewer
JP2001104050A (en) * 1999-10-06 2001-04-17 Seiko Epson Corp Makeup support method using image processing, dressing table, and recording medium having makeup support processing program using image processing recorded
ATE458232T1 (en) * 2000-06-27 2010-03-15 Rami Orpaz MAKEUP AND FASHION JEWELRY DISPLAY METHOD AND SYSTEM
FR2818529A1 (en) * 2000-12-21 2002-06-28 Oreal METHOD FOR DETERMINING A DEGREE OF A BODY TYPE CHARACTERISTIC
JP3984191B2 (en) * 2002-07-08 2007-10-03 株式会社東芝 Virtual makeup apparatus and method
JP2004234571A (en) * 2003-01-31 2004-08-19 Sony Corp Image processor, image processing method and photographing device
TWI227444B (en) * 2003-12-19 2005-02-01 Inst Information Industry Simulation method for make-up trial and the device thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI554951B (en) * 2014-06-17 2016-10-21 北京京東尚科信息技術有限公司 Apparatus and method for rendering virtual try-on
CN108875462A (en) * 2017-05-16 2018-11-23 丽宝大数据股份有限公司 Eyebrow moulding guidance device and its method

Also Published As

Publication number Publication date
JP5191665B2 (en) 2013-05-08
TWI421781B (en) 2014-01-01
JP2007216000A (en) 2007-08-30

Similar Documents

Publication Publication Date Title
TW200805175A (en) Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program
KR101363691B1 (en) Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
US11450075B2 (en) Virtually trying cloths on realistic body model of user
RU2668408C2 (en) Devices, systems and methods of virtualising mirror
CN101779218B (en) Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
US20180278879A1 (en) System and method for digital makeup mirror
US20100189357A1 (en) Method and device for the virtual simulation of a sequence of video images
CN110390632B (en) Image processing method and device based on dressing template, storage medium and terminal
CN108765273A (en) The virtual lift face method and apparatus that face is taken pictures
JP5261586B2 (en) Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
JPH10255066A (en) Face image correcting method, makeup simulating method, makeup method, makeup supporting device, and foundation transfer film
US20040110113A1 (en) Tool and method of making a tool for use in applying a cosmetic
WO2018005884A1 (en) System and method for digital makeup mirror
WO2014081394A1 (en) Method, apparatus and system for virtual clothes modelling
KR20200024105A (en) Computer graphics synthesis system and method thereof
JP2009039523A (en) Terminal device to be applied for makeup simulation
CN109191393A (en) U.S. face method based on threedimensional model
JP7463774B2 (en) MAKEUP SIMULATION DEVICE, MAKEUP SIMULATION METHOD, AND PROGRAM
JP4219521B2 (en) Matching method and apparatus, and recording medium
JP2011022733A (en) Device and program for simulating makeup, and counter selling support method
KR100422470B1 (en) Method and apparatus for replacing a model face of moving image
FR2920938A1 (en) Image simulating method for beauty industry, involves deforming parametric models to adapt to contours of features on face, and detecting and analyzing cutaneous structure of areas of face by maximizing gradient flow of brightness
JP6969622B2 (en) Shooting game equipment and programs
BR112016002493B1 (en) METHOD FOR PERFORMING COLOR CHANGE IN AN OBJECT WITHIN A DIGITAL IMAGE

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees