TW201945898A - Information processing device and program - Google Patents

Information processing device and program

Info

Publication number
TW201945898A
TW201945898A
Authority
TW
Taiwan
Prior art keywords
sight
image
line
user
area
Prior art date
Application number
TW108112650A
Other languages
Chinese (zh)
Inventor
河原純一郎
薮崎智穂
小林恵子
今井史
Original Assignee
日商資生堂股份有限公司 (Shiseido Company, Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日商資生堂股份有限公司 (Shiseido Company, Limited)
Publication of TW201945898A

Classifications

    • AHUMAN NECESSITIES
    • A45HAND OR TRAVELLING ARTICLES
    • A45DHAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D44/00Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This information processing device is provided with: a means for presenting an image including a face to a user; a means for acquiring gaze information relating to the movement of the user's gaze; a means for calculating, on the basis of the gaze information, the distribution of the user's gaze in the image; a means for determining, on the basis of the gaze distribution, recommendation information relating to the image; and a means for presenting the recommendation information.

Description

Information processing device and program

The present invention relates to an information processing device and a program.

In recent years, as cosmetics lineups have grown, it has become increasingly difficult for sales staff at cosmetics shops, and for consumers themselves, to select cosmetics that match a consumer's preferences.

Against this background, techniques that generate a makeup simulation image by running, on a computer, a makeup simulation that uses a consumer's face image have become widespread (see, for example, Japanese Re-publication of PCT Application No. 2015-029371).

[Problems to be Solved by the Invention]

However, even when a person feels at one moment that something matches his or her preferences, the person may later feel that it does not. This is because a person's preferences depend on that person's subjectivity, and subjectivity is not fixed.

For example, even if a consumer selects, from among a plurality of makeup simulation images generated by the technique of Re-publication No. 2015-029371, the one that seems to match his or her preferences, the consumer may later feel that the selected image does not match those preferences after all, or that an unselected image matches them better.

In this way, the evaluation a person holds of an observed face reflects that person's subjectivity. It is therefore difficult to present to a person his or her latent evaluation of the observed face.

An object of the present invention is to present to a user the user's latent evaluation of a face the user observes.
[Technical Means for Solving the Problem]

One aspect of the present invention is an information processing device comprising: a means for presenting an image including a face to a user;
a means for acquiring gaze information relating to the movement of the user's gaze;
a means for calculating, on the basis of the gaze information, the distribution of the user's gaze in the image;
a means for determining, on the basis of the gaze distribution, recommendation information relating to the image; and
a means for presenting the recommendation information.
[Effects of the Invention]

According to the present invention, the user's latent evaluation of a face the user observes can be presented to the user.

Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings. In the drawings used to describe the embodiment, identical components are, as a rule, given the same reference symbols, and redundant description is omitted.

(1) First Embodiment
The first embodiment of the present invention will be described.

(1-1) Configuration of the Information Processing System
The configuration of the information processing system will be described. Figs. 1 and 2 are block diagrams showing the configuration of the information processing system of the first embodiment.

As shown in Figs. 1 and 2, the information processing system 1 comprises a client device 10, an eye tracker 20, and a server 30.
The client device 10 and the server 30 are connected via a network NW (for example, the Internet or an intranet).
The eye tracker 20 is connected to the client device 10.

The client device 10 is an example of an information processing device that sends requests to the server 30, and is, for example, a smartphone, a tablet terminal, or a personal computer.
A user U can give user instructions to the client device 10.

The eye tracker 20 is configured to detect the movement of user U's gaze and to generate an eye-tracking signal (an example of "gaze information") relating to that movement. Specifically, the eye tracker 20 measures, at fixed time intervals, the coordinates indicating the position of the user's gaze.
The eye tracker 20 sends the eye-tracking signal to the client device 10.

The server 30 is an example of an information processing device that provides, to the client device 10, responses corresponding to the requests sent by the client device 10. The server 30 is, for example, a Web server.

(1-1-1) Configuration of the Client Device
The configuration of the client device 10 will be described with reference to Fig. 1.

As shown in Fig. 1, the client device 10 comprises a memory device 11, a processor 12, an input/output interface 13, a communication interface 14, a display 15, and a camera 16.

The memory device 11 is configured to store programs and data. The memory device 11 is, for example, a combination of ROM (Read Only Memory), RAM (Random Access Memory), and storage (for example, flash memory or a hard disk).

The programs include, for example, the following:
· an OS (Operating System) program
· an application program that executes information processing (for example, a makeup simulation application)

The data include, for example, the following:
· databases referenced during information processing
· data obtained by executing information processing (that is, the results of executing information processing)

The processor 12 is configured to realize the functions of the client device 10 by running the programs stored in the memory device 11. The processor 12 is an example of a computer.

The input/output interface 13 is configured to acquire user instructions from input devices connected to the client device 10, to output information to output devices connected to the client device 10, and to acquire the eye-tracking signal from the eye tracker 20.
The input devices are, for example, a keyboard, a pointing device, a touch panel, or a combination of these.
The output device is, for example, the display 15.

The communication interface 14 is configured to control communication between the client device 10 and the server 30.

The display 15 is configured to display images generated by the processor 12.

The camera 16 is configured to capture images (for example, an image of the face of the user of the client device 10).

(1-1-2) Configuration of the Server
The configuration of the server 30 will be described with reference to Fig. 1.

As shown in Fig. 1, the server 30 comprises a memory device 31, a processor 32, an input/output interface 33, and a communication interface 34.

The memory device 31 is configured to store programs and data. The memory device 31 is, for example, a combination of ROM, RAM, and storage (for example, flash memory or a hard disk).

The programs include, for example, the following:
· an OS program
· an application program that executes information processing

The data include, for example, the following:
· databases referenced during information processing
· the results of executing information processing

The processor 32 is configured to realize the functions of the server 30 by running the programs stored in the memory device 31. The processor 32 is an example of a computer.

The input/output interface 33 is configured to acquire user instructions from input devices connected to the server 30 and to output information to output devices connected to the server 30.
The input devices are, for example, a keyboard, a pointing device, a touch panel, or a combination of these.
The output device is, for example, a display.

The communication interface 34 is configured to control communication between the server 30 and the client device 10.

(1-2) Outline of the First Embodiment
The outline of the first embodiment will be described. Fig. 3 is an explanatory diagram of the outline of the first embodiment.

As shown in Fig. 3, in the first embodiment a simulation image SIMG including the face of a person (for example, user U) is presented. In the simulation image SIMG, as in a mirror image, user U's right eye RE is placed on the viewer's right, and user U's left eye LE is placed on the viewer's left.
Next, an eye-tracking signal relating to the movement of user U's gaze while observing the simulation image SIMG is generated.
Next, based on the eye-tracking signal, the gaze distribution is calculated, in the coordinate space of the pixels constituting the simulation image SIMG (hereinafter, the "image space"), for a first region GR1 located on the viewer's right (that is, containing the right eye RE) and for a second region GR2 located on the viewer's left (that is, containing the left eye LE).
Next, based on the gaze distributions of the first region GR1 and the second region GR2, recommendation information relating to user U's latent evaluation of the simulation image SIMG is provided.

In this way, in the first embodiment, based on the distribution (that is, the bias) of user U's gaze within the image space of a single simulation image SIMG, recommendation information reflecting user U's latent evaluation of the simulation image SIMG is presented.

(1-3) Databases
The databases of the first embodiment will be described. The following databases are stored in the memory device 31.

(1-3-1) User Information Database
The user information database of the first embodiment will be described. Fig. 4 shows the data structure of the user information database of the first embodiment.

The user information database of Fig. 4 stores user information relating to users.
The user information database includes a "user ID" field, a "user name" field, a "user image" field, and a "user attributes" field. These fields are associated with one another.

The "user ID" field stores a user ID that identifies a user (an example of "user identification information").

The "user name" field stores information (for example, text) relating to the user's name.

The "user image" field stores image data of an image including the user's face.

The "user attributes" field includes a "gender" field, an "age" field, and an "occupation" field.

The "gender" field stores information relating to the user's gender.

The "age" field stores information relating to the user's age.

The "occupation" field stores information relating to the user's occupation.

(1-3-2) Makeup Pattern Information Master Table
The makeup pattern information master table of the first embodiment will be described. Fig. 5 shows the data structure of the makeup pattern information master table of the first embodiment.

The makeup pattern information master table of Fig. 5 stores makeup pattern information relating to makeup patterns.
The makeup pattern information master table includes a "pattern ID" field, a "pattern name" field, and an "item" field.

The "pattern ID" field stores a makeup pattern ID that identifies a makeup pattern (an example of "makeup pattern identification information").

The "pattern name" field stores information (for example, text) relating to the pattern name of the makeup pattern.

The "item" field stores item information relating to the makeup items used in the makeup simulation. The "item" field includes a "category" field, an "item ID" field, and a "parameter" field, which are associated with one another.

The "category" field stores information relating to the category of the makeup item (for example, pressed powder, lipstick, and blush).

The "item ID" field stores an item ID that identifies a makeup item (an example of "makeup item identification information").

The "parameter" field stores makeup parameters relating to the effect of using the makeup item. A makeup parameter is, for example, an image-processing parameter applied to an original image (as an example, a face image of user U), such as a color-conversion parameter for converting the color information of the pixels of the original image.
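The text leaves the concrete form of a color-conversion parameter open. As a purely illustrative sketch (the function name, the blend-toward-target parameterization, and the sample values are assumptions, not taken from the patent), such a parameter could be a target color plus a blend strength applied per pixel:

```python
def apply_color_conversion(pixels, target_rgb, strength):
    """Blend each RGB pixel toward target_rgb; strength in [0, 1].

    target_rgb and strength stand in for the makeup parameters stored
    in the "parameter" field; the real parameterization is not
    specified in this document.
    """
    out = []
    for r, g, b in pixels:
        out.append(tuple(
            round(c + (t - c) * strength)
            for c, t in zip((r, g, b), target_rgb)
        ))
    return out

# Example: blend one skin-tone pixel halfway toward a lipstick tone.
converted = apply_color_conversion([(200, 160, 140)], (180, 40, 60), 0.5)
```

Applying one such parameter per item category (pressed powder, lipstick, blush) to the relevant face regions would yield the makeup simulation image described below.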

(1-3-3) Simulation Log Information Database
The simulation log information database of the first embodiment will be described. Fig. 6 shows the data structure of the simulation log information database of the first embodiment.

The simulation log information database of Fig. 6 stores, in time series, information relating to the results of makeup simulations.
The simulation log information database includes a "simulation ID" field, a "simulation execution date" field, a "simulation conditions" field, an "eye-tracking signal" field, and a "gaze distribution" field.
The simulation log information database is associated with a user ID.

The "simulation ID" field stores a simulation ID that identifies a makeup simulation (an example of "simulation identification information").

The "simulation execution date" field stores information relating to the date on which the makeup simulation was executed.

The "simulation conditions" field stores information relating to the conditions of the makeup simulation. It includes an "original image" field and a "pattern" field.

The "original image" field stores the image data of the original image that is the subject of the makeup simulation (for example, a face image of user U).

The "pattern" field stores the makeup pattern ID of the makeup pattern applied to the image data in the "original image" field.

The "eye-tracking signal" field stores the eye-tracking signal generated by the eye tracker 20. It includes a "measured coordinates" field and a "measurement time" field, which are associated with each other.

The "measured coordinates" field stores measured coordinates indicating the position of the user's gaze in the image space.

The "measurement time" field stores measurement time information relating to the timing at which the eye tracker 20 generated the eye-tracking signal, for example the time elapsed since the eye tracker 20 started generating the signal.
By combining the coordinates in the "measured coordinates" field with the information in the "measurement time" field, the movement of user U's gaze is specified.

The "gaze distribution" field stores the gaze distribution (an example of a "gaze parameter") derived from the information in the "eye-tracking signal" field. The gaze distribution indicates how the user's gaze was distributed within the image space of the makeup simulation image. The "gaze distribution" field includes an "R distribution" field and an "L distribution" field.

The "R distribution" field stores the user's gaze distribution rate for the first region, located on the viewer's right of the makeup simulation image in its image space.

The "L distribution" field stores the user's gaze distribution rate for the second region, located on the viewer's left of the makeup simulation image in its image space.

(1-4) Information Processing
The information processing of the first embodiment will be described. Fig. 7 is a flowchart of the information processing of the first embodiment. Figs. 8 to 10 show examples of screens displayed during the information processing of Fig. 7. Fig. 11 is a detailed flowchart of the gaze parameter calculation of Fig. 7. Fig. 12 is an explanatory diagram of the gaze parameter calculation of Fig. 7.

As shown in Fig. 7, the client device 10 determines the original image (S100).
Specifically, the processor 12 displays screen P100 (Fig. 8) on the display 15.

Screen P100 includes an image object IMG100.
The image object IMG100 includes a face image of user U, and is, for example, any of the following:
· an image stored in advance in the memory device 11
· an image reproduced from the image data in the "user image" field of the user information database (Fig. 4)
· an image captured by the camera 16 connected to the input/output interface 13

After step S100, the client device 10 generates a makeup simulation image (S101).
Specifically, the processor 12 selects an arbitrary record of the makeup pattern information master table (Fig. 5).
The processor 12 applies the information in the "parameter" field of the selected record to the original image determined in step S100, thereby generating a makeup simulation image.

After step S101, the client device 10 presents the makeup simulation image (S102).
Specifically, the processor 12 displays screen P101 (Fig. 8) on the display 15.

Screen P101 includes an image object IMG101.
The image object IMG101 is the makeup simulation image generated in step S101.

After step S102, the client device 10 calculates the gaze parameters (S103).

As shown in Fig. 11, the client device 10 acquires the eye-tracking signal (S1030).
Specifically, the processor 12 acquires the eye-tracking signal from the eye tracker 20.
Based on the eye-tracking signal, the processor 12 associates the measured coordinates with the measurement times and stores them in the memory device 11.

After step S1030, the client device 10 divides the image space (S1031).
Specifically, as shown in Fig. 12, the processor 12 calculates the coordinates of a reference line IRL of the image space (for example, the line connecting the midpoint of the line joining the two eyes with the center of the nose).
Using the reference line IRL as a boundary, the processor 12 divides the image space of the simulation image SIMG into a first region A1 and a second region A2.
The first region A1 is the region on the viewer's right of the simulation image SIMG (that is, the region containing the right eye RE).
The second region A2 is the region on the viewer's left of the simulation image SIMG (that is, the region containing the left eye LE).

After step S1031, the client device 10 specifies the region coordinates (S1032).
Specifically, among the measured coordinates stored in the memory device 11 in step S1030, the processor 12 specifies those contained in the first region A1 as first region coordinates C1, and those contained in the second region A2 as second region coordinates C2.

After step S1032, the client device 10 calculates the gaze distribution rates (S1033).
Specifically, using Equation 1, the processor 12 calculates the proportion of user U's gaze directed at the first region A1 (hereinafter, the "first gaze distribution rate") E1 and the proportion directed at the second region A2 (hereinafter, the "second gaze distribution rate") E2.
[Equation 1]
E1 = N(C1) / (N(C1) + N(C2)), E2 = N(C2) / (N(C1) + N(C2)) …(1)
· E1 … first gaze distribution rate
· E2 … second gaze distribution rate
· N(C1) … total number of first region coordinates C1
· N(C2) … total number of second region coordinates C2

SS1 of Fig. 7 (steps S101 to S103) corresponds to the makeup simulation processing of the first embodiment. That is, the makeup simulation is executed a plurality of times, once for each of a plurality of makeup patterns.

For example, in the second iteration of step S101, the processor 12 applies, to the original image, a makeup pattern different from the one used in the first iteration, thereby generating a makeup simulation image different from the one obtained in the first iteration.

In the second iteration of step S102, the processor 12 displays screen P102 (Fig. 9) on the display 15.

Screen P102 includes an image object IMG102.
The image object IMG102 is the makeup simulation image generated in the second iteration of step S101.

That is, the processor 12 presents the plurality of makeup simulation images individually and calculates the gaze distribution for each of them. A gaze distribution is thereby obtained for each makeup simulation image.

After step S103, the client device 10 presents the gaze parameters (S104).
Specifically, the processor 12 displays screen P103 (Fig. 10) on the display 15.

Screen P103 includes display objects A103a and A103b, and the image objects IMG101 and IMG102.
Display object A103a shows information relating to the makeup pattern used to generate the image object IMG101 (for example, its pattern name), together with the first gaze distribution rate E1 and the second gaze distribution rate E2 for IMG101.
Display object A103b shows the corresponding information and rates for the image object IMG102.

After step S104, the client device 10 determines the recommendation information (S105).

In the first example of step S105, based on the gaze distribution rates calculated in step S103 for the plurality of makeup simulation images (that is, the first gaze distribution rates E1 and second gaze distribution rates E2 shown in display objects A103a and A103b of screen P103), the processor 12 selects the makeup simulation image whose first gaze distribution rate E1 is largest.

In the second example of step S105, based on the same gaze distribution rates calculated in step S103, the processor 12 selects at least one makeup simulation image whose first gaze distribution rate E1 is equal to or greater than a specific value.
When the first gaze distribution rate E1 of every makeup simulation image falls short of the specific value, no recommendation information is presented.

The document "Brady, N., Campbell, M., & Flaherty, M. (2005). Perceptual asymmetries are preserved in memory for highly familiar faces of self and friend. Brain & Cognition, 58, 334-342." reports that a sense of familiarity is expressed in the right half of the face. The face a person sees of himself or herself every day is the face reflected in a mirror; therefore, when observing his or her own mirror-reflected face, a person tends to feel that the half on the viewer's right looks more like himself or herself.
Further, the document "Guo, K., Tunnicliffe, D., & Roebuck, H. (2010). Human spontaneous gaze patterns in viewing of faces of different species. Perception, 39, 533-542." reports that when humans or other primates look at a person or an animal, they tend to observe the half of the face on the viewer's left.

The makeup simulation image of the first example whose first line-of-sight distribution rate E1 is largest means the makeup simulation image, among the plurality of makeup simulation images, whose right-eye side attracted the most gaze. In other words, the makeup simulation image with the largest first line-of-sight distribution rate E1 can also be called the image the user U evaluates most highly at a latent level (for example, the image of greatest interest, the image that feels most like the user's own self, the image that feels most relatable, or the image that leaves the strongest impression).

The at least one makeup simulation image of the second example whose first line-of-sight distribution rate E1 is equal to or greater than the specific value means a makeup simulation image, among the plurality of makeup simulation images, whose right-eye side attracted at least a specific degree of gaze. In other words, a makeup simulation image whose first line-of-sight distribution rate E1 is equal to or greater than the specific value can also be called an image whose latent evaluation by the user U exceeds a specific reference (for example, an image of interest to at least a specific degree, an image that feels like the user's own self to at least a specific degree, an image that feels relatable to at least a specific degree, or an image that leaves an impression to at least a specific degree).

In the example of the screen P103, since the first line-of-sight distribution rate E1 of the image object IMG101 is the largest among the image objects IMG101 and IMG102, the makeup simulation image corresponding to the image object IMG101 (that is, the makeup simulation image to which the makeup pattern with the pattern name "MC1" is applied) is determined as the recommendation information.

After step S105, the client device 10 presents the recommendation information (S106).
Specifically, the processor 12 displays the screen P104 (FIG. 10) on the display 15.

The screen P104 includes the image object IMG101, display objects A104a to A104b, and operation objects B104a to B104b.
The display object A104a shows the pattern name of the makeup pattern used to generate the makeup simulation image.
The display object A104b shows item information (for example, the item's category and item ID) on the item corresponding to the makeup pattern used to generate the makeup simulation image.
The operation object B104a is an object that receives a user instruction to update the database on the server 30 with the recommendation information (for example, the contents of the image object IMG101 and the display objects A104a to A104b).
The operation object B104b is an object that receives a user instruction to access a website that sells the item shown in the display object A104b. The URL (Uniform Resource Locator) of that website is assigned to the operation object B104b.

After step S106, the client device 10 executes an update request (S107).
Specifically, the processor 12 transmits update request data to the server 30.
The update request data contains the following information.
· The user ID of the user U
· The image data determined in step S100
· The makeup pattern ID referenced in step S101
· Information on the execution day of step S103
· The measurement coordinates and measurement times stored in the storage device 11 in step S1030
· The first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 calculated in step S1033

After step S107, the server 30 updates the database (S300).
Specifically, the processor 32 appends a new record to the simulation log information database (FIG. 6) associated with the user ID contained in the update request data.
The fields of the new record store the following information.
· The "Simulation ID" field stores a new simulation ID.
· The "Simulation execution day" field stores the information on the execution day contained in the update request data.
· The "Original image" field stores the image data contained in the update request data.
· The "Pattern ID" field stores the makeup pattern ID contained in the update request data.
· The "Measurement coordinates" field stores the measurement coordinates contained in the update request data.
· The "Measurement time" field stores the measurement times contained in the update request data.
· The "R distribution" field stores the first line-of-sight distribution rate E1 contained in the update request data.
· The "L distribution" field stores the second line-of-sight distribution rate E2 contained in the update request data.

According to the first embodiment, recommendation information matching the preferences of the user U (for example, the makeup simulation image, among the plurality of makeup simulation images, that the user U evaluates most highly at a latent level) is presented based on the bias of the user U's line of sight over the presented facial images (for example, makeup simulation images). This makes it possible to present to the user U information on which the user U's latent evaluation of the observed facial images is high.

(2) Second Embodiment
The second embodiment of the present invention will be described. The second embodiment is an example in which recommendation information is presented based on the bias of the line of sight over the mirror image of the user's own face.

(2-1) Configuration of the Client Device
The configuration of the client device 10 of the second embodiment will be described. FIG. 13 is an external view of the client device of the second embodiment.

As shown in FIG. 13, the client device 10 further includes a half mirror 17. The half mirror 17 is disposed on the display 15. Since the half mirror 17 reflects external light and transmits the light emitted from the display 15, the user U can observe the mirror image MI and the simulation image SIMG at the same time.

(2-2) Information Processing
The information processing of the second embodiment will be described. FIG. 14 is a flowchart of the information processing of the second embodiment. FIG. 15 is a detailed flowchart of the calculation of the line-of-sight parameters of FIG. 14. FIG. 16 is an explanatory diagram of the calculation of the line-of-sight parameters of FIG. 14. FIG. 17 is a diagram showing examples of screens displayed in the information processing of FIG. 14.

As shown in FIG. 14, the client device 10 executes the calculation of the line-of-sight parameters (S110).

As shown in FIG. 15, the client device 10 captures a user image (S1100) after step S1030 (FIG. 11).
Specifically, the camera 16 captures an image of the face of the user U who is observing the mirror image MI appearing in the half mirror 17.
The processor 12 generates image data of the image obtained by the camera 16.

After step S1100, the client device 10 calculates the mirror-image coordinates (S1101).
Specifically, the processor 12 analyzes the feature amounts of the image data generated in step S1100 to calculate the coordinates and sizes, in the image space, of the parts of the face of the user U (for example, the outline, eyes, nose, and mouth).
The processor 12 calculates the distance between the half mirror 17 and the user U based on the coordinates and sizes of the facial parts and the position of the camera 16 in the client device 10.
Based on the calculated distance and the coordinates in the image space, the processor 12 calculates the coordinates (hereinafter "mirror-image coordinates") of the facial parts in the coordinate space of the mirror image MI (hereinafter the "mirror-image space").

After step S1101, the client device 10 divides the mirror-image space (S1102).
Specifically, as shown in FIG. 16, the processor 12 calculates, based on the mirror-image coordinates of the parts calculated in step S1101, the coordinates of a reference line MRL of the mirror-image space (for example, the line connecting the midpoint of the line joining both eyes with the center of the nose).
Using the reference line MRL as a boundary, the processor 12 divides the mirror-image space into a first area A1 and a second area A2.
The first area A1 is the area of the mirror image MI located on the user U's right (that is, the area containing the right eye RE).
The second area A2 is the area of the mirror image MI located on the user U's left (that is, the area containing the left eye LE).

After step S1102, the client device 10 executes steps S1032 to S1033 (FIG. 11).

After step S110, the client device 10 determines the recommendation information (S111).
Specifically, the processor 12 applies the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 calculated in step S110 to Equation 2, thereby calculating a score S related to the user U's evaluation of the mirror image MI.
[Equation 2]
S = a·E1 + b·E2 … (Equation 2)
· S: score
· a: coefficient of the first line-of-sight distribution rate
· b: coefficient of the second line-of-sight distribution rate

After step S111, the recommendation information is presented (S112).
Specifically, the processor 12 displays the screen P110 (FIG. 17A) on the display 15.
The screen P110 includes a display object A110.
The display object A110 shows the score S calculated in step S111.

When the mirror image MI changes (for example, when the effect of the makeup on the face of the user U changes), the processing results of steps S110 to S112 also change.
As a result, the processor 12 displays the screen P111 (FIG. 17B) on the display 15.
The screen P111 includes a display object A111.
The display object A111 shows the score S calculated in step S111. This score S differs from the score S shown in the display object A110.

According to the second embodiment, recommendation information matching the preferences of the user U (for example, the score S related to the user U's evaluation of the mirror image MI) is presented based on the bias of the user U's line of sight over the mirror image MI of the face. This makes it possible to present to the user the user's latent evaluation of the observed mirror image MI of the face.

For example, when the user U applies makeup using a plurality of makeup items while observing the mirror image MI appearing in the half mirror 17, the score S indicates the user U's latent evaluation of the makeup applied with each makeup item. That is, the score S indicates the user U's reaction to each makeup item.

For example, when the user U applies makeup while observing the mirror image MI appearing in the half mirror 17, the score S indicates the user's latent evaluation of the mirror image MI (for example, the face during makeup) over the period from when the user U starts the makeup until it is finished. That is, the score S indicates the transition of the user U's reaction to the makeup effect over that period.

FIG. 13 shows an example in which the client device 10 includes the half mirror 17, but the second embodiment is not limited to this. The half mirror 17 may be a mirror that reflects light completely, with the display 15 disposed outside the client device 10. In this case, the user U observes the mirror image of his or her own face appearing in the mirror. The screens P110 to P111 are displayed on the display disposed outside the client device 10.

(3) Third Embodiment
The third embodiment will be described. The third embodiment is an example of presenting to the user the user's latent evaluation of avatar images that act as the user's avatar in a computer space (hereinafter "avatar images").

The information processing of the third embodiment will be described. FIG. 18 is a flowchart of the information processing of the third embodiment. FIGS. 19 to 20 are diagrams showing examples of screens displayed in the information processing of FIG. 18.
The information processing of FIG. 18 is executed, for example, before a computer game is started or during play when the third embodiment is applied to a computer game.

As shown in FIG. 18, the client device 10 presents an avatar image (S120).
Specifically, the processor 12 displays the screen P120 (FIG. 19) on the display 15.

The screen P120 includes an image object IMG120 and a message object M120.
The image object IMG120 is an avatar image.
The message object M120 is a message for guiding the line of sight of the user U to the image object IMG120.

After step S120, the client device 10 executes step S103 (FIG. 7).

SS3 of FIG. 18 (steps S120 and S103) is executed a plurality of times, once for each of the plurality of avatar images.

For example, in the second iteration of step S120, the processor 12 displays the screen P121 (FIG. 19) on the display 15.

The screen P121 includes an image object IMG121 and the message object M120.
The image object IMG121 is an avatar image different from the avatar image presented in the first iteration of step S120.

In this way, the line-of-sight distributions over the plurality of avatar images are obtained.

After step S103, the client device 10 determines the recommendation information (S121).
Specifically, the processor 12 applies the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 of each avatar image to Equation 3, thereby calculating a self-identification index Si (an example of recommendation information) for each avatar image. The self-identification index Si is an index related to the degree to which the user U feels that an avatar image is "an avatar of myself."
[Equation 3]
Si = α·E1 + β·E2 … (Equation 3)
· Si: self-identification index (score)
· α: coefficient of the first line-of-sight distribution rate
· β: coefficient of the second line-of-sight distribution rate

After step S121, the recommendation information is presented (S122).
Specifically, the processor 12 displays the screen P122 (FIG. 20) on the display 15.

The screen P122 includes display objects A122a to A122b, image objects IMG120 to IMG121, and operation objects B122a to B122b.
The display object A122a shows the avatar name of the avatar image corresponding to the image object IMG120, the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 for the image object IMG120, and the self-identification index Si of the image object IMG120.
The display object A122b shows the avatar name of the avatar image corresponding to the image object IMG121, the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 for the image object IMG121, and the self-identification index Si of the image object IMG121.

After step S122, the client device 10 executes an update request (S123).
Specifically, the processor 12 displays the screen P123 on the display 15.

The screen P123 includes a display object A123, operation objects B123a to B123b, and an image object IMG123.
The display object A123 shows information on the avatar image with the highest self-identification index Si calculated in step S121 (the avatar name, the first line-of-sight distribution rate E1, the second line-of-sight distribution rate E2, and the self-identification index Si).
The image object IMG123 is the avatar image with the highest self-identification index Si.
The operation object B123a is an object that receives a user instruction to confirm use of the avatar image serving as the image object IMG123.
The operation object B123b is an object that receives a user instruction to reject use of the avatar image serving as the image object IMG123.

When the user operates the operation object B123a, the processor 12 transmits update request data to the server 30.
The update request data contains the following information.
· The user ID
· The image data of the avatar image assigned to the image object IMG123

After step S123, the server 30 updates the database (S300).
Specifically, the processor 32 refers to the user information database (FIG. 4) and identifies the record associated with the user ID contained in the update request data.
The processor 32 stores the image data contained in the update request data in the "User image" field of the identified record.
In this way, the user U can use the avatar image with the highest self-identification index Si.

According to the third embodiment, the user's latent evaluation of avatar images can be presented to the user. This allows the user to easily select the avatar he or she evaluates most highly at a latent level (for example, the avatar of greatest interest, the avatar that feels most like the user's own self (that is, that can be regarded as the self), the avatar that feels most relatable, or the avatar that leaves the strongest impression).

(4) Fourth Embodiment
The fourth embodiment will be described. The fourth embodiment is an example of presenting to the user the user's latent evaluation of an image based on the movement pattern of the line of sight.

(4-1) Outline of the Fourth Embodiment
The outline of the fourth embodiment will be described. FIG. 23 is an explanatory diagram of the outline of the fourth embodiment.

As shown in FIG. 23, in the fourth embodiment, as in the first embodiment (FIG. 3), a simulation image SIMG containing the face of a person (for example, the user U) is presented.
Next, an eye-tracking signal related to the movement of the line of sight of the user U while observing the simulation image SIMG is generated.
Next, based on the eye-tracking signal, a line-of-sight pattern related to the pattern of movement of the user U's line of sight in the coordinate space of the pixels constituting the simulation image SIMG (hereinafter the "image space") is calculated.
Next, based on the line-of-sight pattern, recommendation information related to the user U's latent evaluation of the simulation image SIMG is provided.

In this way, in the fourth embodiment, recommendation information reflecting the user U's latent evaluation of the simulation image SIMG is presented based on the user U's line-of-sight pattern in the image space of the simulation image SIMG.

(4-2) Information Processing
The information processing of the fourth embodiment will be described. FIG. 24 is a flowchart showing the calculation processing of the line-of-sight parameters of the fourth embodiment. FIGS. 25 and 26 are explanatory diagrams of the calculation of the line-of-sight parameters of FIG. 24.

The information processing of the fourth embodiment is executed in the same way as in the first embodiment (FIG. 7).

As shown in FIG. 24, the client device 10 calculates the line-of-sight parameters after step S1030 (S1034).
Specifically, the processor 12 calculates at least one of the following line-of-sight parameters instead of the line-of-sight distribution.
· The dwell time for which the line of sight stays in an area of a specific size (hereinafter the "dwell area")
· The number of times the line of sight stays in the dwell area
· The movement range of the line of sight
· The movement order of the line of sight
· The movement area of the line of sight

As a first example, the processor 12 calculates the dwell time Ts using Equation 4.
[Equation 4]
Ts = n·t … (Equation 4)
· Ts: dwell time
· n: number of measurement coordinates contained in the dwell area
· t: interval between measurement times
As shown in FIG. 25A, the line of sight EM of the user U over the image FIMG of the face stays, for example, in dwell area 1 for 5 seconds and then in dwell area 2 for 10 seconds. In this case, the dwell time Ts1 of dwell area 1 is 5 seconds, the dwell time Ts2 of dwell area 2 is 10 seconds, and the total dwell time ΣTs is 15 seconds.
The processor 12 associates the measurement times with the dwell times and stores them in the storage device 11.

As a second example, the processor 12 calculates the stay count Ns using Equation 5.
[Equation 5]
Ns = ns … (Equation 5)
· Ns: stay count
· ns: number of times the line of sight is contained in the dwell area within a specific time
In the example of FIG. 25A, the stay count is 2.
The processor 12 associates the measurement times with the stay counts and stores them in the storage device 11.

As a third example, the processor 12 calculates, as the movement range, the rectangular area defined by the X and Y coordinates at the endpoints of the measurement coordinates (hereinafter the "endpoint coordinates").
In the example of FIG. 25B, the movement range is the rectangular area Z defined by the endpoint coordinates {(X1, Y1), (X1, Y2), (X2, Y1), (X2, Y2)}.
The processor 12 stores the endpoint coordinates defining the rectangular area Z in the storage device 11.

As a fourth example, the processor 12 calculates the movement order by the following method.
The processor 12 divides the pixel region of the image FIMG into a pixel region for each facial part (for example, a left-eye pixel region, a right-eye pixel region, a nose pixel region, and a mouth pixel region).
The processor 12 calculates the order in which the line of sight EM passes through the pixel regions (hereinafter the "movement order").
In the example of FIG. 25C, the movement order is {right-eye pixel region, nose pixel region, mouth pixel region, left-eye pixel region}.
The processor 12 stores the movement order in the storage device 11.

As a fifth example, the processor 12 calculates the movement area based on either of the following.
· The number of measurement coordinates in the XY space (where duplicate measurement coordinates are counted as one coordinate) (FIG. 26A)
· When the pixel region of the image FIMG is divided into sections SEC of a specific area, the total number of sections SEC containing a measurement coordinate (FIG. 26B)
The processor 12 stores the movement area in the storage device 11.

A line-of-sight pattern evaluation model is stored in the storage device 11. The line-of-sight pattern evaluation model is a model that takes a line-of-sight pattern as input and outputs an evaluation of the face.

After step S103, the processor 12 of the client device 10 refers, in step S105, to the line-of-sight pattern evaluation model stored in the storage device 11 and calculates the evaluation corresponding to the line-of-sight pattern calculated in step S103.

According to the fourth embodiment, the user U's latent evaluation of the image FIMG can be presented even when line-of-sight patterns are used.

In the first example, when the user U applies makeup using an arbitrary makeup item while observing the image FIMG, the dwell time is calculated while the makeup is applied.
As one case, when the dwell time is long at an early measurement time (for example, during the period from the start of measurement until a specific fraction of the entire measurement time has elapsed (as one example, one tenth of the entire measurement time)), this means that the user U felt something wrong with the makeup effect immediately after starting the makeup. In this case, the processor 12 determines that the user U's evaluation of the makeup item is low and presents recommendation information based on that evaluation (for example, a message to the effect that the user U feels something wrong with the makeup effect).
As another case, when the dwell time is long at a late measurement time (for example, during the period going back a specific fraction of the entire measurement time from the end of measurement (as one example, one tenth of the entire measurement time)), this means that the user U did not feel anything wrong during the makeup. In this case, the processor 12 determines that the user U's evaluation of the makeup item is high and presents recommendation information based on that evaluation (for example, a message to the effect that the user U is satisfied with the makeup effect).

In the second example, when the user U applies makeup using an arbitrary makeup item while observing the image FIMG, the stay count is calculated while the makeup is applied.
As one case, when the stay count is high at an early measurement time, this means that the user U felt something wrong with the makeup effect immediately after starting the makeup. In this case, the processor 12 determines that the user U's evaluation of the makeup item is low and presents recommendation information based on that evaluation (for example, a message to the effect that the user U feels something wrong with the makeup effect).
As another case, when the stay count is high at a late measurement time, this means that the user U felt something wrong with the makeup effect before the makeup was finished but did not feel anything wrong after it was finished. In this case, the processor 12 determines that the user U's evaluation of the makeup item is high and presents recommendation information based on that evaluation (for example, a message to the effect that the user U is satisfied with the makeup effect).

In the third example, when the user U applies makeup using an arbitrary makeup item while observing the image FIMG, the movement range is calculated while the makeup is being applied.
It is known that makeup evaluation experts look at the entire face. It is also known that people look at the whole face when observing another person's face, whereas they tend to look at local parts when observing their own face. In other words, looking at the entire face means observing one's own face objectively. By calculating the movement range, the processor 12 presents the degree of agreement between the observer's own evaluation and the evaluation by others as advice information.

In the fourth example, for example, after the makeup is finished, the movement order of the line of sight EM when the user U observes the image FIMG is calculated in order to evaluate the makeup effect.
In general, the movement order indicates the order in which the user U pays attention to the made-up parts. For the user U, this order indicates the priority of makeup characteristics. The processor 12 identifies the characteristics with higher priority for the user U based on the movement order, and thereby presents makeup items suited to those higher-priority characteristics as advice information.

In the fifth example, when the user U applies makeup using an arbitrary makeup item while observing the image FIMG, the movement area is calculated while the makeup is being applied.
Like the movement range in the third example, the movement area indicates the degree of agreement between one's own evaluation and the evaluation by others. By calculating the movement area, the processor 12 presents the degree of agreement between one's own evaluation and the evaluation by others as advice information.

The fourth embodiment can also be applied to the second embodiment. When the fourth embodiment is applied to the second embodiment and line-of-sight parameters other than the line-of-sight distribution are used, the user U can likewise be presented with a latent evaluation of the mirror image MI.

(5) Modifications
Modifications of this embodiment will be described.

(5-1) Modification 1
Modification 1 will be described. Modification 1 is an example of guiding the line of sight of the user U in the presentation of the makeup simulation image (S102) of FIG. 7. FIG. 21 is a diagram showing an example of a screen of Modification 1.
Modification 1 can also be applied to any of the first to third embodiments.

In step S102 of Modification 1, the processor 12 displays a screen P101a (FIG. 21) on the display 15.

The screen P101a differs from the screen P101 (FIG. 8) in that an image object IMG101a is displayed. The image object IMG101a is an object for guiding the line of sight of the user U.
The image object IMG101a is displayed in the second area A2 of the image object IMG101.

It is generally known that the left visual field is associated with holistic processing and the right visual field with local processing. According to Modification 1, since the image object IMG101a for guiding the line of sight is displayed in the second area A2, holistic processing becomes dominant when the user U observes his or her own face, enabling a more objective self-evaluation. One purpose of makeup is to make an impression on others. Raising the objectivity of self-evaluation reduces the difference between self-evaluation and the evaluation by others. As a result, when self-evaluation is improved by makeup applied to one's own face, the evaluation of that makeup by others also improves.

Note that the difference between self-evaluation and the evaluation by others is smaller when the line of sight is guided than when it is not. Therefore, the effect of guiding the line of sight (improving objectivity and reducing the degree of deviation between the evaluation by others and self-evaluation) can be displayed numerically. For example, the degree of agreement between self-evaluation and the evaluation by others can be calculated using the movement range or the movement area, and the calculation result can be used to display the effect of guiding the line of sight.
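The text does not specify how the movement range or movement area is turned into an agreement figure. As one crude proxy (entirely an assumption of this sketch, not the patent's method): the larger the fraction of the face the gaze covers, the more the viewing resembles how others view a face, so coverage can stand in for agreement:

```python
def agreement_score(gaze_points, face_area):
    """Hypothetical agreement proxy: bounding-box area covered by the gaze,
    normalized by the face area. Returns a value in [0, 1]; higher means
    viewing behavior closer to whole-face (other-person-like) observation."""
    if not gaze_points or face_area <= 0:
        return 0.0
    xs = [x for x, y in gaze_points]
    ys = [y for x, y in gaze_points]
    covered = (max(xs) - min(xs)) * (max(ys) - min(ys))
    return min(1.0, covered / face_area)

# gaze covering a 40x30 box over a 100x100 face region -> 0.12
```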

(5-2) Modification 2
Modification 2 will be described. Modification 2 is an example of presenting the user's latent evaluation of a single image. FIG. 22 is a diagram showing an example of a screen displayed in the information processing of Modification 2.
Modification 2 can also be applied to any of the first to fourth embodiments.

The information processing of Modification 2 is the same as the information processing of the first embodiment (FIG. 7). However, step S104 (FIG. 7) is omitted.

After step S103, in step S105 the processor 12 of the client device 10 applies the line-of-sight distribution rates calculated in step S103 (that is, the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 in FIG. 12) to Equation 2, thereby calculating a score S related to the user U's evaluation of the simulation image SIMG.

After step S105, in step S106 the processor 12 displays a screen P130 (FIG. 22) on the display 15 instead of the screen P104 (FIG. 10).

The screen P130 includes display objects A103a, A104b, and A130, and an image object IMG101.
The score calculated in step S105 is displayed in the display object A130.

According to Modification 2, the score S for a single makeup simulation image is displayed on the display 15. In this way, the user U can be presented with a latent evaluation of a single image.
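Equation 2 itself is not reproduced in this excerpt. As a placeholder illustration only, assume the score S is the share of the gaze falling in the first (right-side) area, scaled to 0 to 100; the real Equation 2 may differ:

```python
def score(e1: float, e2: float) -> float:
    """Hypothetical stand-in for Equation 2: the first line-of-sight
    distribution rate E1 as a percentage of the total distribution.
    The actual Equation 2 is not given in this excerpt."""
    total = e1 + e2
    if total == 0:
        return 0.0
    return 100.0 * e1 / total

# e.g. E1 = 0.6, E2 = 0.4 -> S = 60.0, shown in display object A130
```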

(5-3) Modification 3
Modification 3 will be described. Modification 3 is an example in which this embodiment is applied to face images other than makeup simulation images and avatars.

As a first example, this embodiment can also be applied to a face image to which an arbitrary hairstyle has been applied.
In this case, based on the line-of-sight bias with respect to the face image presented to the user U (for example, a face image to which an arbitrary hairstyle has been applied), advice information that matches the user U's preference is presented (for example, the hairstyle simulation image, among a plurality of hairstyle simulation images, for which the user U's latent evaluation is high). In this way, the user U can be presented with information for which his or her latent evaluation of the observed face image is high.

As a second example, this embodiment can also be applied to a face image after an arbitrary operation has been performed.
In this case, based on the line-of-sight bias with respect to the face image presented to the user U (for example, a face image after an arbitrary operation has been performed), advice information that matches the user U's preference is presented (for example, the surgery simulation image, among a plurality of surgery simulation images, for which the user U's latent evaluation is high). In this way, the user U can be presented with information for which his or her latent evaluation of the observed face image is high.

(6) Summary of This Embodiment
This embodiment is summarized below.

A first aspect of this embodiment is an information processing device (for example, the client device 10) comprising:
a means for presenting an image including a face to a user (for example, the processor 12 that executes the processing of step S102);
a means for acquiring line-of-sight information related to the movement of the user's line of sight (for example, the processor 12 that executes the processing of step S1030);
a means for calculating the user's line-of-sight bias in the image based on the line-of-sight information (for example, the processor 12 that executes the processing of step S1033);
a means for determining advice information related to the image based on the line-of-sight bias (for example, the processor 12 that executes the processing of step S105); and
a means for presenting the advice information (for example, the processor 12 that executes the processing of step S106).

According to the first aspect, the advice information to be presented to the user U is determined based on the user U's line-of-sight bias with respect to the face image. In this way, the user U can be presented with his or her latent evaluation of the observed face.

A second aspect of this embodiment is the information processing device, wherein
the calculating means divides the image space of the image into a first area and a second area along a specific boundary line, and calculates:
a first line-of-sight distribution in the first area; and
a second line-of-sight distribution in the second area.

According to the second aspect, the advice information to be presented to the user U is determined based on the line-of-sight distribution in each of the two areas (the first area and the second area) constituting the image space.
In this way, advice information corresponding to the bias of the line-of-sight distribution can be presented.

A third aspect of this embodiment is the information processing device, wherein
the presenting means individually presents a plurality of mutually different images;
the calculating means calculates the first line-of-sight distribution and the second line-of-sight distribution for each image; and
the determining means determines at least one of the plurality of images as the advice information.

According to the third aspect, the advice information is determined for each of the plurality of images based on the line-of-sight distribution in each of the two areas (the first area and the second area) constituting the image space.
In this way, at least one of the plurality of images can be presented as the advice information.

A fourth aspect of this embodiment is the information processing device, wherein
the first area is an area on the right side facing the image;
the second area is an area on the left side facing the image; and
the advice information is at least one image, among the plurality of images, for which the proportion of the first line-of-sight distribution is equal to or greater than a specific value.

According to the fourth aspect, an image for which the proportion of the line-of-sight distribution in the area on the right side facing the image is equal to or greater than a specific value is presented as the advice information. In this way, an image for which the user U's latent evaluation is high can easily be identified.

A fifth aspect of this embodiment is the information processing device, wherein
the image includes a plurality of makeup simulation images in which mutually different makeup has been applied to the user's face;
the first area is an area on the right side facing each makeup simulation image;
the second area is an area on the left side facing each makeup simulation image; and
the advice information is the makeup simulation image, among the plurality of makeup simulation images, for which the proportion of the first line-of-sight distribution is the largest.

According to the fifth aspect, among the plurality of makeup simulation images, a makeup simulation image for which the proportion of the line-of-sight distribution in the area on the right side facing each makeup simulation image is equal to or greater than a specific value is presented. In this way, a makeup simulation image for which the user's latent evaluation is high can easily be identified.
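Given the per-image first line-of-sight distribution rates, selecting the advice information under the fourth and fifth aspects is a simple threshold or argmax over those rates. In this sketch the function name, the image identifiers, and the 0.5 threshold are illustrative, not from the source:

```python
def recommend(images_to_e1, threshold=None):
    """images_to_e1: mapping from image id to its first line-of-sight
    distribution rate E1. With a threshold, return every image at or above
    it (fourth aspect); otherwise return the single image with the largest
    E1 (fifth aspect)."""
    if threshold is not None:
        return [img for img, e1 in images_to_e1.items() if e1 >= threshold]
    return max(images_to_e1, key=images_to_e1.get)

rates = {"SIMG1": 0.42, "SIMG2": 0.63, "SIMG3": 0.55}
# largest-E1 image: "SIMG2"; images with E1 >= 0.5: "SIMG2" and "SIMG3"
```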

A sixth aspect of this embodiment is the information processing device, wherein
the image is a makeup simulation image in which makeup has been applied to the user's face;
the first area is an area on the right side facing the makeup simulation image;
the second area is an area on the left side facing the makeup simulation image; and
the advice information is information corresponding to the proportion of the first line-of-sight distribution in the makeup simulation image.

According to the sixth aspect, advice information (for example, a score) corresponding to the proportion of the line-of-sight distribution in the area on the right side facing the makeup simulation image is presented. In this way, the user U's latent evaluation of the makeup simulation image can be known.

A seventh aspect of this embodiment is the information processing device, wherein
the image includes a plurality of mutually different avatar images;
the first area is an area on the right side facing each avatar image;
the second area is an area on the left side facing each avatar image; and
the advice information is the avatar image, among the plurality of avatar images, for which the proportion of the first line-of-sight distribution is the largest.

According to the seventh aspect, among the plurality of avatar images, an avatar image for which the proportion of the line-of-sight distribution in the area on the right side facing each avatar image is equal to or greater than a specific value is presented. In this way, an avatar image for which the user U's latent evaluation is high can easily be identified.

An eighth aspect of this embodiment is the information processing device, wherein
the image is an avatar image;
the first area is an area on the right side facing the avatar image;
the second area is an area on the left side facing the avatar image; and
the advice information is information corresponding to the proportion of the first line-of-sight distribution in the avatar image.

According to the eighth aspect, advice information (for example, a score) corresponding to the proportion of the line-of-sight distribution in the area on the right side facing the avatar image is presented. In this way, the user U's latent evaluation of the avatar image can be known.

A ninth aspect of this embodiment is the information processing device, wherein
the image includes at least one of a plurality of images in which mutually different hairstyles have been applied to the user's face and images after mutually different operations have been performed on the user's face.

According to the ninth aspect, advice information based on an image showing at least one of a face to which an arbitrary hairstyle has been applied and a face after surgery is presented. In this way, the user U's latent evaluation of that face can be known.

A tenth aspect of this embodiment is an information processing device (for example, the client device 10) comprising:
a means for acquiring line-of-sight information related to the movement of the user's line of sight with respect to a mirror image of the user's face (for example, the processor 12 that executes the processing of step S1030);
a means for calculating the user's line-of-sight bias in the mirror image based on the line-of-sight information (for example, the processor 12 that executes the processing of step S1033);
a means for determining advice information related to the mirror image based on the line-of-sight bias (for example, the processor 12 that executes the processing of step S111); and
a means for presenting the advice information (for example, the processor 12 that executes the processing of step S112).

According to the tenth aspect, the advice information is determined based on the user's line-of-sight bias with respect to the mirror image of the face. In this way, the user U can be presented with his or her latent evaluation of the user U's face reflected in the mirror.

An eleventh aspect of this embodiment is the information processing device, wherein
the calculating means divides the mirror image space of the mirror image into a first area and a second area along a specific boundary line, and calculates:
a first line-of-sight distribution in the first area; and
a second line-of-sight distribution in the second area.

According to the eleventh aspect, the advice information to be presented to the user is determined based on the line-of-sight distribution in each of the two areas (the first area and the second area) constituting the mirror image space.
In this way, advice information corresponding to the bias of the line-of-sight distribution can be presented.

A twelfth aspect of this embodiment is the information processing device, wherein
the first area includes the right eye;
the second area includes the left eye; and
the advice information is information corresponding to the proportion of the first line-of-sight distribution and related to the user's evaluation of the mirror image.

According to the twelfth aspect, advice information (for example, a score) corresponding to the proportion of the line-of-sight distribution in the area on the right side facing the mirror image is presented. In this way, the user U's latent evaluation of the mirror image (for example, the mirror image of his or her own face after makeup has been applied) can be known.

A thirteenth aspect of this embodiment is an information processing device (for example, the client device 10) comprising:
a means for presenting an image of the user's face (for example, the processor 12 that executes the processing of step S102);
a means for acquiring line-of-sight information related to the movement of the user's line of sight (for example, the processor 12 that executes the processing of step S1030);
a means for calculating, based on the line-of-sight information, a line-of-sight pattern related to the movement pattern of the user's line of sight in the image (for example, the processor 12 that executes the processing of step S1034);
a means for determining, based on the calculated line-of-sight pattern, advice information to be presented to the user; and
a means for presenting the determined advice information (for example, the processor 12 that executes the processing of step S106).

According to the thirteenth aspect, the advice information to be presented to the user is determined based on the user U's line-of-sight pattern with respect to the face image. In this way, the user U can be presented with his or her latent evaluation of the observed face.

A fourteenth aspect of this embodiment is the information processing device, wherein
the calculating means calculates, as the line-of-sight pattern, a stay time, which is the time during which the user's line of sight stays within a stay area of a specific range.

A fifteenth aspect of this embodiment is the information processing device, wherein
the calculating means calculates, as the line-of-sight pattern, a stay count, which is the number of times the user's line of sight stays within a stay area of a specific range.

A sixteenth aspect of this embodiment is the information processing device, wherein
the calculating means calculates an area of the distribution of the user's line of sight as the line-of-sight pattern.

A seventeenth aspect of this embodiment is the information processing device, wherein
the calculating means calculates an order in which the user's line of sight moves as the line-of-sight pattern.

An eighteenth aspect of this embodiment is the information processing device, wherein
the calculating means calculates an area over which the user's line of sight moves as the line-of-sight pattern.

A nineteenth aspect of this embodiment is a program for causing a computer (for example, the processor 12) to function as each of the means described in any of the above aspects.

(7) Other Modifications

The storage device 11 may be connected to the client device 10 via the network NW. The storage device 31 may be connected to the server 30 via the network NW.

Each step of the information processing described above may be executed by either the client device 10 or the server 30.

In the embodiments described above, a makeup simulation image was given as an example of the simulation image SIMG. However, this embodiment can also be applied when the simulation image SIMG is at least one of the following:
·a simulation image of the face after treatment (for example, plastic surgery); and
·a simulation image of the face including dyed hair.

Although the embodiments of the present invention have been described in detail above, the scope of the present invention is not limited to the embodiments described above. Various improvements or changes can be made to the embodiments described above without departing from the gist of the present invention. In addition, the embodiments and modifications described above may be combined.

1‧‧‧Information processing system
10‧‧‧Client device
11‧‧‧Storage device
12‧‧‧Processor
13‧‧‧Input/output interface
14‧‧‧Communication interface
15‧‧‧Display
16‧‧‧Camera
17‧‧‧Half mirror
20‧‧‧Eye tracker
30‧‧‧Server
31‧‧‧Storage device
32‧‧‧Processor
33‧‧‧Input/output interface
34‧‧‧Communication interface
A1‧‧‧First area
A2‧‧‧Second area
A103a‧‧‧Display object
A103b‧‧‧Display object
A104a‧‧‧Display object
A104b‧‧‧Display object
A110‧‧‧Display object
A111‧‧‧Display object
A122a‧‧‧Display object
A122b‧‧‧Display object
A123a‧‧‧Display object
A123b‧‧‧Display object
A130‧‧‧Display object
B104a‧‧‧Operation object
B104b‧‧‧Operation object
B123a‧‧‧Operation object
B123b‧‧‧Operation object
E1‧‧‧First line-of-sight distribution rate
E2‧‧‧Second line-of-sight distribution rate
EM‧‧‧Line of sight
FIMG‧‧‧Image
IMG‧‧‧Image object
IMG100‧‧‧Image object
IMG101‧‧‧Image object
IMG101a‧‧‧Image object
IMG102‧‧‧Image object
IMG120‧‧‧Image object
IMG121‧‧‧Image object
IMG123‧‧‧Image object
IRL‧‧‧Reference line
LE‧‧‧Left eye
M101‧‧‧Message object
M120‧‧‧Message object
MI‧‧‧Mirror image
MRL‧‧‧Reference line
NW‧‧‧Network
P100~P104‧‧‧Screens
P101a‧‧‧Screen
P110‧‧‧Screen
P111‧‧‧Screen
P120~P123‧‧‧Screens
P130‧‧‧Screen
RE‧‧‧Right eye
SEC‧‧‧Area
SIMG‧‧‧Simulation image
S100~S107‧‧‧Steps
S110~S112‧‧‧Steps
S120~S123‧‧‧Steps
S300‧‧‧Step
S1030~S1034‧‧‧Steps
S1100~S1102‧‧‧Steps
SS1‧‧‧Makeup simulation processing
SS3‧‧‧Step
U‧‧‧User
(X1, Y2)‧‧‧Endpoint coordinates
(X1, Y1)‧‧‧Endpoint coordinates
(X2, Y1)‧‧‧Endpoint coordinates
(X2, Y2)‧‧‧Endpoint coordinates
X‧‧‧Coordinate
Y‧‧‧Coordinate
Z‧‧‧Rectangular area

FIG. 1 is a block diagram showing the configuration of the information processing system of the first embodiment.
FIG. 2 is a block diagram showing the configuration of the information processing system of the first embodiment.
FIG. 3 is an explanatory diagram of the outline of the first embodiment.
FIG. 4 is a diagram showing the data structure of the user information database of the first embodiment.
FIG. 5 is a diagram showing the data structure of the makeup pattern information master table of the first embodiment.
FIG. 6 is a diagram showing the data structure of the simulation log information database of the first embodiment.
FIG. 7 is a flowchart of the information processing of the first embodiment.
FIG. 8 is a diagram showing an example of a screen displayed in the information processing of FIG. 7.
FIG. 9 is a diagram showing an example of a screen displayed in the information processing of FIG. 7.
FIG. 10 is a diagram showing an example of a screen displayed in the information processing of FIG. 7.
FIG. 11 is a detailed flowchart of the line-of-sight distribution calculation of FIG. 7.
FIG. 12 is an explanatory diagram of the line-of-sight distribution calculation of FIG. 7.
FIG. 13 is an external view of the client device of the second embodiment.
FIG. 14 is a flowchart of the information processing of the second embodiment.
FIG. 15 is a detailed flowchart of the line-of-sight distribution calculation of FIG. 14.
FIG. 16 is an explanatory diagram of the line-of-sight distribution calculation of FIG. 14.
FIGS. 17A and 17B are diagrams showing examples of screens displayed in the information processing of FIG. 14.
FIG. 18 is a flowchart of the information processing of the third embodiment.
FIG. 19 is a diagram showing an example of a screen displayed in the information processing of FIG. 18.
FIG. 20 is a diagram showing an example of a screen displayed in the information processing of FIG. 18.
FIG. 21 is a diagram showing an example of a screen of Modification 1.
FIG. 22 is a diagram showing an example of a screen displayed in the information processing of Modification 2.
FIG. 23 is an explanatory diagram of the outline of the fourth embodiment.
FIG. 24 is a flowchart of the calculation processing of the line-of-sight parameters of the fourth embodiment.
FIGS. 25A to 25C are explanatory diagrams of the line-of-sight parameter calculation of FIG. 24.
FIGS. 26A and 26B are explanatory diagrams of the line-of-sight parameter calculation of FIG. 24.

Claims (19)

1. An information processing device comprising:
a means for presenting an image including a face to a user;
a means for acquiring line-of-sight information related to the movement of the user's line of sight;
a means for calculating a line-of-sight bias of the user in the image based on the line-of-sight information;
a means for determining advice information related to the image based on the line-of-sight bias; and
a means for presenting the advice information.

2. The information processing device according to claim 1, wherein
the calculating means divides an image space of the image into a first area and a second area along a specific boundary line, and calculates:
a first line-of-sight distribution in the first area; and
a second line-of-sight distribution in the second area.

3. The information processing device according to claim 2, wherein
the presenting means individually presents a plurality of mutually different images;
the calculating means calculates the first line-of-sight distribution and the second line-of-sight distribution for each image; and
the determining means determines at least one of the plurality of images as the advice information.
4. The information processing device according to claim 3, wherein the first region is the region on the right side as viewed facing the image, the second region is the region on the left side as viewed facing the image, and the recommendation information is at least one image, among the plurality of images, in which the proportion of the first line-of-sight distribution is equal to or greater than a specific value.

5. The information processing device according to claim 3 or 4, wherein the images include a plurality of makeup simulation images in which mutually different makeup is applied to the user's face, the first region is the region on the right side as viewed facing each makeup simulation image, the second region is the region on the left side as viewed facing each makeup simulation image, and the recommendation information is the makeup simulation image, among the plurality of makeup simulation images, in which the proportion of the first line-of-sight distribution is greatest.

6. The information processing device according to claim 3 or 4, wherein the images include a plurality of makeup simulation images in which makeup is applied to the user's face, the first region is the region on the right side as viewed facing each makeup simulation image, the second region is the region on the left side as viewed facing each makeup simulation image, and the recommendation information is information corresponding to the proportion of the first line-of-sight distribution in each makeup simulation image.
7. The information processing device according to claim 3 or 4, wherein the presenting means individually presents a plurality of mutually different avatar images, the first region is the region on the right side as viewed facing each avatar image, the second region is the region on the left side as viewed facing each avatar image, and the recommendation information is the avatar image, among the plurality of avatar images, in which the proportion of the first line-of-sight distribution is greatest.

8. The information processing device according to claim 3 or 4, wherein the image is an avatar image, the first region is located on the right side as viewed facing the avatar image, the second region is located on the left side as viewed facing the avatar image, and the recommendation information is information corresponding to the proportion of the first line-of-sight distribution in the avatar image.

9. The information processing device according to any one of claims 3 to 8, wherein the images include at least one of a plurality of images in which mutually different hairstyles are applied to the user's face and images in which mutually different surgical procedures are applied to the user's face.
10. An information processing device comprising: means for acquiring line-of-sight information relating to movement of the user's line of sight over a mirror image of the user's face; means for calculating, based on the line-of-sight information, a line-of-sight bias of the user in the mirror image; means for determining, based on the line-of-sight bias, recommendation information relating to the mirror image; and means for presenting the recommendation information.

11. The information processing device according to claim 10, wherein the calculating means divides the mirror-image space of the mirror image into a first region and a second region along a specific boundary line, and calculates a first line-of-sight distribution in the first region and a second line-of-sight distribution in the second region.

12. The information processing device according to claim 11, wherein the first region is located on the right side as viewed facing the mirror image, the second region is located on the left side as viewed facing the mirror image, and the recommendation information is information corresponding to the proportion of the first line-of-sight distribution and relating to the user's evaluation of the mirror image.
13. An information processing device comprising: means for presenting an image of a user's face; means for acquiring line-of-sight information relating to movement of the user's line of sight; means for calculating, based on the line-of-sight information, a line-of-sight pattern relating to the pattern of movement of the user's line of sight in the image; means for determining, based on the calculated line-of-sight pattern, recommendation information to be presented to the user; and means for presenting the determined recommendation information.

14. The information processing device according to claim 13, wherein the calculating means calculates, as the line-of-sight pattern, a dwell time, that is, the time during which the user's line of sight remains within a dwell region of a specific range.

15. The information processing device according to claim 13 or 14, wherein the calculating means calculates, as the line-of-sight pattern, a dwell count, that is, the number of times the user's line of sight comes to rest within a dwell region of a specific range.

16. The information processing device according to any one of claims 13 to 15, wherein the calculating means calculates, as the line-of-sight pattern, the area of the distribution of the user's line of sight.

17. The information processing device according to any one of claims 13 to 16, wherein the calculating means calculates, as the line-of-sight pattern, the order in which the user's line of sight moves.
18. The information processing device according to any one of claims 13 to 17, wherein the calculating means calculates, as the line-of-sight pattern, the area over which the user's line of sight moves.

19. A program for causing a computer to function as each of the means of the information processing device according to any one of claims 1 to 18.
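The region-split analysis in claims 2 to 5 reduces to one quantity per image: the share of the user's gaze samples that fall in the first (right-hand) region relative to the boundary line. A minimal sketch of that computation follows; all names, the normalized coordinate space, and the 0.5 boundary are illustrative assumptions, not anything specified by the patent:

```python
# Sketch of the claims 2-5 idea: split each image at a vertical boundary,
# compute the fraction of gaze samples in the right-hand (first) region,
# and recommend the image where that fraction is greatest.
from typing import Dict, List, Tuple

Gaze = Tuple[float, float]  # (x, y) gaze coordinates, normalized to [0, 1]


def first_region_ratio(gaze_points: List[Gaze], boundary_x: float) -> float:
    """Fraction of gaze samples at or right of the boundary line."""
    if not gaze_points:
        return 0.0
    right = sum(1 for x, _ in gaze_points if x >= boundary_x)
    return right / len(gaze_points)


def recommend_image(gaze_by_image: Dict[str, List[Gaze]],
                    boundary_x: float = 0.5) -> str:
    """Image id whose first-region gaze share is greatest (claim 5)."""
    return max(gaze_by_image,
               key=lambda img: first_region_ratio(gaze_by_image[img], boundary_x))
```

For example, with two hypothetical makeup simulation images, `recommend_image({"makeup_A": [(0.7, 0.4), (0.8, 0.5), (0.2, 0.3)], "makeup_B": [(0.3, 0.4), (0.4, 0.5), (0.6, 0.3)]})` selects `"makeup_A"`, whose right-region share is 2/3 versus 1/3.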
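The line-of-sight patterns of claims 14 and 15 (dwell time and dwell count) can likewise be sketched from timestamped gaze samples and a rectangular dwell region. The sampling scheme, data layout, and region shape here are assumptions for illustration only:

```python
# Sketch of the claims 14-15 patterns: given (timestamp, x, y) gaze samples,
# accumulate time spent inside a rectangular dwell region (dwell time) and
# count distinct entries into it (dwell count).
from typing import List, Tuple

Sample = Tuple[float, float, float]          # (t seconds, x, y)
Rect = Tuple[float, float, float, float]     # (x0, y0, x1, y1)


def inside(x: float, y: float, r: Rect) -> bool:
    x0, y0, x1, y1 = r
    return x0 <= x <= x1 and y0 <= y <= y1


def dwell_stats(samples: List[Sample], region: Rect) -> Tuple[float, int]:
    """Return (dwell_time, dwell_count) for the given dwell region."""
    dwell_time, entries = 0.0, 0
    was_in = False
    for i, (t, x, y) in enumerate(samples):
        now_in = inside(x, y, region)
        if now_in and not was_in:
            entries += 1                      # a new entry into the region
        if now_in and i + 1 < len(samples):
            dwell_time += samples[i + 1][0] - t  # time until the next sample
        was_in = now_in
    return dwell_time, entries
```

With samples at 0.1 s intervals that enter the region twice, the function reports two entries and the summed in-region interval as the dwell time; either statistic could then feed the determination step of claim 13.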
TW108112650A 2018-04-27 2019-04-11 Information processing device and program TW201945898A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-086366 2018-04-27
JP2018086366A JP7253325B2 (en) 2018-04-27 2018-04-27 Information processing device, program, and information processing method

Publications (1)

Publication Number Publication Date
TW201945898A true TW201945898A (en) 2019-12-01

Family

ID=68294182

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108112650A TW201945898A (en) 2018-04-27 2019-04-11 Information processing device and program

Country Status (3)

Country Link
JP (1) JP7253325B2 (en)
TW (1) TW201945898A (en)
WO (1) WO2019208152A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220142334A1 (en) * 2019-02-22 2022-05-12 Shiseido Company, Ltd. Information processing apparatus and computer program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3399493A4 (en) * 2015-12-28 2018-12-05 Panasonic Intellectual Property Management Co., Ltd. Makeup simulation assistance device, makeup simulation assistance method, and makeup simulation assistance program
JPWO2018029963A1 (en) * 2016-08-08 2019-06-06 パナソニックIpマネジメント株式会社 MAKE-UP SUPPORT DEVICE AND MAKE-UP SUPPORT METHOD

Also Published As

Publication number Publication date
WO2019208152A1 (en) 2019-10-31
JP2019192072A (en) 2019-10-31
JP7253325B2 (en) 2023-04-06

Similar Documents

Publication Publication Date Title
US12118602B2 (en) Recommendation system, method and computer program product based on a user's physical features
US7006657B2 (en) Methods for enabling evaluation of typological characteristics of external body portion, and related devices
US10559102B2 (en) Makeup simulation assistance apparatus, makeup simulation assistance method, and non-transitory computer-readable recording medium storing makeup simulation assistance program
KR102505864B1 (en) Makeup support method of creating and applying makeup guide content for user's face image with realtime
Borgianni et al. Exploratory study on the perception of additively manufactured end-use products with specific questionnaires and eye-tracking
JPWO2019146405A1 (en) Information processing equipment, information processing systems, and programs for evaluating the reaction of monitors to products using facial expression analysis technology.
JP2017120595A (en) Method for evaluating state of application of cosmetics
TW201945898A (en) Information processing device and program
JP7406502B2 (en) Information processing device, program and information processing method
JP7487168B2 (en) Information processing device, program, information processing method, and information processing system
US20200210872A1 (en) System and method for determining cosmetic outcome evaluation
WO2020261531A1 (en) Information processing device, method for generating learned model of make-up simulation, method for realizing make-up simulation, and program
JP6583754B2 (en) Information processing device, mirror device, program
JP2019212325A (en) Information processing device, mirror device, and program
JP2020022681A (en) Makeup support system, and makeup support method
JP2020048871A (en) Information processing device and program
Yumurtaci A theoretical framework for the evaluation of virtual reality technologies prior to use: A biological evolutionary approach based on a modified media naturalness theory
US20240135424A1 (en) Information processing apparatus, information processing method, and program
WO2023228931A1 (en) Information processing system, information processing device, information processing method, and program
JP2023066156A (en) Hair coloring product selection apparatus and hair coloring product selection method
JP2024028059A (en) Method for supporting visualization of hairstyle, information processing apparatus, and program
JP2022033021A (en) Ornaments or daily necessaries worn on face or periphery of face, method for evaluating matching degree of makeup or hairstyle to face of user, system for evaluating the matching degree, recommendation system and design system of spectacles
KR20230154379A (en) System and method for supporting user decision-making based on scent of beauty products on metaverse commerce platform
KR20240012969A (en) Method for cosmetic surgery to face and diagnosis thereof
CA3218635A1 (en) Computer-based body part analysis methods and systems