WO2019208152A1 - Information processing device and program

Information processing device and program

Info

Publication number
WO2019208152A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
line
sight
user
information
Prior art date
Application number
PCT/JP2019/015012
Other languages
English (en)
Japanese (ja)
Inventor
純一郎 河原
智穂 薮崎
恵子 小林
史 今井
Original Assignee
株式会社 資生堂
Application filed by 株式会社 資生堂
Publication of WO2019208152A1

Classifications

    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45D HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D44/00 Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Definitions

  • The present invention relates to an information processing apparatus and a program.
  • Even when a user selects a makeup simulation image, the selected makeup simulation image may later feel unsuited to the user's preference, or a makeup simulation image that was not selected may in fact match the user's preference.
  • An object of the present invention is to present the user's potential evaluation of the face that the user observes.
  • One embodiment of the present invention provides an information processing apparatus comprising: means for presenting an image including a face to a user; means for acquiring line-of-sight information relating to the movement of the user's line of sight; means for calculating the user's line-of-sight distribution in the image based on the line-of-sight information; means for determining recommendation information about the image based on the line-of-sight distribution; and means for presenting the recommendation information.
  • FIG. 10 is a diagram illustrating an example of a screen displayed in information processing according to a second modification.
  • FIG. 23 is an explanatory diagram of the outline of the fourth embodiment.
  • FIG. 24 is a flowchart of the line-of-sight parameter calculation process of the fourth embodiment. FIGS. 25 and 26 are explanatory diagrams of the calculation of the line-of-sight pattern of FIG. 24.
  • FIGS. 1 and 2 are block diagrams showing the configuration of the information processing system according to the first embodiment.
  • The information processing system 1 includes a client device 10, an eye tracker 20, and a server 30.
  • The client device 10 and the server 30 are connected via a network NW (for example, the Internet or an intranet).
  • The eye tracker 20 is connected to the client device 10.
  • The client device 10 is an example of an information processing device that transmits requests to the server 30.
  • The client device 10 is, for example, a smartphone, a tablet terminal, or a personal computer.
  • The user U can give user instructions to the client device 10.
  • The eye tracker 20 is configured to detect the movement of the line of sight of the user U and generate an eye tracking signal related to that movement (an example of "line-of-sight information"). Specifically, the eye tracker 20 measures coordinates indicating the position of the user's line of sight at every predetermined time interval. The eye tracker 20 transmits the eye tracking signal to the client device 10.
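  • As a concrete (non-normative) picture of this signal, the following Python sketch models one eye tracking sample as a (measurement coordinate, measurement time) pair; the type name, fields, and the 16 ms sampling interval are illustrative assumptions, not part of the publication.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GazeSample:
    """One entry of the eye tracking signal: where the line of sight was, and when."""
    x: float   # measurement coordinate (image-space X)
    y: float   # measurement coordinate (image-space Y)
    t_ms: int  # measurement time: elapsed milliseconds since tracking started

def to_samples(raw: List[Tuple[float, float]], interval_ms: int = 16) -> List[GazeSample]:
    """Attach measurement times to raw (x, y) coordinates sampled every interval_ms."""
    return [GazeSample(x, y, i * interval_ms) for i, (x, y) in enumerate(raw)]

print(to_samples([(0.41, 0.52), (0.44, 0.5)]))
# [GazeSample(x=0.41, y=0.52, t_ms=0), GazeSample(x=0.44, y=0.5, t_ms=16)]
```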
  • The server 30 is an example of an information processing apparatus that provides the client device 10 with responses corresponding to requests transmitted from the client device 10.
  • The server 30 is, for example, a web server.
  • The client device 10 includes a storage device 11, a processor 12, an input/output interface 13, a communication interface 14, a display 15, and a camera 16.
  • The storage device 11 is configured to store programs and data.
  • The storage device 11 is, for example, a combination of a ROM (Read Only Memory), a RAM (Random Access Memory), and storage (for example, a flash memory or a hard disk).
  • The programs include, for example, the following programs:
  • - OS (Operating System) program
  • - Application program that executes information processing (for example, a makeup simulation application)
  • The data includes, for example, the following data:
  • - Databases referenced in information processing
  • - Data obtained by executing information processing (that is, information processing execution results)
  • The processor 12 is configured to realize the functions of the client device 10 by running programs stored in the storage device 11.
  • The processor 12 is an example of a computer.
  • The input/output interface 13 is configured to acquire user instructions from input devices connected to the client device 10, to output information to output devices connected to the client device 10, and to acquire the eye tracking signal from the eye tracker 20.
  • The input device is, for example, a keyboard, a pointing device, a touch panel, or a combination thereof.
  • The output device is, for example, the display 15.
  • The communication interface 14 is configured to control communication between the client device 10 and the server 30.
  • The display 15 is configured to display images generated by the processor 12.
  • The camera 16 is configured to capture images (for example, an image of the face of the user of the client device 10).
  • The server 30 includes a storage device 31, a processor 32, an input/output interface 33, and a communication interface 34.
  • The storage device 31 is configured to store programs and data.
  • The storage device 31 is, for example, a combination of ROM, RAM, and storage (for example, flash memory or a hard disk).
  • The programs include, for example, the following programs:
  • - OS program
  • - Application program that executes information processing
  • The data includes, for example, the following data:
  • - Databases referenced in information processing
  • - Results of information processing
  • The processor 32 is configured to realize the functions of the server 30 by running programs stored in the storage device 31.
  • The processor 32 is an example of a computer.
  • The input/output interface 33 is configured to acquire user instructions from input devices connected to the server 30 and to output information to output devices connected to the server 30.
  • The input device is, for example, a keyboard, a pointing device, a touch panel, or a combination thereof.
  • The output device is, for example, a display.
  • The communication interface 34 is configured to control communication between the server 30 and the client device 10.
  • FIG. 3 is an explanatory diagram of an overview of the first embodiment.
  • A simulation image SIMG including the face of a person (for example, the user U) is presented to the user U.
  • In the simulation image SIMG, the right eye RE of the user U is arranged on the right side and the left eye LE of the user U is arranged on the left side, as in a mirror image.
  • The eye tracker 20 generates an eye tracking signal related to the movement of the line of sight of the user U while the user U observes the simulation image SIMG.
  • Based on the eye tracking signal, the distribution of the line of sight is calculated between the first region GR1 located on the right side (that is, the region including the right eye RE) and the region located on the left side, in the coordinate space of the pixels constituting the simulation image SIMG (hereinafter referred to as the "image space").
  • Recommendation information reflecting the potential evaluation of the user U with respect to the simulation image SIMG is presented based on this line-of-sight distribution (that is, the line-of-sight bias) of the user U in the image space of the simulation image SIMG.
  • FIG. 4 is a diagram illustrating a data structure of the user information database according to the first embodiment.
  • The user information database in FIG. 4 stores user information related to users.
  • The user information database includes a "user ID" field, a "user name" field, a "user image" field, and a "user attribute" field. The fields are associated with each other.
  • In the "user ID" field, a user ID for identifying a user (an example of "user identification information") is stored.
  • In the "user image" field, image data of an image including the user's face is stored.
  • The "user attribute" field includes a "sex" field, an "age" field, and an "occupation" field.
  • The "sex" field stores information related to the user's sex.
  • FIG. 5 is a diagram illustrating a data structure of the makeup pattern information master table according to the first embodiment.
  • The makeup pattern information master table of FIG. 5 stores makeup pattern information related to makeup patterns.
  • The makeup pattern information master table includes a "pattern ID" field, a "pattern name" field, and an "item" field.
  • In the "pattern ID" field, a makeup pattern ID for identifying a makeup pattern (an example of "makeup pattern identification information") is stored.
  • In the "pattern name" field, pattern name information (for example, text) regarding the pattern name of the makeup pattern is stored.
  • In the "item" field, item information related to the makeup items used in the makeup simulation is stored.
  • The "item" field includes a "category" field, an "item ID" field, and a "parameter" field.
  • The "category" field, the "item ID" field, and the "parameter" field are associated with each other.
  • In the "item ID" field, an item ID for identifying a makeup item (an example of "makeup item identification information") is stored.
  • In the "parameter" field, a makeup parameter related to the effect of using the makeup item is stored.
  • The makeup parameter is, for example, an image processing parameter applied to the original image (for example, an image of the face of the user U), such as a color conversion parameter for converting the color information of the pixels of the original image.
  • FIG. 6 is a diagram illustrating a data structure of the simulation log information database according to the first embodiment.
  • In the simulation log information database of FIG. 6, information related to the results of makeup simulations is stored in time series.
  • The simulation log information database includes a "simulation ID" field, a "simulation execution date" field, a "simulation condition" field, an "eye tracking signal" field, and a "line-of-sight distribution" field.
  • The simulation log information database is associated with a user ID.
  • In the "simulation ID" field, a simulation ID for identifying a makeup simulation (an example of "simulation identification information") is stored.
  • In the "simulation condition" field, information related to the simulation conditions of the makeup simulation is stored.
  • The "simulation condition" field includes an "original image" field and a "pattern" field.
  • In the "original image" field, image data of the original image (for example, an image of the face of the user U) that is the target of the makeup simulation is stored.
  • In the "pattern" field, the makeup pattern ID of the makeup pattern applied to the image data in the "original image" field is stored.
  • The eye tracking signal generated by the eye tracker 20 is stored in the "eye tracking signal" field.
  • The "eye tracking signal" field includes a "measurement coordinate" field and a "measurement time" field.
  • The "measurement coordinate" field and the "measurement time" field are associated with each other.
  • In the "measurement coordinate" field, measurement coordinates indicating the position of the user's line of sight in the image space are stored.
  • The "measurement time" field stores measurement time information related to the timing at which the eye tracker 20 generated the eye tracking signal.
  • The measurement time information is, for example, the elapsed time from when the eye tracker 20 started generating the eye tracking signal.
  • The movement of the line of sight of the user U is specified by the combination of the coordinates in the "measurement coordinate" field and the information in the "measurement time" field.
  • In the "line-of-sight distribution" field, the line-of-sight distribution (an example of a "line-of-sight parameter") derived from the information in the "eye tracking signal" field is stored.
  • The line-of-sight distribution indicates the distribution of the user's line of sight in the image space of the makeup simulation image.
  • The "line-of-sight distribution" field includes an "R distribution" field and an "L distribution" field.
  • In the "R distribution" field, the distribution rate of the user's line of sight in the first region on the right side (that is, on the right as seen from the observer) of the makeup simulation image is stored.
  • In the "L distribution" field, the distribution rate of the user's line of sight in the second region on the left side (that is, on the left as seen from the observer) of the makeup simulation image is stored.
  • FIG. 7 is a flowchart of information processing according to the first embodiment.
  • FIGS. 8 to 10 are diagrams showing examples of screens displayed in the information processing of FIG. 7.
  • FIG. 11 is a detailed flowchart of the line-of-sight parameter calculation of FIG.
  • FIG. 12 is an explanatory diagram of the calculation of the line-of-sight parameter in FIG.
  • The client device 10 executes determination of an original image (S100). Specifically, the processor 12 displays the screen P100 (FIG. 8) on the display 15.
  • The screen P100 includes an image object IMG100.
  • The image object IMG100 includes an image of the face of the user U.
  • The image object IMG100 is, for example, one of the following:
  • - An image stored in advance in the storage device 11
  • - An image reproduced from the image data in the "user image" field of the user information database (FIG. 4)
  • After step S100, the client device 10 executes generation of a makeup simulation image (S101). Specifically, the processor 12 specifies an arbitrary record in the makeup pattern information master table (FIG. 5). The processor 12 generates a makeup simulation image by applying the information in the "parameter" field of the specified record to the original image determined in step S100.
  • After step S101, the client device 10 executes presentation of the makeup simulation image (S102). Specifically, the processor 12 displays the screen P101 (FIG. 8) on the display 15.
  • The screen P101 includes an image object IMG101.
  • The image object IMG101 is the makeup simulation image generated in step S101.
  • After step S102, the client device 10 executes calculation of the line-of-sight parameter (S103).
  • As shown in FIG. 11, the client device 10 executes acquisition of the eye tracking signal (S1030). Specifically, the processor 12 acquires the eye tracking signal from the eye tracker 20. Based on the eye tracking signal, the processor 12 stores the measurement coordinates and the measurement times in the storage device 11 in association with each other.
  • After step S1030, the client device 10 executes division of the image space (S1031). Specifically, as shown in FIG. 12, the processor 12 calculates the coordinates of the reference line IRL of the image space (for example, a line connecting the midpoint of the line segment joining both eyes and the center of the nose). The processor 12 divides the image space of the simulation image SIMG into a first area A1 and a second area A2 using the reference line IRL as the boundary line.
  • The first area A1 is the area on the right side of the simulation image SIMG (that is, the area including the right eye RE).
  • The second area A2 is the area on the left side of the simulation image SIMG (that is, the area including the left eye LE).
  • After step S1031, the client device 10 executes specification of the region coordinates (S1032). Specifically, the processor 12 specifies the measurement coordinates included in the first area A1, among the measurement coordinates stored in the storage device 11 in step S1030, as the first region coordinates C1, and specifies the measurement coordinates included in the second area A2 as the second region coordinates C2.
  • After step S1032, the client device 10 executes calculation of the line-of-sight distribution rates (S1033).
  • Specifically, the processor 12 uses Equation 1 to calculate the rate E1 at which the user U directs the line of sight to the first area A1 (hereinafter referred to as the "first line-of-sight distribution rate") and the rate E2 at which the user U directs the line of sight to the second area A2 (hereinafter referred to as the "second line-of-sight distribution rate").
  • (Equation 1) E1 = N(C1) / (N(C1) + N(C2)), E2 = N(C2) / (N(C1) + N(C2)), where N(C1) is the total number of first region coordinates C1 and N(C2) is the total number of second region coordinates C2.
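  • As a minimal sketch of steps S1031 to S1033 (not part of the publication), the following Python function classifies measurement coordinates into the two areas and applies Equation 1. For simplicity it assumes a vertical boundary at x = boundary_x, whereas the reference line IRL described above generally connects the midpoint between the eyes and the center of the nose; the function and variable names are hypothetical.

```python
from typing import List, Tuple

def gaze_distribution_rates(
    samples: List[Tuple[float, float]],  # measurement coordinates (x, y) in image space
    boundary_x: float,                   # simplified stand-in for the reference line IRL
) -> Tuple[float, float]:
    """Return (E1, E2) of Equation 1.

    E1 is the share of measurement coordinates in the first area A1 (right side,
    x > boundary_x); E2 is the share in the second area A2 (left side).
    """
    n_c1 = sum(1 for x, _ in samples if x > boundary_x)  # first region coordinates C1
    n_c2 = len(samples) - n_c1                           # second region coordinates C2
    total = n_c1 + n_c2
    if total == 0:
        return 0.0, 0.0
    return n_c1 / total, n_c2 / total

# Example: 6 of 10 gaze samples fall to the right of the boundary
E1, E2 = gaze_distribution_rates([(0.7, 0.4)] * 6 + [(0.2, 0.5)] * 4, boundary_x=0.5)
assert (E1, E2) == (0.6, 0.4)
```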
  • Steps S101 to S103 correspond to the process of the makeup simulation of the first embodiment. That is, the makeup simulation is executed a plurality of times, once for each of a plurality of makeup patterns.
  • In the second iteration of step S101, the processor 12 applies a makeup pattern different from the makeup pattern used in the first iteration to the original image, thereby generating a makeup simulation image different from the makeup simulation image obtained in the first iteration.
  • The processor 12 displays the screen P102 (FIG. 9) on the display 15.
  • The screen P102 includes an image object IMG102.
  • The image object IMG102 is the makeup simulation image generated in the second iteration of step S101.
  • The processor 12 presents the plurality of makeup simulation images individually and calculates a line-of-sight distribution for each makeup simulation image. Thereby, the line-of-sight distribution for each makeup simulation image is obtained.
  • After step S103, the client device 10 executes presentation of the line-of-sight parameters (S104). Specifically, the processor 12 displays the screen P103 (FIG. 10) on the display 15.
  • The screen P103 includes display objects A103a and A103b and image objects IMG101 and IMG102.
  • On the display object A103a, information (for example, the pattern name) related to the makeup pattern used to generate the image object IMG101, and the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 for the image object IMG101, are displayed.
  • On the display object A103b, information (for example, the pattern name) related to the makeup pattern used to generate the image object IMG102, and the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 for the image object IMG102, are displayed.
  • After step S104, the client device 10 executes determination of recommendation information (S105).
  • As a first example, the processor 12 selects, from the line-of-sight distribution rates for the plurality of makeup simulation images calculated in step S103 (that is, the first line-of-sight distribution rates E1 and the second line-of-sight distribution rates E2 displayed on the display objects A103a and A103b of the screen P103), the makeup simulation image having the largest first line-of-sight distribution rate E1.
  • As a second example, the processor 12 selects, from those line-of-sight distribution rates, at least one makeup simulation image whose first line-of-sight distribution rate E1 is equal to or greater than a predetermined value. Note that when the first line-of-sight distribution rate E1 of every makeup simulation image is less than the predetermined value, no recommendation information is presented.
  • The makeup simulation image having the largest first line-of-sight distribution rate E1 in the first example is the makeup simulation image whose right-eye side was watched the most among the plurality of makeup simulation images.
  • That is, the makeup simulation image having the largest first line-of-sight distribution rate E1 is the image with the highest potential evaluation by the user U (for example, the image of greatest interest, the image that feels most like the user's own, the image that feels most compatible, or the image that feels most impressive).
  • The at least one makeup simulation image whose first line-of-sight distribution rate E1 is equal to or greater than the predetermined value in the second example is a makeup simulation image whose right-eye side was watched at least a certain amount among the plurality of makeup simulation images.
  • That is, a makeup simulation image whose first line-of-sight distribution rate E1 is equal to or greater than the predetermined value is an image for which the potential evaluation of the user U exceeds a certain standard (for example, an image of a certain degree of interest, an image that feels personal to a certain degree, an image that feels compatible to a certain degree, or an image that feels impressive to a certain degree).
  • In the example of FIG. 10, the first line-of-sight distribution rate E1 of the image object IMG101 is the larger of the image objects IMG101 and IMG102; therefore, the makeup simulation image corresponding to the image object IMG101 (that is, the makeup simulation image to which the makeup pattern with the pattern name "MC1" is applied) is determined as the recommendation information.
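  • The two determination rules above amount to an argmax and a threshold filter over the first line-of-sight distribution rates. A short illustrative sketch (hypothetical names; not part of the publication):

```python
from typing import List, Optional, Tuple

# Each candidate is (pattern_name, E1), with E1 the first line-of-sight distribution rate.
Candidate = Tuple[str, float]

def recommend_largest(candidates: List[Candidate]) -> Candidate:
    """First example: the image whose right side (first area A1) was watched the most."""
    return max(candidates, key=lambda c: c[1])

def recommend_above(candidates: List[Candidate], threshold: float) -> Optional[List[Candidate]]:
    """Second example: every image with E1 >= threshold; None means nothing is presented."""
    selected = [c for c in candidates if c[1] >= threshold]
    return selected or None

candidates = [("MC1", 0.62), ("MC2", 0.48)]
assert recommend_largest(candidates)[0] == "MC1"  # matches the FIG. 10 outcome
assert recommend_above(candidates, 0.7) is None   # all below the predetermined value
```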
  • After step S105, the client device 10 executes presentation of the recommendation information (S106). Specifically, the processor 12 displays the screen P104 (FIG. 10) on the display 15.
  • The screen P104 includes the image object IMG101, display objects A104a and A104b, and operation objects B104a and B104b.
  • On the display object A104a, the pattern name of the makeup pattern used to generate the makeup simulation image is displayed.
  • On the display object A104b, item information (for example, the item category and item ID) of the makeup items used in the makeup simulation is displayed.
  • The operation object B104a is an object that receives a user instruction for updating the databases on the server 30 with the recommendation information (for example, the contents of the image object IMG101 and the display objects A104a and A104b).
  • The operation object B104b is an object that receives a user instruction to access a website that sells the items displayed on the display object A104b.
  • For example, a URL (Uniform Resource Locator) of that website is assigned to the operation object B104b.
  • After step S106, the client device 10 executes an update request (S107). Specifically, the processor 12 transmits update request data to the server 30.
  • The update request data includes the following information:
  • - The user ID of the user U
  • - The image data determined in step S100
  • - The makeup pattern ID referenced in step S101
  • - Information relating to the execution date of step S103
  • - The measurement coordinates and measurement times stored in the storage device 11 in step S1030
  • - The first and second line-of-sight distribution rates E1 and E2 calculated in step S1033
  • After step S107, the server 30 executes database update (S300). Specifically, the processor 32 adds a new record to the simulation log information database (FIG. 6) associated with the user ID included in the update request data. The following information is stored in each field of the new record.
  • In the "simulation ID" field, a new simulation ID is stored.
  • In the "simulation execution date" field, the information related to the execution date included in the update request data is stored.
  • In the "original image" field, the image data included in the update request data is stored.
  • In the "pattern" field, the makeup pattern ID included in the update request data is stored.
  • In the "measurement coordinate" field, the measurement coordinates included in the update request data are stored.
  • In the "measurement time" field, the measurement times included in the update request data are stored.
  • In the "R distribution" field, the first line-of-sight distribution rate E1 included in the update request data is stored.
  • In the "L distribution" field, the second line-of-sight distribution rate E2 included in the update request data is stored.
  • As described above, according to the first embodiment, recommendation information (for example, at least one of a plurality of makeup simulation images) is determined based on the bias of the line of sight of the user U with respect to the face images (for example, makeup simulation images) presented to the user U.
  • Specifically, a makeup simulation image for which the potential evaluation of the user U is high is presented.
  • Thereby, information for which the potential evaluation of the user U with respect to the face image observed by the user U is high can be presented.
  • The second embodiment is an example in which recommendation information is presented based on the bias of the line of sight with respect to a mirror image of one's own face.
  • FIG. 13 is an external view of a client device according to the second embodiment.
  • In the second embodiment, the client device 10 further includes a half mirror 17.
  • The half mirror 17 is disposed on the display 15. Since the half mirror 17 reflects external light and transmits light emitted from the display 15, the user U can observe the mirror image MI and the simulation image SIMG at the same time.
  • FIG. 14 is a flowchart of information processing according to the second embodiment.
  • FIG. 15 is a detailed flowchart of the line-of-sight parameter calculation of FIG.
  • FIG. 16 is an explanatory diagram of the calculation of the line-of-sight parameter in FIG.
  • FIG. 17 is a diagram illustrating an example of a screen displayed in the information processing of FIG.
  • As shown in FIG. 14, the client device 10 executes calculation of the line-of-sight parameter (S110).
  • As shown in FIG. 15, the client device 10 executes capturing of a user image (S1100) after step S1030 (FIG. 11). Specifically, the camera 16 captures an image of the face of the user U who is observing the mirror image MI reflected in the half mirror 17. The processor 12 generates image data of the image acquired by the camera 16.
  • After step S1100, the client device 10 executes calculation of the mirror image coordinates (S1101). Specifically, the processor 12 analyzes the feature amounts of the image data generated in step S1100, thereby calculating the coordinates and size, in the image space, of each part of the face of the user U (for example, the contour, eyes, nose, and mouth). The processor 12 calculates the distance between the half mirror 17 and the user U based on the coordinates and sizes of the parts of the face and on the position of the camera 16 in the client device 10. Based on the calculated distance and the coordinates in the image space, the processor 12 calculates the coordinates of each part of the face (hereinafter referred to as the "mirror image coordinates") in the coordinate space of the mirror image MI (hereinafter referred to as the "mirror image space").
  • After step S1101, the client device 10 executes division of the mirror image space (S1102). Specifically, as shown in FIG. 16, the processor 12 calculates the reference line MRL of the mirror image space (for example, a line connecting the midpoint of the line segment joining both eyes and the center of the nose) based on the mirror image coordinates of each part calculated in step S1101. The processor 12 divides the mirror image space into a first area A1 and a second area A2 using the reference line MRL as the boundary line.
  • The first area A1 is the area of the mirror image MI located on the right side as seen from the user U (that is, the area including the right eye RE).
  • The second area A2 is the area of the mirror image MI located on the left side as seen from the user U (that is, the area including the left eye LE).
  • After step S1102, the client device 10 executes steps S1032 to S1033 (FIG. 11).
  • After step S110, the client device 10 executes determination of recommendation information (S111). Specifically, the processor 12 calculates a score S related to the evaluation by the user U of the mirror image MI by applying the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 calculated in step S110 to Equation 2.
  • (Equation 2) S = a × E1 + b × E2, where S is the score, a is the coefficient of the first line-of-sight distribution rate, and b is the coefficient of the second line-of-sight distribution rate.
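  • A minimal sketch of Equation 2 as reconstructed above; the publication does not give the values of the coefficients a and b, so the defaults below are illustrative only.

```python
def evaluation_score(e1: float, e2: float, a: float = 1.0, b: float = -1.0) -> float:
    """Score S = a * E1 + b * E2 (Equation 2); the coefficient values are assumptions."""
    return a * e1 + b * e2

print(round(evaluation_score(0.65, 0.35), 2))  # 0.3 with the illustrative coefficients
```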
  • After step S111, the client device 10 executes presentation of the recommendation information (S112). Specifically, the processor 12 displays the screen P110 (FIG. 17A) on the display 15.
  • The screen P110 includes a display object A110.
  • On the display object A110, the score S calculated in step S111 is displayed.
  • As the mirror image MI changes (for example, as the makeup of the user U progresses), the processing results of steps S110 to S112 also change.
  • In this case, the processor 12 displays the screen P111 (FIG. 17B) on the display 15.
  • The screen P111 includes a display object A111.
  • On the display object A111, the score S calculated in the repeated step S111 is displayed. This score S is different from the score S displayed on the display object A110.
  • As described above, according to the second embodiment, recommendation information (for example, the score S related to the evaluation by the user U of the mirror image MI) is presented based on the bias of the line of sight of the user U with respect to the mirror image MI.
  • When the user U applies each makeup item while observing the mirror image MI, the score S represents the potential evaluation by the user U of the makeup using each makeup item. That is, the score S represents the reaction of the user U for each makeup item.
  • In addition, the score S represents the user's potential evaluation of the face in the mirror image MI from the start to the end of the makeup. That is, the score S represents the transition of the reaction of the user U to the quality of the makeup during the period from when the user U starts the makeup until the makeup ends.
  • Although FIG. 13 shows an example in which the client device 10 includes the half mirror 17, the second embodiment is not limited to this configuration.
  • For example, the half mirror 17 may be replaced by a mirror that totally reflects light, and the display 15 may be disposed outside the client device 10. In this case, the user U observes a mirror image of his or her face reflected in the mirror.
  • In that case, the screens P110 to P111 are displayed on a display arranged outside the client device 10.
  • The third embodiment is an example of presenting the user's potential evaluation of an image of an avatar that behaves as the user's incarnation in a computer space (hereinafter referred to as an "avatar image").
  • FIG. 18 is a flowchart of information processing according to the third embodiment.
  • FIGS. 19 and 20 are diagrams showing examples of screens displayed in the information processing of FIG. 18.
  • The information processing of FIG. 18 is executed before or during the play of a computer game.
  • The client device 10 executes presentation of an avatar image (S120). Specifically, the processor 12 displays the screen P120 (FIG. 19) on the display 15.
  • The screen P120 includes an image object IMG120 and a message object M120.
  • The image object IMG120 is an avatar image.
  • The message object M120 is a message for guiding the line of sight of the user U to the image object IMG120.
  • After step S120, the client device 10 executes step S103 (FIG. 7).
  • The combination of steps S120 and S103 in FIG. 18 is executed a plurality of times, once for each of a plurality of avatar images.
  • In the second iteration of step S120, the processor 12 displays the screen P121 (FIG. 19) on the display 15.
  • The screen P121 includes an image object IMG121 and a message object M120.
  • The image object IMG121 is an avatar image different from the avatar image presented in the first iteration of step S120.
  • After steps S120 and S103 have been executed for each avatar image, the client device 10 executes determination of recommendation information (S121). Specifically, the processor 12 applies the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 of each avatar image to Equation 3, thereby calculating the self-identification index Si (an example of recommendation information) of each avatar image.
  • The self-identification index Si is an index of the degree to which the user U feels that the avatar image is his or her own incarnation.
  • (Equation 3) Si = α × E1 + β × E2, where Si is the self-identification index, α is the coefficient of the first line-of-sight distribution rate, and β is the coefficient of the second line-of-sight distribution rate.
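  • A sketch of Equation 3 and of selecting the avatar with the highest self-identification index Si, as done for the screen P123 described below; the coefficient values and avatar names are illustrative assumptions.

```python
from typing import Dict, Tuple

def self_identification_index(e1: float, e2: float, alpha: float = 1.0, beta: float = -1.0) -> float:
    """Si = alpha * E1 + beta * E2 (Equation 3); the coefficient values are assumptions."""
    return alpha * e1 + beta * e2

# (E1, E2) per avatar image; the avatar with the highest Si is recommended (step S121).
rates: Dict[str, Tuple[float, float]] = {"avatar_a": (0.58, 0.42), "avatar_b": (0.71, 0.29)}
best = max(rates, key=lambda name: self_identification_index(*rates[name]))
print(best)  # avatar_b
```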
  • After step S121, the client device 10 executes presentation of the recommendation information (S122). Specifically, the processor 12 displays the screen P122 (FIG. 20) on the display 15.
  • The screen P122 includes display objects A122a and A122b, image objects IMG120 and IMG121, and operation objects B122a and B122b.
  • On the display object A122a, the avatar name of the avatar image corresponding to the image object IMG120, the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 for the image object IMG120, and the self-identification index Si of the image object IMG120 are displayed.
  • On the display object A122b, the avatar name of the avatar image corresponding to the image object IMG121, the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 for the image object IMG121, and the self-identification index Si of the image object IMG121 are displayed.
  • After step S122, the client device 10 executes an update request (S123). Specifically, the processor 12 displays the screen P123 on the display 15.
  • The screen P123 includes a display object A123, operation objects B123a and B123b, and an image object IMG123.
  • On the display object A123, information on the avatar image having the highest self-identification index Si calculated in step S121 (the avatar name, the first line-of-sight distribution rate E1, the second line-of-sight distribution rate E2, and the self-identification index Si) is displayed.
  • The image object IMG123 is the avatar image having the highest self-identification index Si.
  • The operation object B123a is an object that receives a user instruction for confirming use of the image object IMG123 as the avatar image.
  • The operation object B123b is an object that receives a user instruction for declining use of the image object IMG123 as the avatar image.
  • The processor 12 transmits update request data to the server 30.
  • The update request data includes the following information:
  • - The user ID
  • - The image data of the avatar image assigned to the image object IMG123
  • After step S123, the server 30 executes database update (S300). Specifically, the processor 32 refers to the user information database (FIG. 4) and specifies the record associated with the user ID included in the update request data. The processor 32 stores the image data included in the update request data in the "user image" field of the specified record. Thereby, the user U can use the avatar image having the highest self-identification index Si.
  • As described above, according to the third embodiment, a user's potential evaluation of avatar images can be presented. This allows the user to easily select the avatar with the highest potential evaluation (for example, the avatar of greatest interest, the avatar that feels most like the user's own (that is, the avatar the user can identify with), the avatar that feels most compatible, or the avatar that feels most impressive).
  • The fourth embodiment is an example in which a user's potential evaluation of an image is presented based on the pattern of movement of the line of sight.
  • FIG. 23 is an explanatory diagram of an outline of the fourth embodiment.
  • As shown in FIG. 23, a simulation image SIMG including the face of a person (for example, the user U) is presented to the user U.
  • An eye tracking signal related to the movement of the line of sight of the user U while observing the simulation image SIMG is generated.
  • The client device 10 calculates a line-of-sight pattern (an example of a "line-of-sight parameter") relating to the pattern of movement of the line of sight of the user U in the coordinate space of the pixels constituting the simulation image SIMG (hereinafter referred to as the "image space").
  • Recommendation information reflecting the potential evaluation of the user U with respect to the simulation image SIMG is presented based on the line-of-sight pattern of the user U in the image space of the simulation image SIMG.
  • FIG. 24 is a flowchart of the line-of-sight parameter calculation process according to the fourth embodiment.
  • FIGS. 25 and 26 are explanatory diagrams of the calculation of the line-of-sight pattern in FIG. 24.
  • As shown in FIG. 24, the client device 10 executes calculation of the line-of-sight pattern (S1034) after step S1030.
  • Specifically, the processor 12 calculates at least one of the following line-of-sight patterns instead of the line-of-sight distribution:
  • - The stop time, during which the line of sight stops within an area of a predetermined size (hereinafter referred to as a "stop area")
  • - The number of stops, that is, the number of times the line of sight stops in a stop area
  • - The movement range of the line of sight
  • - The movement order of the line of sight
  • - The movement area of the line of sight
  • The processor 12 calculates the stop time Ts using Equation 4.
  • (Equation 4) Ts = n × t, where Ts is the stop time, n is the number of measurement coordinates included in the stop area, and t is the measurement time interval.
  • In the example of FIG. 25A, the line of sight EM of the user U with respect to the image FIMG stops in stop area 1 for 5 seconds and in stop area 2 for 10 seconds.
  • Therefore, the stop time Ts1 of stop area 1 is 5 seconds, the stop time Ts2 of stop area 2 is 10 seconds, and the total stop time ΣTs is 15 seconds.
  • The processor 12 stores the measurement times and the stop times in the storage device 11 in association with each other.
  • The processor 12 calculates the number of stops Ns using Equation 5. (Equation 5) Ns = Σns, where Ns is the number of stops and ns is the number of times the line of sight remained within a stop area for at least a fixed time. In the example of FIG. 25A, the number of stops Ns is 2.
  • The processor 12 stores the measurement times and the number of stops in the storage device 11 in association with each other.
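  • Equation 4 (Ts = n × t) and the stop count can be computed directly from the raw samples. The sketch below assumes a fixed sampling interval t and a circular stop area of radius `radius` (the publication only says "an area of a predetermined size"); the function name and the minimum run length are illustrative.

```python
from typing import List, Tuple

def stops(
    samples: List[Tuple[float, float]],  # measurement coordinates, one every t seconds
    t: float,                            # measurement time interval (Equation 4)
    radius: float,                       # radius of the assumed circular stop area
    min_samples: int = 3,                # run length counted as a "stop" (illustrative)
) -> List[float]:
    """Return the stop time Ts = n * t for each detected stop.

    A stop is a maximal run of consecutive samples that all stay within
    `radius` of the run's first sample. len(result) is the number of stops Ns.
    """
    result, i = [], 0
    while i < len(samples):
        x0, y0 = samples[i]
        j = i + 1
        while j < len(samples) and (samples[j][0] - x0) ** 2 + (samples[j][1] - y0) ** 2 <= radius ** 2:
            j += 1
        n = j - i
        if n >= min_samples:
            result.append(n * t)  # Equation 4: Ts = n * t
        i = j
    return result

# Two stops of 5 s and 10 s at t = 1 s, as in the FIG. 25A example
path = [(0, 0)] * 5 + [(9, 9)] * 10 + [(0, 9)]
ts = stops(path, t=1.0, radius=1.0)
print(ts, len(ts), sum(ts))  # [5.0, 10.0] 2 15.0
```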
  • The processor 12 calculates, as the movement range, the rectangular area defined by the X and Y coordinates located at the end points of the measurement coordinates (hereinafter referred to as "end point coordinates").
  • The movement range is the rectangular area Z defined by the end point coordinates {(X1, Y1), (X1, Y2), (X2, Y1), (X2, Y2)}.
  • The processor 12 stores the end point coordinates defining the rectangular area Z in the storage device 11.
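  • The movement range reduces to the bounding rectangle of the gaze path; a minimal sketch (function name illustrative):

```python
from typing import List, Tuple

def movement_range(samples: List[Tuple[float, float]]) -> Tuple[float, float, float, float]:
    """Bounding rectangle Z of the gaze path: returns (X1, Y1, X2, Y2)."""
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    return min(xs), min(ys), max(xs), max(ys)

print(movement_range([(2, 3), (8, 1), (5, 7)]))  # (2, 1, 8, 7)
```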
  • The processor 12 calculates the movement order by the following method.
  • The processor 12 divides the pixel area of the image FIMG into pixel areas for each part of the face (for example, a left-eye pixel area, a right-eye pixel area, a nose pixel area, and a mouth pixel area for the left eye, right eye, nose, and mouth).
  • The processor 12 calculates the order in which the line of sight EM passes through the pixel areas (hereinafter referred to as the "movement order"). In the example of FIG. 25C, the movement order is {right-eye pixel area, nose pixel area, mouth pixel area, left-eye pixel area}.
  • The processor 12 stores the movement order in the storage device 11.
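  • A sketch of the movement order computation, assuming some lookup that maps a measurement coordinate to a facial pixel area (the lookup used here, based on quadrants, is purely a hypothetical stand-in for real face-part segmentation):

```python
from typing import Callable, List, Tuple

def movement_order(
    samples: List[Tuple[float, float]],
    region_of: Callable[[float, float], str],  # maps a coordinate to a facial pixel area
) -> List[str]:
    """Order in which the gaze passes through the facial pixel areas (consecutive duplicates collapsed)."""
    order: List[str] = []
    for x, y in samples:
        region = region_of(x, y)
        if region and (not order or order[-1] != region):
            order.append(region)
    return order

# Hypothetical lookup: image quadrants standing in for face-part pixel areas
def demo_region(x: float, y: float) -> str:
    return ("right_eye" if x < 0.5 else "left_eye") if y < 0.5 else ("nose" if x < 0.5 else "mouth")

print(movement_order([(0.2, 0.2), (0.3, 0.3), (0.2, 0.8), (0.8, 0.8), (0.8, 0.2)], demo_region))
# ['right_eye', 'nose', 'mouth', 'left_eye']  (the FIG. 25C example order)
```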
  • The processor 12 calculates the movement area based on one of the following:
  • - The number of measurement coordinates in the XY space, where overlapping measurement coordinates are counted as one coordinate (FIG. 26A)
  • - The total number of sections SEC that include a measurement coordinate, when the pixel area of the image FIMG is divided into sections SEC of a predetermined area (FIG. 26B)
  • The processor 12 stores the movement area in the storage device 11.
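  • Both variants of the movement area are counting operations; a minimal sketch (function names illustrative):

```python
from typing import List, Set, Tuple

def movement_area_coords(samples: List[Tuple[float, float]]) -> int:
    """FIG. 26A variant: number of distinct measurement coordinates."""
    return len(set(samples))

def movement_area_sections(samples: List[Tuple[float, float]], section: float) -> int:
    """FIG. 26B variant: number of sections SEC (grid cells of side `section`)
    containing at least one measurement coordinate."""
    cells: Set[Tuple[int, int]] = {(int(x // section), int(y // section)) for x, y in samples}
    return len(cells)

pts = [(0.10, 0.10), (0.15, 0.12), (0.10, 0.10), (0.90, 0.90)]
print(movement_area_coords(pts))                 # 3 (duplicate counted once)
print(movement_area_sections(pts, section=0.5))  # 2
```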
  • In the fourth embodiment, the storage device 11 stores a line-of-sight pattern evaluation model.
  • The line-of-sight pattern evaluation model is a model that receives a line-of-sight pattern as input and outputs an evaluation of the face.
  • In step S105, the processor 12 of the client device 10 refers to the line-of-sight pattern evaluation model stored in the storage device 11 and calculates the evaluation corresponding to the line-of-sight pattern calculated in step S103.
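  • The publication does not specify the form of the line-of-sight pattern evaluation model, only that it maps a pattern to a face evaluation. The stand-in below is therefore a linear scoring rule with made-up weights, purely for illustration of the interface:

```python
from dataclasses import dataclass

@dataclass
class GazePattern:
    """Line-of-sight pattern features named in the text (units illustrative)."""
    total_stop_time: float  # seconds
    num_stops: int
    movement_area: int      # number of sections SEC visited

def evaluate(pattern: GazePattern) -> float:
    """Stand-in line-of-sight pattern evaluation model; weights are assumptions."""
    return -0.1 * pattern.total_stop_time - 0.5 * pattern.num_stops + 0.05 * pattern.movement_area

print(evaluate(GazePattern(total_stop_time=15.0, num_stops=2, movement_area=40)))  # -0.5
```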
  • Thereby, the potential evaluation of the user U with respect to the image FIMG can be presented.
  • As a first example, the stop time is calculated as the makeup proceeds.
  • When the stop time is long in the early part of the measurement (for example, from the start of the measurement until a predetermined proportion of the total measurement time (for example, 1/10 of the total measurement time) has elapsed), it means that the user U feels uncomfortable with the quality of the makeup immediately after starting the makeup.
  • In this case, the processor 12 determines that the evaluation by the user U of the makeup item is low, and presents recommendation information based on that evaluation (for example, a message indicating that the user U feels uncomfortable with the quality of the makeup).
  • When the stop time in the early part of the measurement is short, the processor 12 determines that the evaluation by the user U of the makeup item is high, and presents recommendation information based on that evaluation (for example, a message indicating that the user U is satisfied with the quality of the makeup).
  • As a second example, the movement range is calculated as the makeup proceeds.
  • In general, makeup assessment specialists look at the entire face.
  • In contrast, a person tends to look at his or her own face only partially, unlike when looking at the faces of others. In other words, seeing the entire face means looking at one's own face objectively.
  • Therefore, by calculating the movement range, the processor 12 presents, as recommendation information, the degree of coincidence between the evaluation by the observer (self-evaluation) and the evaluation by others.
  • As a third example, the movement order of the line of sight EM while the user U is looking at the image FIMG is calculated in order to evaluate the makeup after it is completed.
  • The movement order represents the order in which the user U pays attention to the places where the makeup was applied. This order means the priority of the makeup characteristics for the user U.
  • Therefore, the processor 12 identifies the characteristics having high priority for the user U based on the movement order, and presents makeup items that match those high-priority characteristics as recommendation information.
  • As a fourth example, the movement area is calculated as the makeup proceeds.
  • The movement area represents the degree of coincidence between the self-evaluation and the evaluation by others.
  • Therefore, by calculating the movement area, the processor 12 presents the degree of coincidence between the self-evaluation and the evaluation by others as recommendation information.
  • The fourth embodiment can also be applied to the second embodiment.
  • In that case, the potential evaluation of the user U with respect to the mirror image MI can be presented even when a line-of-sight parameter other than the line-of-sight distribution is used.
  • Modification 1 is an example of guiding the user's line of sight during the presentation of the makeup simulation image of FIG. 7 (S102).
  • FIG. 21 is a diagram illustrating a screen example of the first modification. Modification 1 is applicable to any of the first to third embodiments.
  • In step S102 of Modification 1, the processor 12 displays the screen P101a (FIG. 21) on the display 15.
  • The screen P101a is different from the screen P101 (FIG. 8) in that the image object IMG101a is displayed.
  • The image object IMG101a is an object for guiding the line of sight of the user U.
  • The image object IMG101a is displayed in the second area A2 of the image object IMG101.
  • In general, the left visual field is related to holistic processing of the face,
  • and the right visual field is related to partial processing.
  • Since the image object IMG101a for guiding the line of sight is displayed in the second area A2, holistic processing becomes dominant when the user U observes his or her own face, and a more objective self-evaluation becomes possible.
  • One of the purposes of makeup is to create an impression on others.
  • By improving the objectivity of self-evaluation, the difference between self-evaluation and the evaluation by others is reduced. As a result, if the self-evaluation is raised by the makeup of one's own face, the evaluation of the makeup by others is also improved.
  • The difference between self-evaluation and the evaluation by others is smaller when the line of sight is guided than when it is not. Therefore, this numerical value can be used to show the effect of guiding the line of sight (the degree to which the deviation between the evaluation by others and the self-evaluation is reduced by the improvement in objectivity).
  • Note that the degree of coincidence between the self-evaluation and the evaluation by others can be calculated using the movement range or the movement area, and the effect of guiding the line of sight can be shown using the calculation result.
  • Modification 2 is an example in which a user's potential evaluation of a single image is presented.
  • FIG. 22 is a diagram illustrating an example of a screen displayed in the information processing according to the second modification. Modification 2 is applicable to any of the first to fourth embodiments.
  • The information processing of Modification 2 is the same as the information processing of the first embodiment (FIG. 7), except that step S104 (FIG. 7) is omitted.
  • After step S103, in step S105 the processor 12 of the client device 10 calculates the score S related to the evaluation by the user U of the simulation image SIMG by applying the line-of-sight distribution rates calculated in step S103 (that is, the first line-of-sight distribution rate E1 and the second line-of-sight distribution rate E2 of FIG. 12) to Equation 2.
  • In step S106, the processor 12 displays the screen P130 (FIG. 22) on the display 15 instead of the screen P104 (FIG. 10).
  • The screen P130 includes display objects A103a, A104b, and A130, and an image object IMG101.
  • On the display object A130, the score S calculated in step S105 is displayed.
  • In Modification 2, the score S for a single makeup simulation image is displayed on the display 15. Thereby, the potential evaluation of the user U with respect to a single image can be presented.
  • Modification 3 is an example in which the present embodiment is applied to face images other than makeup simulation images and avatar images.
  • For example, the present embodiment can also be applied to a face image to which an arbitrary hairstyle has been applied.
  • In this case, recommendation information (for example, at least one of a plurality of hair simulation images) is determined based on the bias of the line of sight of the user U.
  • The hair simulation image for which the potential evaluation of the user U is high is presented. Thereby, information for which the potential evaluation of the user U with respect to the face image observed by the user U is high can be presented.
  • The present embodiment can also be applied to a face image that has undergone an arbitrary surgical operation.
  • In this case, the face images presented to the user U are, for example, images of the face after surgery has been performed.
  • Recommendation information (for example, at least one of a plurality of surgical simulation images) is determined based on the bias of the line of sight of the user U, and a surgical simulation image for which the potential evaluation of the user U is high is presented. Thereby, information for which the potential evaluation of the user U with respect to the face image observed by the user U is high can be presented.
  • The first aspect of this embodiment is an information processing apparatus (for example, the client device 10) comprising: means for presenting an image including a face to a user (for example, the processor 12 that executes the process of step S102); means for acquiring line-of-sight information related to the movement of the user's line of sight (for example, the processor 12 that executes the process of step S1030); means for calculating the user's line-of-sight bias in the image based on the line-of-sight information (for example, the processor 12 that executes the process of step S1033); means for determining recommendation information about the image based on the line-of-sight bias (for example, the processor 12 that executes the process of step S105); and means for presenting the recommendation information (for example, the processor 12 that executes the process of step S106).
  • According to the first aspect, recommendation information to be presented to the user U is determined based on the bias of the line of sight of the user U with respect to the face image. Thereby, the potential evaluation of the user U with respect to the face observed by the user U can be presented.
  • The second aspect of this embodiment is the information processing apparatus, wherein the means for calculating divides the image space of the image into a first region and a second region at a predetermined boundary line, and calculates a first line-of-sight distribution in the first region and a second line-of-sight distribution in the second region.
  • According to the second aspect, the recommendation information to be presented to the user U is determined based on the line-of-sight distribution in each of the two regions (the first region and the second region) constituting the image space. Thereby, recommendation information according to the bias of the line-of-sight distribution can be presented.
  • The third aspect of this embodiment is the information processing apparatus, wherein the means for presenting presents a plurality of different images individually,
  • the means for calculating calculates the first line-of-sight distribution and the second line-of-sight distribution for each image,
  • and the means for determining determines at least one of the plurality of images as the recommendation information.
  • According to the third aspect, the recommendation information is determined based on the line-of-sight distribution in each of the two regions (the first region and the second region) constituting the image space. Thereby, at least one of the plurality of images can be presented as recommendation information.
  • The fourth aspect of this embodiment is the information processing apparatus, wherein the first region is a region on the right side with respect to the image,
  • the second region is a region on the left side with respect to the image,
  • and the recommendation information is at least one image, among the plurality of images, in which the ratio of the first line-of-sight distribution is equal to or greater than a predetermined value.
  • According to the fourth aspect, an image in which the ratio of the line-of-sight distribution in the region on the right side toward the image is equal to or greater than a predetermined value is presented as recommendation information.
  • The fifth aspect of this embodiment is the information processing apparatus, wherein the image includes a plurality of makeup simulation images in which different makeup is applied to the user's face,
  • the first region is a region on the right side with respect to each makeup simulation image,
  • the second region is a region on the left side with respect to each makeup simulation image,
  • and the recommendation information is the makeup simulation image having the highest ratio of the first line-of-sight distribution among the plurality of makeup simulation images.
  • According to the fifth aspect, the makeup simulation image in which the ratio of the line-of-sight distribution to the region on the right side is highest is presented from among the plurality of makeup simulation images.
  • The sixth aspect of this embodiment is the information processing apparatus,
  • wherein the image is a makeup simulation image in which makeup is applied to the user's face,
  • the first region is a region on the right side with respect to the makeup simulation image,
  • the second region is a region on the left side with respect to the makeup simulation image,
  • and the recommendation information is information corresponding to the ratio of the first line-of-sight distribution in the makeup simulation image.
  • According to the sixth aspect, recommendation information (for example, a score) corresponding to the ratio of the first line-of-sight distribution is presented.
  • Thereby, the potential evaluation of the user U with respect to the makeup simulation image can be known.
  • The seventh aspect of this embodiment is the information processing apparatus, wherein the image includes a plurality of different avatar images,
  • the first region is a region on the right side with respect to each avatar image,
  • the second region is a region on the left side with respect to each avatar image,
  • and the recommendation information is the avatar image having the highest ratio of the first line-of-sight distribution among the plurality of avatar images.
  • According to the seventh aspect, the avatar image having the highest ratio of the line-of-sight distribution to the region on the right side is presented from among the plurality of avatar images.
  • The eighth aspect of this embodiment is the information processing apparatus, wherein the image is an avatar image,
  • the first region is a region on the right side with respect to the avatar image,
  • the second region is a region on the left side with respect to the avatar image,
  • and the recommendation information is information corresponding to the ratio of the first line-of-sight distribution in the avatar image.
  • According to the eighth aspect, recommendation information (for example, a score) corresponding to the ratio of the first line-of-sight distribution is presented.
  • Thereby, the potential evaluation of the user U with respect to the avatar image can be known.
  • The ninth aspect of this embodiment is the information processing apparatus, wherein the image includes at least one of a plurality of images in which different hairstyles are applied to the user's face and images in which different surgical operations have been performed on the user's face.
  • According to the ninth aspect, recommendation information based on images representing at least one of a face to which an arbitrary hairstyle has been applied and a face after surgery is presented.
  • Thereby, the potential evaluation of the user U with respect to such a face can be known.
  • The tenth aspect of this embodiment is an information processing apparatus (for example, the client device 10) comprising: means for acquiring line-of-sight information relating to the movement of the user's line of sight with respect to a mirror image of the user's face (for example, the processor 12 that executes the process of step S1030); means for calculating the user's line-of-sight bias in the mirror image based on the line-of-sight information (for example, the processor 12 that executes the process of step S1033); means for determining recommendation information related to the mirror image based on the line-of-sight bias (for example, the processor 12 that executes the process of step S111); and means for presenting the recommendation information (for example, the processor 12 that executes the process of step S112).
  • According to the tenth aspect, the recommendation information is determined based on the bias of the user's line of sight with respect to the mirror image of the face.
  • The eleventh aspect of this embodiment is the information processing apparatus, wherein the means for calculating divides the mirror image space of the mirror image into a first region and a second region at a predetermined boundary line, and calculates a first line-of-sight distribution in the first region and a second line-of-sight distribution in the second region.
  • According to the eleventh aspect, the recommendation information to be presented to the user is determined based on the line-of-sight distribution in each of the two regions (the first region and the second region) constituting the mirror image space. Thereby, recommendation information according to the bias of the line-of-sight distribution can be presented.
  • The twelfth aspect of this embodiment is the information processing apparatus, wherein the first region includes the right eye, the second region includes the left eye,
  • and the recommendation information is information according to the ratio of the first line-of-sight distribution and is information related to the user's evaluation with respect to the mirror image.
  • According to the twelfth aspect, recommendation information (for example, a score) related to the user's evaluation with respect to the mirror image (for example, a mirror image of his or her own face with makeup applied) is presented.
  • The thirteenth aspect of this embodiment is an information processing apparatus (for example, the client apparatus 10) comprising: means for presenting an image of the user's face (for example, the processor 12 that executes the process of step S102); means for acquiring line-of-sight information related to the movement of the user's line of sight (for example, the processor 12 that executes the process of step S1030); means for calculating, based on the line-of-sight information, a line-of-sight pattern related to the pattern of movement of the user's line of sight in the image (for example, the processor 12 that executes the process of step S1034); means for determining recommendation information to be presented to the user based on the calculated line-of-sight pattern; and means for presenting the determined recommendation information (for example, the processor 12 that executes the process of step S106).
  • According to this aspect, the recommendation information to be presented to the user is determined based on the line-of-sight pattern of the user U with respect to the face image. Thereby, the potential evaluation of the user U with respect to the face observed by the user U can be presented.
  • The fourteenth aspect of this embodiment is an information processing apparatus in which the means for calculating calculates, as a line-of-sight pattern, a stop time, which is the time during which the user's line of sight stops within a stop area of a predetermined range.
  • The fifteenth aspect of this embodiment is an information processing apparatus in which the means for calculating calculates, as a line-of-sight pattern, the number of stops, which is the number of times the user's line of sight has stopped within a stop area of a predetermined range (a sketch covering both the stop time and the number of stops follows).
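One common way to realize both the fourteenth and fifteenth aspects is a simple fixation detector: a "stop" is a run of consecutive samples that stay within a fixed-radius stop area. The sketch below is an illustration under stated assumptions; the radius, the two-sample minimum, and the timestamp units are hypothetical choices, not values from the patent.

```python
# Hypothetical stop detector for the fourteenth and fifteenth aspects.
# detect_stops() returns one duration per stop; len() of the result gives
# the number of stops.
import math

def detect_stops(samples: list[tuple[float, float]],
                 timestamps: list[float],
                 radius: float = 30.0) -> list[float]:
    """Durations of runs of gaze samples that stay within `radius` of the
    run's first sample (a crude dispersion-style fixation criterion)."""
    if not samples:
        return []
    stops: list[float] = []
    run_start, anchor = 0, samples[0]
    for i in range(1, len(samples)):
        if math.hypot(samples[i][0] - anchor[0],
                      samples[i][1] - anchor[1]) > radius:
            if i - run_start > 1:  # require at least two samples per stop
                stops.append(timestamps[i - 1] - timestamps[run_start])
            run_start, anchor = i, samples[i]
    if len(samples) - run_start > 1:
        stops.append(timestamps[-1] - timestamps[run_start])
    return stops

# Stop time (fourteenth aspect): the per-stop durations, or their sum.
# Number of stops (fifteenth aspect): len(detect_stops(samples, timestamps)).
```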
  • The sixteenth aspect of this embodiment is an information processing apparatus in which the means for calculating calculates, as a line-of-sight pattern, the area over which the user's line of sight is distributed (sketched below).
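One way to quantify that spread (again an assumption, not the patent's stated method) is the area of the convex hull of the gaze samples:

```python
# Hypothetical sketch of the sixteenth aspect: area covered by the gaze
# samples, estimated as the area of their convex hull.
import numpy as np
from scipy.spatial import ConvexHull, QhullError

def gaze_area(samples: list[tuple[float, float]]) -> float:
    pts = np.asarray(samples, dtype=float)
    if len(pts) < 3:
        return 0.0
    try:
        hull = ConvexHull(pts)
    except QhullError:  # all samples collinear: no measurable area
        return 0.0
    return hull.volume  # for 2-D input, `volume` is the hull's area
```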
  • The seventeenth aspect of this embodiment is an information processing apparatus in which the means for calculating calculates, as a line-of-sight pattern, the order in which the user's line of sight moves (sketched below).
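For illustration, the order could be recorded as the sequence of named face regions the gaze enters; the rectangular regions and their names below are placeholders, not the patent's definitions.

```python
# Hypothetical sketch of the seventeenth aspect: the order in which the gaze
# visits named regions, with consecutive repeats collapsed.
def gaze_order(samples: list[tuple[float, float]],
               regions: dict[str, tuple[float, float, float, float]]) -> list[str]:
    """regions maps a name (e.g. 'right_eye') to a rectangle (x0, y0, x1, y1)."""
    order: list[str] = []
    for (x, y) in samples:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                if not order or order[-1] != name:
                    order.append(name)
                break
    return order

# Example: gaze_order(samples, {"right_eye": (0, 0, 50, 30),
#                               "left_eye": (60, 0, 110, 30)})
```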
  • The eighteenth aspect of this embodiment is an information processing apparatus in which the means for calculating calculates, as a line-of-sight pattern, the area over which the user's line of sight has moved.
  • The nineteenth aspect of this embodiment is a program for causing a computer (for example, the processor 12) to function as each of the means described above.
  • The storage device 11 may be connected to the client device 10 via the network NW.
  • The storage device 31 may be connected to the server 30 via the network NW.
  • Each step of the above information processing can be executed by either the client device 10 or the server 30.
  • The makeup simulation image was illustrated as an example of the simulation image SIMG.
  • However, this embodiment is also applicable when the simulation image SIMG is at least one of the following: a simulation image of the face after treatment (for example, plastic surgery); and a simulation image of the face including the hair after coloring.
  • Information processing system; 10: Client device; 11: Storage device; 12: Processor; 13: Input/output interface; 14: Communication interface; 15: Display; 16: Camera; 17: Half mirror; 20: Eye tracker; 30: Server; 31: Storage device; 32: Processor; 33: Input/output interface; 34: Communication interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an information processing device provided with: a means for presenting an image including a face to a user; a means for acquiring line-of-sight information relating to the movement of the user's line of sight; a means for calculating, on the basis of the line-of-sight information, a bias in the user's line of sight in the image; a means for determining recommendation information relating to the image on the basis of the line-of-sight distribution; and a means for presenting the recommendation information.
PCT/JP2019/015012 2018-04-27 2019-04-04 Information processing device and program WO2019208152A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-086366 2018-04-27
JP2018086366A JP7253325B2 (ja) Information processing device, program, and information processing method

Publications (1)

Publication Number Publication Date
WO2019208152A1 true WO2019208152A1 (fr) 2019-10-31

Family

ID=68294182

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/015012 WO2019208152A1 (fr) 2018-04-27 2019-04-04 Information processing device and program

Country Status (3)

Country Link
JP (1) JP7253325B2 (fr)
TW (1) TW201945898A (fr)
WO (1) WO2019208152A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017115453A1 (fr) * 2015-12-28 2017-07-06 パナソニックIpマネジメント株式会社 Makeup simulation assistance device, makeup simulation assistance method, and makeup simulation assistance program
WO2018029963A1 (fr) * 2016-08-08 2018-02-15 パナソニックIpマネジメント株式会社 Makeup assistance apparatus and makeup assistance method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A study of image ranking from eye tracking data", PROC. OF THE 2011 ITE ANNUAL CONVENTION, August 2011 (2011-08-01) *
"Differences in eye fixation point distribution among the types of facial impressoin judgement and makeup", PROC OF THE 2015 ITE ANNUAL CONVENTION, 5 August 2015 (2015-08-05) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2020170845A1 (ja) * 2019-02-22 2021-12-23 株式会社 資生堂 Information processing device and program
JP7487168B2 (ja) 2024-05-20 株式会社 資生堂 Information processing device, program, information processing method, and information processing system

Also Published As

Publication number Publication date
JP7253325B2 (ja) 2023-04-06
JP2019192072A (ja) 2019-10-31
TW201945898A (zh) 2019-12-01

Similar Documents

Publication Publication Date Title
JP6055160B1 (ja) Cosmetic information provision system, cosmetic information provision device, cosmetic information provision method, and program
TWI708183B (zh) Personalized makeup information recommendation method
US20150234457A1 (en) System and method for content provision using gaze analysis
US10559102B2 (en) Makeup simulation assistance apparatus, makeup simulation assistance method, and non-transitory computer-readable recording medium storing makeup simulation assistance program
US11579686B2 (en) Method and device for carrying out eye gaze mapping
US20220383389A1 (en) System and method for generating a product recommendation in a virtual try-on session
AU2019240635A1 (en) Targeted marketing system and method
Wu et al. Users’ perceptions of technological features in augmented reality (AR) and virtual reality (VR) in fashion retailing: A qualitative content analysis
JP7278972B2 (ja) Information processing device, information processing system, information processing method, and program for evaluating a monitor's reaction to a product using facial expression analysis technology
WO2019208152A1 (fr) Information processing device and program
US11409823B2 (en) Information processing apparatus, coating material generator, and computer program
JP7406502B2 (ja) Information processing device, program, and information processing method
JP6583754B2 (ja) Information processing device, mirror device, and program
JP6320844B2 (ja) Device, program, and method for estimating emotion based on the degree of influence of parts
JP2019212325A (ja) Information processing device, mirror device, and program
Nystad et al. A comparison of two presence measures based on experimental results
WO2020261531A1 (fr) Information processing device, method for generating a learned makeup simulation model, method for performing a makeup simulation, and program
WO2023228931A1 (fr) Information processing system, information processing device, information processing method, and program
US20240135424A1 (en) Information processing apparatus, information processing method, and program
JP7418709B2 (ja) Computer program, method, and server device
US20220142334A1 (en) Information processing apparatus and computer program
Yumurtacı A theoretical framework for the evaluation of virtual reality technologies prior to use: A biological evolutionary approach based on a modified media naturalness theory
KR20230017440A (ko) Digital cosmetics manufacturing system
Lee et al. Virtual reality content evaluation visualization tool focused on comfort, cybersickness, and perceived excitement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19793708

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19793708

Country of ref document: EP

Kind code of ref document: A1