WO2017103985A1 - Information providing device and information providing method - Google Patents


Info

Publication number
WO2017103985A1
WO2017103985A1 (PCT/JP2015/085022)
Authority
WO
WIPO (PCT)
Prior art keywords
face part
information
attribute
face
part attribute
Prior art date
Application number
PCT/JP2015/085022
Other languages
English (en)
Japanese (ja)
Inventor
Ayumi Kono (河野あゆみ)
Junji Sonoda (園田順治)
Original Assignee
Japan Fashion Stylist Association (一般社団法人日本ファッションスタイリスト協会)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Japan Fashion Stylist Association (一般社団法人日本ファッションスタイリスト協会)
Priority to PCT/JP2015/085022 (WO2017103985A1)
Priority to JP2016543241A (JP6028188B1)
Priority to CN201580084856.4A (CN108292418B)
Publication of WO2017103985A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion

Definitions

  • The present invention relates to an information providing apparatus and an information providing method, and more particularly to an apparatus and method that provide, from a plurality of information groups (such as styling information) that have been systematically classified and accumulated in advance, an information group suited to a subject, based on measurement data and input data on attributes such as the shape and size of parts of the subject's face.
  • An object of the present invention is to provide an information providing apparatus and an information providing method capable of promptly providing a subject with at least one piece of useful and accurate information suited to that subject (hereinafter "subject information"), such as styling that matches the subject, using the evaluation results obtained for the subject's face part attributes.
  • According to one aspect of the present invention, the above object is achieved by an information providing apparatus that includes at least an attribute information acquisition unit, a storage unit, an evaluation unit, a matching information category identification unit, an information selection unit, and an output unit.
  • The attribute information acquisition unit acquires at least one selected face part chosen from the face parts constituting the subject's face (including the eyebrows, eyes, nose, mouth, skin, and hair), at least one face part attribute chosen from the shape/size, color, and texture of that face part, and the measurement result of the face part attribute.
  • The storage unit stores a plurality of pieces of face part information, a plurality of tendencies, a plurality of matching information categories, and a plurality of subject information groups belonging to the respective matching information categories. Each piece of face part information includes at least one face part attribute defined in advance for each face part and at least one reference value for each face part attribute.
  • Each of the plurality of tendencies is related to a face part attribute and is defined as a straight axis that includes the face part attribute reference value of that face part attribute; it is stipulated to classify the plurality of subject information groups into at least two matching information categories, with the at least one face part attribute reference value as the boundary.
  • The evaluation unit compares the face part attribute measurement result of the selected face part, obtained through the attribute information acquisition unit, with the face part attribute reference value called from the storage unit. The matching information category identification unit calls from the storage unit at least one tendency related to each face part attribute and, based on the comparison result, extracts and identifies at least one matching information category associated with each tendency.
  • The information selection unit selects, from the subject information groups in the storage unit, at least one piece of subject information belonging to the identified matching information category, and the output unit outputs each selected piece of subject information.
  • According to another aspect of the present invention, the above object is also achieved by an information providing method comprising: an attribute information acquisition step of acquiring at least one selected face part chosen from the face parts constituting the subject's face (including the eyebrows, eyes, nose, mouth, skin, and hair), at least one face part attribute chosen from the shape/size, color, and texture of the face part, and the measurement result defined in advance for that face part attribute; an evaluation step of receiving the selected face part and the face part attribute measurement result, calling the corresponding face part attribute reference value from the storage unit, comparing the two, and sending the comparison result to the matching information category identification step; a matching information category identification step of calling from the storage unit at least one tendency defined as a straight axis and, based on the comparison result, extracting and identifying at least one matching information category associated with each tendency; a subject information selection step of selecting, from the subject information groups in the storage unit, at least one piece of subject information belonging to the identified matching information category; and an output step of outputting each selected piece of subject information.
  • The above object is also achieved by a program that causes a computer to execute the information providing method of the present invention.
  • With the present invention, the subject who uses the apparatus selects at least one of his or her face parts and face part attributes, measures the face part attribute of the selected face part, and inputs the result to the information providing apparatus. Based on the comparison between the face part attribute measurement result and the corresponding face part attribute reference value, the apparatus can identify, from among the plurality of matching information categories that classify the plurality of subject information groups, at least one matching information category associated with at least one tendency, and output the subject information belonging to that matching information category. The subject can thus be quickly provided, by computer, with appropriate subject information suited to him or her.
  • FIG. 1 is a block diagram schematically showing a hardware configuration example of the present embodiment.
  • FIG. 2 is a block diagram schematically showing a functional configuration example of the arithmetic control unit shown in FIG. 1.
  • FIG. 3 is a block diagram schematically showing an example of the storage unit shown in FIG. 1.
  • The information providing apparatus 1 of the present embodiment stores in advance information useful to a subject, such as statistically accumulated styling information (hereinafter this information is referred to as "subject information", and a plurality of pieces of subject information as a "subject information group"). Based on measurement data and input data (hereinafter "face part attribute values") on attributes such as the shape, size, color, and texture of parts of the subject's face, the apparatus extracts individual information unique to the subject and provides, from the subject information group, at least one piece of subject information suited to the subject.
  • The information providing apparatus 1 includes an arithmetic control unit 2, a storage unit 3, an input unit 5, an output unit 7, and an internal bus 4.
  • Examples of the information providing apparatus 1 include a general-purpose personal computer, a tablet terminal, and a smartphone. These may be stand-alone terminals, or client terminals connected to a server via telecommunication means (not shown) in a client-server system.
  • In the latter case, the storage unit 3 may include a storage unit (not shown) provided on the server side.
  • The internal bus 4 interconnects the above-described units, that is, the arithmetic control unit 2, the storage unit 3, the input unit 5, and the output unit 7.
  • The arithmetic control unit 2 controls each of these units by means of a CPU or the like. In hardware, the arithmetic control unit 2 is an arithmetic control device including a CPU or MPU, ROM, and the like.
  • After the information providing apparatus of the present invention is powered on, the arithmetic control unit 2 executes a startup program stored in the ROM, reads the operating system (OS), various processing drivers, the program for executing the information providing method of the present invention, and various data from the storage unit 3 into the RAM serving as the main storage device, and outputs the display information expanded in the RAM to the display 10 or the printer 11 of the output unit 7.
  • As shown in FIG. 2, the arithmetic control unit 2 in the present embodiment comprises functional units such as an attribute information acquisition unit 21 (which includes an image data analysis unit 25), an evaluation unit 22, a matching information category identification unit 23, and an information selection unit 24. Each of these functional units is described later.
  • The program and various data for executing the information providing method of the present invention may also be stored on a recording medium such as a CD-R or DVD-R, from which they can be read into the storage unit 3 as necessary.
  • The storage unit 3 stores, temporarily or permanently, the programs executed by the arithmetic control unit 2 and various data. The storage unit 3 comprises volatile memory such as various types of RAM, and secondary storage devices such as magnetic disks (e.g., a hard disk drive (HDD)), optical disks, and nonvolatile memory.
  • The storage unit 3 stores in advance a reference face shape (size), a plurality of pieces of face part information, a plurality of tendencies, a plurality of matching information categories, a plurality of subject information groups, and questionnaire information.
  • FIG. 4 shows an example of a two-dimensional reference face shape; the figure also shows the reference face sizes.
  • The reference face size means a dimension, or a product of dimensions, shown in any one of the following (1) to (4) for the reference face shape (see FIG. 4).
  • The face part information consists of a set of: a face part; at least one face part attribute defined in advance for each face part; and at least one reference value defined in advance for each face part attribute (hereinafter "face part attribute reference value").
  • FIG. 5 shows an example of the face part information stored in the storage unit 3 of the present embodiment. As shown in this figure, the face parts include the eyes, nose, mouth, eyebrows, skin, and hair, as well as combinations of at least two of these parts. In the present embodiment, three types of face part attribute are defined: shape/size, color, and quality (texture).
  • Additional face part attributes may be defined as measurement technology advances, so the attributes are not limited to these three types. Likewise, the face part attribute reference values may be changed as appropriate, for example in response to future advances in measurement technology. The face part attribute reference value is mainly expressed quantitatively as a numerical value, but may also be expressed qualitatively by a non-numerical value.
  • As shown in FIG. 5, each face part attribute in principle comprises two attributes. For example, for the shape/size of a face part, an attribute relating to "straight-curve" (whether the part is linear or curved) and an attribute relating to "thin-thick" (whether it is thin or thick) are defined.
  • When the face part is the pupil and the face part attribute is color, an attribute relating to "yellow-blue" (whether the pupil is yellowish or bluish) and an attribute relating to "bright-dark" (whether the color is light or dark) are defined.
  • When the face part is the skin and the face part attribute is texture, an attribute relating to "thin-thick" (whether the skin is thin or thick) and an attribute relating to "matte-gloss" (whether the skin is matte or glossy) are defined.
  • As exceptions, some face part attributes comprise a single attribute. One is the attribute relating to "gradation-contrast", which indicates whether the color difference between the skin and the hair is relatively smaller or larger than the face part attribute reference value. Another is the attribute relating to "gentle-bright", which indicates whether the difference in saturation between the white of the eye (white eye part) and the iris (black eye part) is relatively larger or smaller than the face part attribute reference value.
  • FIG. 5 also shows the relationship between the face part attributes and the tendencies. In the present embodiment, the face part attributes are classified into three tendencies: the attributes relating to "straight-curve", "yellow-blue", and "flat-uneven" correspond to the cool-warm tendency; the attributes relating to "thin-thick", "small-large", and "bright-dark" correspond to the light-deep tendency; and the attributes relating to "gradation-contrast" and "gentle-bright" correspond to the gradation-contrast tendency.
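The attribute-to-tendency assignments just listed can be captured as a simple lookup table; a minimal sketch, mirroring FIG. 5 as described above:

```python
# Mapping of bipolar attributes to tendencies, as listed above (FIG. 5).
ATTRIBUTE_TO_TENDENCY = {
    "straight-curve":     "cool-warm",
    "yellow-blue":        "cool-warm",
    "flat-uneven":        "cool-warm",
    "thin-thick":         "light-deep",
    "small-large":        "light-deep",
    "bright-dark":        "light-deep",
    "gradation-contrast": "gradation-contrast",
    "gentle-bright":      "gradation-contrast",
}

def tendency_for(attribute: str) -> str:
    return ATTRIBUTE_TO_TENDENCY[attribute]
```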
  • Each tendency is related to the face part information; it is named for the impression and image conveyed by the two polar attributes of each face part attribute, and is defined as a straight axis (scale) serving as an index of these bipolar attributes. The face part attribute reference value lies at an intermediate position on this straight axis.
  • The attributes of each face part attribute are not limited to those above: further analysis and examination may uncover attributes having a statistically significant association with the subject information groups, and such attributes can replace the current ones or be adopted in addition.
  • Similarly, the present invention is not limited to these tendencies. Needless to say, as statistical processing is repeated and knowledge is obtained about tendencies that relate the face part attributes to the subject information groups more strongly, tendencies can be newly added or changed.
  • The subject information groups are compiled by associating, with each tendency (more specifically, with each of its polar attributes), information recognized by statistical processing as highly related to face part information of the general population.
  • A subject information group includes styling (product) information suitable for the subject (patterns and colors of ties, shirts, suits, etc.), information on the subject's personality (gentle, energetic, etc.), information on the customer service type (speak to them early, listen to them, explain proactively, etc.), and other information and advice.
  • Other information and advice include, for example, information about tableware; information about interiors such as furniture and furnishings (colors of floors and walls, etc.); information about graphic design such as logo colors; information about advertising, promotion, and product planning (package colors, etc.); and information on styling tests.
  • These subject information groups can be arranged one-dimensionally along each tendency according to the strength of their attributes.
  • Along any one tendency, the subject information group is classified into at least two categories related to the two polar attributes, with at least one face part attribute reference value as the boundary. Hereinafter this category is referred to as a "matching information category". In the simplest case, the subject information group is divided into two matching information categories related to the two poles, with the face part attribute reference value as the boundary.
  • For example, along the cool-warm tendency, the subject information group can be divided into a matching information category for attributes that give a cool impression, such as straight lines and blue (the cool category), at one pole, and a matching information category for attributes that give a warm impression, such as curves and yellow (the warm category), at the other pole.
  • Similarly, along the light-deep tendency, there are a matching information category for attributes that give a light impression, such as bright, small, and light (the light category), and a matching information category for attributes that give a deep (heavy) impression, such as dark, large, and heavy (the deep category). Along the gradation-contrast tendency, there are a matching information category for attributes that give a gentle, blended impression, such as gentleness, familiarity, and matte (the gradation category), and a matching information category for attributes that give a vivid, high-contrast impression (the contrast category).
  • When a tendency includes two or more face part attribute reference values, finer matching information categories can be defined, for example one at or above the largest face part attribute reference value and another at or above the second face part attribute reference value but below the largest. In the present embodiment, however, each tendency includes a single face part attribute reference value.
  • FIG. 6 shows the relationship between the face part attributes, the tendencies, and the matching information categories. It shows an example of an orthogonal coordinate system in which the cool-warm tendency is plotted on the horizontal axis and the light-deep tendency on the vertical axis.
  • In addition, a virtual axis for the gradation-contrast tendency runs diagonally at 45 degrees (upward to the right and downward to the right). Each face part attribute reference value is placed at the origin, where the axes intersect, and the subject information group is arranged two-dimensionally according to the strength of its attributes, for example as measured from the origin.
  • Hereinafter this orthogonal coordinate system is referred to as the "styling map"; in particular, the first quadrant (I) is called "bright taste", the second quadrant (II) "aqua taste", the third quadrant (III) "crystal taste", and the fourth quadrant (IV) "artist".
  • With the cool-warm tendency on the horizontal axis, the first quadrant (bright taste) and the fourth quadrant (artist) form the warm category, and the second quadrant (aqua taste) and the third quadrant (crystal taste) form the cool category.
  • Similarly, with the light-deep tendency on the vertical axis, the first quadrant (bright taste) and the second quadrant (aqua taste) form the light category, and the third quadrant (crystal taste) and the fourth quadrant (artist) form the deep category.
  • Under the gradation-contrast tendency, the first quadrant (bright taste) and the third quadrant (crystal taste), which straddle the 45-degree upward virtual axis passing through the origin, form the contrast category, while the second quadrant (aqua taste) and the fourth quadrant (artist), which straddle the 45-degree downward virtual axis passing through the origin, form the gradation category.
  • Although this styling map is convenient for explaining the relationship between the tendencies and the matching information categories in the present embodiment, that relationship can also be expressed using other methods and structures; the invention is not limited to the use of a styling map.
  • FIG. 7 summarizes, in table form, the relationship between the matching information categories classified by the three tendencies shown in FIG. 5 and the first to fourth quadrants shown in FIG. 6.
  • As shown in this figure, each matching information category of each tendency consists of a combination of two of the first to fourth quadrants, and each of the first to fourth quadrants belongs to three of the six matching information categories. That is, the "bright taste" of the first quadrant is part of the light, warm, and contrast matching information categories; the "aqua taste" of the second quadrant is part of the light, cool, and gradation matching information categories; the "crystal taste" of the third quadrant is part of the cool, deep, and contrast matching information categories; and the "artist" of the fourth quadrant is part of the deep, warm, and gradation matching information categories.
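A sketch of the quadrant and category structure of FIG. 7 as data, useful for checking that each matching information category is exactly one pair of quadrants (names follow the text above):

```python
# Quadrants of the styling map (FIG. 6) and the three matching information
# categories each belongs to, per the table in FIG. 7.
QUADRANTS = {
    "I":   ("bright taste",  {"light", "warm", "contrast"}),
    "II":  ("aqua taste",    {"light", "cool", "gradation"}),
    "III": ("crystal taste", {"cool", "deep", "contrast"}),
    "IV":  ("artist",        {"deep", "warm", "gradation"}),
}

def quadrants_of(category: str) -> set[str]:
    """Each matching information category is the union of exactly two quadrants."""
    return {q for q, (_, cats) in QUADRANTS.items() if category in cats}

assert quadrants_of("warm") == {"I", "IV"}
assert quadrants_of("gradation") == {"II", "IV"}
```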
  • FIG. 8 shows the relationship between each quadrant and the subject information group. In this figure, information on the customer service type, products (styling), and the subject's personality is listed as the subject information group, but the subject information group is not limited to these. As shown in the figure, the subject information group is classified by matching information category.
  • When the matching information category is "bright taste", representative entries are "items with a casual feeling" as products, "bright and cheerful" as personality, and "speak to them early" as customer service type. When the matching information category is "aqua taste", representative entries are "items with a pleasant touch" as products, "kind and gentle" as personality, and "listen to them" as customer service type. When the matching information category is "crystal taste", representative entries are "shiny items" as products, "energetic and quick" as personality, and "respond promptly with quick action" as customer service type. Furthermore, when the matching information category is "artist", representative entries are "deep, rich items" as products, "calm, thinks things through carefully" as personality, and "explain step by step according to the procedure" as customer service type.
  • The storage unit 3 in the present embodiment can further store questionnaire information consisting of a set of questions for learning the subject's personality and behavior patterns, together with a plurality of answer sentences for each question.
  • FIG. 9 shows an example of this questionnaire information. As shown in the figure, the answer sentences for each question are provided so as to correspond to "bright taste", "aqua taste", "crystal taste", and "artist", respectively.
  • For example, for the question about personality ("your personality"), answers such as "I calmly and gently enjoy helping people", "I am energetic, decide and act quickly, and focus on purpose and results", "I am bright and cheerful and value my inspiration and sensibility", and "I am calm, think carefully, and then act" are prepared. The questionnaire information is configured so that the subject selects the items he or she considers applicable.
  • The input unit 5 in the present embodiment comprises an input terminal 9 and a device driver 6. Information input from the input terminal 9 or the device driver 6 is stored in the storage unit 3 via the internal bus 4.
  • The input terminal 9 is a device for inputting the selection of the face part and face part attribute by the operator (usually, but not necessarily, the subject) and for selecting answers (multiple choice) to the questionnaire information described later.
  • The subject selects the face part and face part attribute with reference to the combinations of face part attributes shown in FIG. 5. Selecting and combining at least two such combinations is desirable because, as described later, it makes it possible to narrow down the matching information category to better match the subject, and thus to provide more appropriate subject information.
  • In this way, the input unit 5 functions mainly as the attribute information acquisition unit 21 (see FIG. 2).
  • The input terminal 9 is used mainly to input selection instructions and necessary data. Examples of the input terminal 9 include a mouse, a pointing device, a keyboard, and a touch panel. With the input terminal 9, selection instructions and various data can be entered using a mouse cursor or by pointing at a selection screen displayed on the display 10 of the output unit 7, via operation input buttons on the display 10, or via a separately prepared display screen.
  • The device driver 6 receives various data, such as the face part attribute measurement results, face image data, and 3D scanning data described later, from external devices connected through it, such as measurement instruments, digital cameras, scanner devices, and 3D scanner devices. Based on instruction signals from the arithmetic control unit 2, the device driver 6 executes processing to read in, for example, face image data photographed by a digital camera (not shown), 3D scanning data from a 3D scanner, and measurement results for predetermined attributes.
  • The output unit 7 in this embodiment includes a display 10 and a printer 11.
  • Based on instruction signals from the arithmetic control unit 2, the output unit 7 displays various forms and data on the display 10 or prints them on the printer 11.
  • The forms include a selection screen on which the subject selects the face part and face part attribute, and an output screen for outputting the subject information group selected by the information providing apparatus of the present embodiment and the pieces of subject information corresponding to that group.
  • Examples of the display 10 include a liquid crystal monitor and a projector.
  • By processing the program and various data for executing the information providing method of the present invention, the arithmetic control unit 2 executes the processing of each functional unit: the attribute information acquisition unit 21, the evaluation unit 22, the matching information category identification unit 23, and the information selection unit 24. These processes are performed according to various command signals from the arithmetic control unit 2.
  • The units 21 to 24 are named by function for convenience of explanation; the names do not limit the software configuration. The present invention also includes forms in which part of this processing is executed by hardware mounted on the information providing apparatus of the present embodiment.
  • The attribute information acquisition unit 21 receives the selection of the face part and face part attribute made by the subject through the input device of the input unit 5, or through an external device (keyboard, touch panel, scanner, etc., not shown) via the device driver 6. The attribute information acquisition unit 21 acquires the measured value for each selected face part as the face part attribute measurement result and sends it to the evaluation unit 22.
  • When the face part attribute is shape/size, the face part attribute measurement result includes the face size Lm, Mm, Nm, or Mm × Nm obtained for the subject in the same manner as the reference face sizes (for convenience of explanation, the subscript m is attached to the values corresponding to each of the reference face sizes L, M, N).
  • When the face part attribute is color, the attribute information acquisition unit 21 acquires the color value of each selected face part as the face part attribute measurement result and sends it to the evaluation unit 22. When the selected face part is the eye, the attribute information acquisition unit 21 acquires the color values measured for the white eye part and the black eye part as the face part attribute measurement result and sends them to the evaluation unit 22.
  • When the selected face part is the skin and the face part attribute is texture, the attribute information acquisition unit 21 acquires the measured oil content of the skin as the face part attribute measurement result and sends it to the evaluation unit 22. The method of evaluating skin texture is not limited to the oil content of the skin; other attributes, such as the moisture content of the skin or the ratio of oil content to moisture content, can also be employed.
  • When the selected face part is the hair and the face part attribute is texture, the attribute information acquisition unit 21 acquires the measured hair diameter as the face part attribute measurement result and sends it to the evaluation unit 22.
  • The attribute information acquisition unit 21 can also acquire face image data and 3D scanning data. For this purpose, the attribute information acquisition unit 21 can include an image data analysis unit 25 that analyzes the acquired face image data and 3D scanning data. The face image data may be of the subject or of a person other than the subject.
  • The image data analysis unit 25 detects the selected face part in the acquired face image data by a conventionally known method, measures predetermined attributes such as dimensions, and uses the results as the face part attribute measurement result. In the former case, the information providing apparatus 1 of the present invention provides information about the subject; in the latter case, it provides information about a person other than the subject.
  • When the face part attribute is shape/size, the image data analysis unit 25 obtains, from the face image data sent from the attribute information acquisition unit 21, the predetermined dimensions related to the shape and size of the selected face part and uses them as the face part attribute measurement result.
  • When the selected face part is the skin and the face part attribute is color, the image data analysis unit 25 measures the color value of the skin region excluding the hair portion from the face image data and uses it as the face part attribute measurement result. When the selected face part is the hair or the pupil (black eye part) and the face part attribute is color, the image data analysis unit 25 measures the color value of the hair region or pupil from the face image data and uses it as the face part attribute measurement result. When the selected face part is the eye and the face part attribute is color, the image data analysis unit 25 measures the color values of the white eye part and the black eye part separately from the face image data and uses them as the face part attribute measurement result. Color measurement can be performed by a known colorimetric method.
  • The colorimetric positions in the hair region and the eye (including both the white eye part and the black eye part) can be set by a known method so as to be representative of each region. As long as a representative value is obtained, a single point may be measured, or multiple points may be measured and averaged.
  • When the selected face part is the hair and the face part attribute is texture, the image data analysis unit 25 measures the hair diameter in the face image data (3D scanning data is preferable in this case) and uses it as the face part attribute measurement result. In this case, the image data analysis unit 25 also acquires the cross-sectional shape of the hair using a known image analysis method and uses it as a face part attribute measurement result. The cross-sectional shape of the hair is used because it is a factor in the formation of straight and curly hair, and whether the hair is straight has a great influence on its texture. These face part attribute measurement results are sent to the evaluation unit 22.
  • The evaluation unit 22 compares the face part attribute measurement result sent from the attribute information acquisition unit 21 with the face part attribute reference value called from the storage unit 3 and sends the comparison result to the matching information category identification unit 23 (described later). For example, when the selected face part is at least one of the eyebrows, eyes, nose, and mouth and the face part attribute is shape/size, the evaluation unit 22 compares each measured dimension (including values measured by the image data analysis unit 25) with the reference value of each dimension called from the storage unit 3. At this time, the evaluation unit 22 calls the reference face size L, M, N, or M × N (see FIG. 4) from the storage unit 3 as needed.
  • When the selected face part is the eyebrow, the face part attribute "shape/size" has the two attributes "straight-curve" and "thin-thick" (see FIG. 5).
  • For the former attribute ("straight-curve"), the face part attribute measurement result (measured value) acquired by the evaluation unit 22 is the radius A (unit: cm) of an arc approximating the upper edge of the eyebrow, and the face part attribute reference value called from the storage unit 3 is set to 5 cm. If the subject's measured radius A, after multiplication by the correction value (M × N / (Mm × Nm); the same applies hereinafter), is, for example, 10 cm, that is, larger than the face part attribute reference value, the eyebrow is close to a straight line and the evaluation is "straight"; if it is 3 cm, that is, smaller than the reference value, the curvature is strong and the evaluation is "curve". In other words, the subject's face part attribute measurement result belongs to either the "straight" side or the "curve" side of the "straight-curve" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map (orthogonal coordinate system) shown in FIG. 6), with the face part attribute reference value of 5 cm as the boundary (origin).
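A sketch of this evaluation step, assuming the correction value M × N / (Mm × Nm) and the 5 cm reference from the example above (the face dimensions passed in are illustrative, not values from the patent):

```python
def corrected(value: float, M: float, N: float, Mm: float, Nm: float) -> float:
    """Scale a raw measurement by the correction value M*N / (Mm*Nm)."""
    return value * (M * N) / (Mm * Nm)

def evaluate_eyebrow_arc(radius_cm: float, reference_cm: float = 5.0) -> str:
    # A larger radius means a flatter upper edge, hence "straight";
    # a smaller radius means stronger curvature, hence "curve".
    return "straight" if radius_cm > reference_cm else "curve"

# Illustrative dimensions: the subject's face is slightly smaller than the reference.
A = corrected(9.0, M=10.0, N=14.0, Mm=9.5, Nm=13.3)
print(evaluate_eyebrow_arc(A))  # -> "straight" (corrected radius is about 10 cm)
```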
  • For the latter attribute ("thin-thick"), the face part attribute measurement result (measured value) acquired by the evaluation unit 22 is the ratio B (unit: %) of the eyebrow's width on the eye side to the product M × N (see FIG. 4), and the face part attribute reference value is set to 0.03%. If the subject's measured ratio B, after multiplication by the correction value (M × N / (Mm × Nm)) as above, is 0.02%, that is, smaller than the reference value of 0.03%, it belongs to the "thin" side; if it is 0.05%, that is, larger than the reference value of 0.03%, it belongs to the "thick" side.
  • In other words, the subject's face part attribute measurement result belongs to either the "thin" side or the "thick" side of the "thin-thick" line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.03% as the boundary (origin).
  • When the selected face part is the eye, the face part attribute "shape/size" has the two attributes "straight-curve" and "small-large" (see FIG. 5).
  • For the former attribute ("straight-curve"), the face part attribute measurement result is the ratio C (unit: %) of the eye's height to the product M × N (see FIG. 4), and the face part attribute reference value is set to 0.05%. If the subject's measured ratio C, after multiplication by the correction value (M × N / (Mm × Nm)), is 0.03%, that is, smaller than the reference value of 0.05%, the eye shape is horizontally long and thus "straight"-like; if it is 0.08%, that is, larger, the eye has height and is thus "curve"-like. In other words, the subject's face part attribute measurement result in this case belongs to either the "straight" side or the "curve" side of the "straight-curve" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.05% as the boundary (origin).
  • For the latter attribute ("small-large"), the face part attribute measurement result is the ratio D (unit: %) of the eye's width to the product M × N (see FIG. 4), and the face part attribute reference value is set to 0.2%. If the subject's measured ratio D, after multiplication by the correction value (M × N / (Mm × Nm)), is 0.18%, that is, smaller than the reference value (0.2%), it belongs to the "small" side; if it is 0.25%, that is, larger, it belongs to the "large" side.
  • In other words, the subject's face part attribute measurement result in this case belongs to either the "small" side or the "large" side of the "small-large" line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2% as the boundary (origin).
  • When the selected face part is the nose, the face part attribute "shape/size" has the two attributes "straight-curve" and "small-large", as with the eye (see FIG. 5).
  • For the former attribute, the face part attribute measurement result is the radius E (unit: cm) of an arc approximating the rounding of the nose, and the face part attribute reference value is set to 1 cm. The subject's face part attribute measurement result in this case belongs to either the "straight" side or the "curve" side of the "straight-curve" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 1 cm as the boundary (origin).
  • For the latter attribute, the face part attribute measurement result is the ratio F (unit: %) of the nose's width to the product M × N (see FIG. 4), and the face part attribute reference value is set to 0.2%. If the subject's measured ratio F, after multiplication by the correction value (M × N / (Mm × Nm)), is 0.18%, that is, smaller than the face part attribute reference value of 0.2%, it belongs to the "small" side; if it is 0.25%, it belongs to the "large" side.
  • In other words, the subject's face part attribute measurement result in this case belongs to either the "small" side or the "large" side of the "small-large" line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2% as the boundary (origin).
  • When the selected face part is the mouth, the face part attribute "shape/size" likewise has the two attributes "straight-curve" and "small-large", as with the eye and the nose (see FIG. 5).
  • For the former attribute, the face part attribute measurement result is the ratio G (unit: %) of the mouth's height to the product M × N (see FIG. 4), and the face part attribute reference value is set to 0.1%. The subject's face part attribute measurement result in this case belongs to either the "straight" side or the "curve" side of the "straight-curve" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.1% as the boundary (origin).
  • For the latter attribute, the face part attribute measurement result (measured value) is the ratio H (unit: %) of the mouth's width to the product M × N (see FIG. 4), and the reference value is set to 0.25%. If the subject's measured ratio H, after multiplication by the correction value (M × N / (Mm × Nm)), is 0.23%, that is, smaller than the face part attribute reference value of 0.25%, it belongs to the "small" side; if it is 0.3%, it belongs to the "large" side.
  • In other words, the subject's face part attribute measurement result in this case belongs to either the "small" side or the "large" side of the "small-large" line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.25% as the boundary (origin).
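The worked examples for the eyebrows, eyes, nose, and mouth all follow the same pattern, so the quoted reference values can be collected into one table and evaluated uniformly; a sketch (values and side labels are the ones stated above; the input is assumed to have already been multiplied by the correction value):

```python
# Reference values quoted in the examples above: (face part, attribute) ->
# (reference value, label below the reference, label at or above it).
# Where the text does not state which side is which (nose and mouth
# "straight-curve"), the direction is assumed by analogy with eyebrow and eye.
SHAPE_SIZE_REFERENCES = {
    ("eyebrow", "straight-curve"): (5.0,  "curve",    "straight"),  # arc radius, cm
    ("eyebrow", "thin-thick"):     (0.03, "thin",     "thick"),     # width ratio, % of M*N
    ("eye",     "straight-curve"): (0.05, "straight", "curve"),     # height ratio, %
    ("eye",     "small-large"):    (0.2,  "small",    "large"),     # width ratio, %
    ("nose",    "straight-curve"): (1.0,  "curve",    "straight"),  # arc radius, cm
    ("nose",    "small-large"):    (0.2,  "small",    "large"),     # width ratio, %
    ("mouth",   "straight-curve"): (0.1,  "straight", "curve"),     # height ratio, %
    ("mouth",   "small-large"):    (0.25, "small",    "large"),     # width ratio, %
}

def classify(face_part: str, attribute: str, corrected_value: float) -> str:
    """corrected_value is assumed already multiplied by M*N / (Mm*Nm)."""
    ref, below, at_or_above = SHAPE_SIZE_REFERENCES[(face_part, attribute)]
    return at_or_above if corrected_value >= ref else below

print(classify("eye", "small-large", 0.25))  # -> "large", as in the example
```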
  • When the selected face part is the skin and the face part attribute is color, the evaluation unit 22 compares the face part attribute measurement result with the reference color value called from the storage unit 3 (described later). Here, the face part attribute has the two attributes "yellow-blue" and "bright-dark" (see FIG. 5).
  • The face part attribute measurement result is a color value obtained by measuring a spot representative of the skin color by a conventional method, and the reference value is expressed as a CIE Lab color value. In the following, an example using CIE Lab color values is described, but the present invention is not limited to this: other color values, such as Hunter Lab, RGB, CMYK, XYZ, or Lch, can also be used.
  • The subject's face part attribute measurement result in this case belongs to either the "yellow" side or the "blue" side of the "yellow-blue" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value as the boundary (origin).
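As one possible concrete reading of the "yellow-blue" comparison: in CIE Lab, the b* coordinate runs from bluish (negative) toward yellowish (positive), so a measured skin color could be classified against a reference b* value. This is a hedged sketch; the reference value below is an illustrative placeholder, not a value stated in the patent.

```python
# Hedged sketch: classify skin on the "yellow-blue" attribute via CIE Lab b*.
# b* above the reference -> yellowish (warm side); below -> bluish (cool side).
def classify_yellow_blue(lab: tuple[float, float, float],
                         reference_b: float = 15.0) -> str:  # placeholder reference
    L, a, b = lab
    return "yellow" if b > reference_b else "blue"

print(classify_yellow_blue((65.0, 12.0, 18.5)))  # -> "yellow"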
  • Similarly, when the selected face part is the hair or the pupil and the face part attribute is color, the evaluation unit 22 compares the measurement result with the reference color value (face part attribute reference value) called from the storage unit 3 (described later). Here too, the face part attribute has the two attributes "yellow-blue" and "bright-dark" (see FIG. 5).
  • The subject's face part attribute measurement result in this case belongs to either the "yellow" side or the "blue" side of the "yellow-blue" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value as the boundary (origin).
  • When the selected face part is the eye and the face part attribute is color, the evaluation unit 22 obtains the difference in color value between the white eye part and the black eye part and compares it with the reference value for that difference called from the storage unit. If the saturation difference is relatively larger than the face part attribute reference value, the evaluation is "bright"; if it is relatively smaller, the evaluation is "gentle".
  • When the evaluation is "bright", the face part attribute measurement result belongs to the first and third quadrants, which straddle the 45-degree upward virtual axis (contrast tendency) passing through the origin of the orthogonal coordinate system shown in FIG. 6. When the evaluation is "gentle", it belongs to the second and fourth quadrants, which straddle the 45-degree downward virtual axis (gradation tendency) passing through the origin.
  • When the selected face parts are the skin and the hair and the face part attribute is color, the evaluation unit 22 obtains the difference in color value between the skin portion and the hair portion and compares it with the reference value for that difference called from the storage unit. If the difference is relatively larger than the face part attribute reference value, the evaluation is "contrast"; if it is relatively smaller, the evaluation is "gradation".
  • When the selected face part is the skin and the face part attribute is texture, the face part attribute "texture" has the two attributes "thin-thick" and "matte-gloss" (see FIG. 5). For the former attribute, the evaluation unit 22 compares the measured thickness of the epidermis of the skin with the reference epidermis thickness called from the storage unit 3.
  • For example, the evaluation unit 22 acquires the epidermis thickness (unit: mm), and the epidermis thickness reference value (face part attribute reference value) called from the storage unit 3 is set to 0.2 mm. If the epidermis thickness measured for the subject is 0.1 mm, it is smaller than the reference value of 0.2 mm and is evaluated as "thin"; if it is larger than the reference value of 0.2 mm, it is evaluated as "thick". The face part attribute value thus belongs to either the "thin" side or the "thick" side of the "thin-thick" line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2 mm as the boundary (origin).
  • For the latter attribute, the evaluation unit 22 compares the subject's skin oil content with the skin oil content reference value called from the storage unit 3. In this case, the skin oil content reference value (face part attribute reference value) is 12%. If the subject's measured skin oil content (face part attribute measurement result) is 5%, it is lower than the reference value of 12%, so the skin is matte; conversely, if the measured skin oil content is 20%, it is higher than the reference value of 12%, so the skin is glossy.
  • When the evaluation is matte, the face part attribute measurement result in this case belongs to the second and fourth quadrants, which straddle the 45-degree downward virtual axis (gradation tendency) passing through the origin of the orthogonal coordinate system of FIG. 6. When the evaluation is glossy, it belongs to the first and third quadrants, which straddle the 45-degree upward virtual axis (contrast tendency) passing through the origin.
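A sketch of the "matte-gloss" evaluation and its mapping onto the diagonal (gradation-contrast) axis, using the 12% reference value from the example above:

```python
# Skin oil content reference value from the example above (face part attribute
# reference value). Below it -> matte (gradation side); above -> gloss (contrast side).
SKIN_OIL_REFERENCE = 12.0  # percent

def evaluate_skin_finish(oil_percent: float) -> tuple[str, set[str]]:
    if oil_percent < SKIN_OIL_REFERENCE:
        return "matte", {"II", "IV"}   # quadrants straddling the gradation diagonal
    return "gloss", {"I", "III"}       # quadrants straddling the contrast diagonal

print(evaluate_skin_finish(5.0))   # -> ("matte", {"II", "IV"})
print(evaluate_skin_finish(20.0))  # -> ("gloss", {"I", "III"})
```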
  • When the selected face part is the hair and the face part attribute is texture, the face part attribute "texture" has the two attributes "flat-uneven" and "thin-thick" (see FIG. 5).
  • For the former attribute, the evaluation unit 22 compares the cross-sectional shape of the hair, obtained as the face part attribute measurement result, with the reference cross-sectional shape called from the storage unit 3. The reference cross-sectional shape (face part attribute reference value) called from the storage unit 3 is set to an ellipse. On the "flat-uneven" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the reference ellipse as the boundary (origin), a cross-section closer to a perfect circle belongs to the "flat" side, and one closer to a triangle belongs to the "uneven" side.
  • For the latter attribute, the evaluation unit 22 compares the hair diameter, as the face part attribute measurement result, with the reference hair diameter called from the storage unit 3; the diameter reference value (face part attribute reference value) is set to 0.08 mm. If the subject's measured diameter is 0.06 mm, it is smaller than the reference value of 0.08 mm and is evaluated as "thin"; conversely, if the measured diameter is 0.1 mm, it is larger than the reference value of 0.08 mm and is evaluated as "thick".
  • In other words, the face part attribute measurement result belongs to either the "thin" side or the "thick" side of the "thin-thick" line (corresponding to the vertical axis (light-deep tendency) of the styling map of FIG. 6), with the face part attribute reference value of 0.08 mm as the boundary (origin).
  • The evaluation unit 22 then sends the selected face part, the face part attribute, and the comparison result of the face part attribute measurement to the matching information category identification unit 23.
  • The matching information category identification unit 23 calls from the storage unit at least one predetermined tendency for each selection result of face part and face part attribute sent from the evaluation unit 22 and, based on the comparison result of the face part attribute measurement, identifies at least one matching information category associated with each tendency and sends the identification result to the information selection unit 24.
  • Specifically, the matching information category identification unit 23 calls the tendency corresponding to the face part attribute from the storage unit 3, identifies, based on the face part attribute measurement result, one of the two matching information categories classified by that tendency, and instructs the information selection unit 24 to select the subject information group belonging to the identified matching information category.
  • For example, when the evaluation result sent from the evaluation unit 22 falls on the cool side (for instance, "straight" on the "straight-curve" attribute), the matching information category identification unit 23 calls the cool-warm tendency from the storage unit 3 and identifies, on the styling map of FIG. 6, the second quadrant (aqua taste) and the third quadrant (crystal taste), which are partitioned off by the cool-warm tendency on the horizontal axis, as the matching information category.
  • If the face part is the skin, the face part attribute is color (bright-dark), and the evaluation result in the evaluation unit 22 is "bright", the matching information category identification unit 23 calls the light-deep tendency from the storage unit 3 and identifies, on the styling map of FIG. 6, the first quadrant (bright taste) and the second quadrant (aqua taste), which are partitioned off by the light-deep tendency on the vertical axis, as the matching information category.
  • If the face part is the eye, the face part attribute is color (gentle-bright), and the evaluation result in the evaluation unit 22 is "gentle", the matching information category identification unit 23 calls the gradation-contrast tendency from the storage unit 3 and identifies the second quadrant (aqua taste) and the fourth quadrant (artist), which lie on the gradation side, as the matching information category.
  • When the matching information category identification unit 23 identifies a matching information category for each of at least two of the three tendencies, the identified matching information categories can be superimposed on the styling map shown in FIG. 6 and the overlapping quadrants identified, thereby narrowing down the category. As a result, it becomes possible to provide a subject information group that suits the subject more precisely than the matching information category consisting of any two quadrants obtained from a single tendency.
  • (A) Two attributes of one face part attribute are selected for one selected face part. For example, in FIG. 6, the selected face part is the skin and both attributes of the face part attribute "color" are selected, with evaluations of, say, "warm" and "bright". The matching information category identification unit 23 superimposes the first quadrant (bright taste) and the fourth quadrant (artist), which form the former matching information category, on the first quadrant (bright taste) and the second quadrant (aqua taste), which form the latter, and narrows the result down to the first quadrant (bright taste), where the two overlap, identifying it as the matching information category.
  • If, for the skin, the attribute "matte" of the face part attribute "texture" is further selected in addition to an attribute of the face part attribute "color", the narrowing can continue in the same way: for example, the matching information category identification unit 23 superimposes the first quadrant (bright taste) and the second quadrant (aqua taste), which form the former ("bright") category, on the second quadrant (aqua taste) and the fourth quadrant (artist), which form the gradation category corresponding to "matte", and narrows the result down to the overlapping second quadrant (aqua taste).
  • (B) Face part attributes defined for a plurality of selected face parts are combined. For example, one selected face part is the eyebrow with the face part attribute "shape/size" (evaluated, say, as "straight"), and the other selected face part is the skin with the face part attribute color (bright-dark) (evaluated as "bright"). The matching information category identification unit 23 superimposes the second quadrant (aqua taste) and the third quadrant (crystal taste) of the former matching information category on the first quadrant (bright taste) and the second quadrant (aqua taste) of the latter, and identifies the overlapping second quadrant (aqua taste).
  • The matching information category identification unit 23 sends the matching information category identified in this way to the next stage, the information selection unit 24.
  • When a plurality of matching information categories are identified, the matching information category identification unit 23 sends them to the information selection unit 24 in ranked order; this has the advantage that the best-suited subject information (group), the next best-suited, and so on can be provided to the subject in that order.
  • In that case, the matching information categories are ranked as follows. Each face part attribute measurement result is plotted on the styling map, with the cool-warm tendency on the horizontal axis and the light-deep tendency on the vertical axis. The matching information categories are then ranked according to the number of plots falling in each section unit (for example, each quadrant), and the identification results are sent to the information selection unit 24 in that ranked order.
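A sketch of this ranking step, treating each evaluation's quadrant pair as votes and ranking quadrants by vote count (the counting unit is assumed here to be the quadrant; the text leaves the section unit open):

```python
from collections import Counter

def rank_quadrants(evaluations: list[set[str]]) -> list[str]:
    """Each evaluation contributes one plot (vote) to each quadrant of its pair;
    quadrants are returned in descending order of plot count."""
    votes = Counter(q for quadrant_pair in evaluations for q in quadrant_pair)
    return [q for q, _ in votes.most_common()]

# e.g. cool {II, III}, light {I, II}, gradation {II, IV} -> quadrant II ranks first.
print(rank_quadrants([{"II", "III"}, {"I", "II"}, {"II", "IV"}]))
```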
  • The information selection unit 24 selects, from the subject information groups in the storage unit 3, at least one piece of subject information belonging to the matching information category identified in the identification result from the matching information category identification unit 23, and the output unit 7 outputs each selected piece of subject information, for example by displaying it on the display 10 and printing it on the printer 11. The output method is not limited to these; other methods may be used.
  • FIG. 14 is a flowchart showing the overall procedure of the information providing method using the information providing apparatus shown in FIG. 1; FIG. 15 is a flowchart showing the procedure for measuring the face part attribute value when the face part attribute is shape/size; and FIG. 16 is a flowchart showing the procedure for measuring the face part attribute value when the face part attribute is color.
  • First, the subject performs a gender input operation, and the information providing apparatus 1 of the present invention acquires the gender from the input terminal 9 (step S1).
  • Next, the arithmetic control unit 2 displays a list of the face parts on the display 10, and the subject uses the input terminal 9 to select at least one face part (eyebrows, eyes, nose, mouth, skin, hair) (step S2). The selected face part is thereby determined.
  • Next, using the function of the arithmetic control unit 2, the information providing apparatus 1 displays on the display 10 the face part attributes (shape/size, color, quality (texture)) and the attributes shown in FIG. 5 so that they can be selected, and the subject selects a face part attribute from among them (step S3).
  • Next, the information providing apparatus 1 of the present invention acquires the face part attribute measurement result by one of the following methods (1) and (2) (step S4). Thereafter, the attribute information acquisition unit 21 sends the acquired selection results of the selected face part and face part attribute, together with the face part attribute measurement result, to the evaluation unit 22.
  • the attribute information acquisition unit 21 first acquires face image data (step S21). Specifically, the arithmetic control unit 2 acquires a front image of the subject's face and, if necessary, face image data obtained by capturing a face image from a predetermined angle using a digital camera (not shown) or a 3D scanner device. The face image data is stored in the storage unit 3 via the device driver 6. The face image data only needs to be acquired and processed as digital data. The face image data is not limited to those captured by a digital camera or a 3D scanner device. It may be a thing.
  • the face contour and the hair portion are recognized from the face image data using a known recognition technique, and the face area is extracted (step S22).
  • the face area is defined as an area excluding the hair portion of the face.
Next, the reference face size L of the reference face shape (see FIG. 4) is acquired from the storage unit 3, the face size Lm of the extracted face area is compared with the reference face size L to obtain a correction value, and size matching is performed by enlarging or reducing the face image data by this correction value (step S23).
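If the correction value is read as the ratio L/Lm, step S23 amounts to a uniform rescaling of the image; a minimal sketch under that assumption (treating "face size" as a single length in pixels):

```python
import cv2

def size_match(face_img, measured_size_lm: float, reference_size_l: float):
    """Enlarge or reduce the face image so that its face size Lm
    matches the reference face size L (correction value = L / Lm)."""
    correction = reference_size_l / measured_size_lm
    return cv2.resize(face_img, None, fx=correction, fy=correction,
                      interpolation=cv2.INTER_LINEAR)
```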
Next, based on the size-matched face image data, at least one selected face part is extracted in accordance with the subject's selection (see step S2 in FIG. 14) (step S24), and each measured dimension defined for each selected face part is obtained from the face image data to give the face part attribute measurement result (step S25). A face part attribute measurement result for another face part attribute may also be entered from the input terminal 5, such as a touch panel. For example, when the face part is an eyebrow and the face part attribute is "straight-curve" under "shape/size", the radius of the arc of the eyebrow is measured as described above.
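The disclosure does not specify how the arc radius is computed; one plausible reading, sketched below under that assumption, fits a circle through three eyebrow landmarks (inner end, peak, outer end) and takes its circumradius, so that a straighter eyebrow yields a larger radius:

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three (x, y) points.

    A straighter eyebrow gives a larger radius; a more curved one
    gives a smaller radius, matching a 'straight-curve' tendency axis.
    """
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = (a + b + c) / 2.0
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    if area == 0.0:
        return float("inf")  # collinear landmarks: perfectly straight
    return (a * b * c) / (4.0 * area)

# Example: inner end, peak, and outer end of an eyebrow, in pixels.
print(circumradius((0, 0), (30, 6), (60, 0)))  # ~78.0
```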
The flow for 3D scanning data is substantially the same as that for the 2D face image data described above. Note that steps S22 and S23 may be performed in the reverse order; that is, it is possible to extract the face part and face part attribute according to the subject's selection result, obtain the face part attribute measurement result, and then multiply the measurement result by the correction value to perform size matching.
When the face part attribute is a color, the attribute information acquisition unit 21 first acquires the face image data and sends it to the image data analysis unit 25 (step S31). The image data analysis unit 25 then recognizes the contour of the face and the hair from the received face image data and extracts the face area (step S32).
When the selected face part is the hair and its color is to be measured, the hair region may instead be extracted by recognizing the contour of the face and the hair from the face image data.
Next, a color value measurement point is selected from the face area by an appropriate method (step S33). One measurement point or a plurality of measurement points may be selected, as long as a representative result is obtained. The color value at each color value measurement point is then measured by a known method (step S34). Subsequently, the measured color measurement result (when a plurality of measurement points are used, the average of the color measurement results at the individual points) is stored in the storage unit 3 as the face part attribute value (step S35). A color value can be obtained from 3D scanning data by the same method as described above.
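Steps S33-S35 reduce to sampling and averaging color values; a minimal sketch, assuming the measurement points are pixel coordinates and the image is indexable as image[y][x] -> (r, g, b):

```python
def average_color(image, points):
    """Average the RGB values sampled at the given (x, y) measurement points.

    With a single measurement point this degenerates to that point's
    color value; with several points it returns their channel-wise mean.
    """
    samples = [image[y][x] for x, y in points]
    n = len(samples)
    return tuple(sum(channel) / n for channel in zip(*samples))

# Example with a tiny 2x2 "image".
img = [[(200, 180, 170), (210, 190, 180)],
       [(190, 170, 160), (205, 185, 175)]]
print(average_color(img, [(0, 0), (1, 1)]))  # (202.5, 182.5, 172.5)
```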
In step S5, the subject is asked, by an input operation, whether a measurement for another face part is to be acquired. If so, the process returns to step S2, so that a plurality of face parts can be selected in turn; if no other face part is to be acquired, the process proceeds to step S6.
In step S6, it is determined whether or not to conduct a questionnaire, based on a selection operation entered by the subject (step S6). When the questionnaire is conducted, the attribute information acquisition unit 21 acquires a plurality of items of answer data through the subject's input to the input terminal 9 (step S7).
Multiple questionnaire items may be displayed at once so that the subject selects all answer sentences in a single operation, or the items (each consisting of a question and its set of answer sentences) may be presented over several screens so that the answer sentences are selected in multiple steps. The attribute information acquisition unit 21 stores the acquired answer data in the storage unit 3.
Next, the evaluation unit 22 performs the following evaluation process (step S8). The selected face part, the face part attribute selection result, the face part attribute measurement result, and the face part attribute reference value are each read from the storage unit 3, and the face part attribute measurement result is compared with the face part attribute reference value. In this way, one or more face part attribute evaluation results are obtained.
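The comparison can be pictured as deciding on which side of the reference value the measurement falls along the tendency defined for the attribute; a minimal sketch with placeholder tendency labels (not taken from the patent):

```python
def evaluate(measured: float, reference: float, tendency=("cool", "warm")):
    """Compare a face part attribute measurement result with its reference value.

    Returns the side of the tendency axis the measurement falls on:
    the first label below the reference, the second at or above it.
    """
    below, at_or_above = tendency
    return at_or_above if measured >= reference else below

print(evaluate(42.0, 40.0))                        # -> 'warm'
print(evaluate(65.0, 78.0, ("curve", "straight"))) # -> 'curve'
```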
When a questionnaire has been conducted, the answer data are read from the storage unit 3 and allocated to the four quadrants (bright taste, aqua taste, crystal taste, and artist), and the number of answers is counted for each quadrant (category). The evaluation unit 22 then sends the comparison results and the aggregation results to the matching information category specifying unit 23.
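The per-quadrant aggregation is a simple tally; for example, assuming a hypothetical mapping from answer choices to quadrants (the identifiers below are illustrative only):

```python
from collections import Counter

# Hypothetical mapping from answer choices to styling-map quadrants.
ANSWER_TO_QUADRANT = {
    "a1": "bright taste", "a2": "aqua taste",
    "a3": "crystal taste", "a4": "artist",
}

def tally_answers(answer_ids):
    """Count the questionnaire answers allocated to each quadrant (category)."""
    return Counter(ANSWER_TO_QUADRANT[a] for a in answer_ids)

print(tally_answers(["a1", "a3", "a1", "a4", "a1"]))
# Counter({'bright taste': 3, 'crystal taste': 1, 'artist': 1})
```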
When only one face part attribute has been selected, the matching information category specifying unit 23 specifies, from the comparison result supplied by the evaluation unit 22, one of the two categories lying on either side of the face part attribute reference value along the tendency defined for that face part attribute (a matching information category consisting of two of the four quadrants in the styling map shown in FIG. 6). When two face part attributes defined for one face part are selected by the subject, or when at least two attributes belonging to different tendencies are selected from among the face part attributes defined for a plurality of face parts, the matching information category specifying unit 23 overlays the corresponding matching information categories on the styling map shown in FIG. 6, on the basis of the tendency of each face part attribute and the comparison results from the evaluation unit 22, and specifies the category with the largest number of overlaps as the matching information category with the highest suitability (step S9).
The matching information category with the largest number of overlaps contains the useful information best suited to the subject.
When a questionnaire has been conducted, the matching information category specifying unit 23 further adds the per-quadrant aggregation results of the questionnaire to the specification result on the styling map, and specifies the matching information categories highly suited to the subject in descending order of the number of overlaps (step S9).
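Read this way, step S9 is an overlap count in which each evaluated attribute votes for the two quadrants on its side of the styling map and the questionnaire tally is added on top; a sketch under those assumptions:

```python
from collections import Counter

def specify_matching_category(attribute_results, questionnaire_tally=None):
    """Rank styling-map quadrants by the number of overlapping selections.

    Each evaluated face part attribute contributes a pair of quadrants
    (the two quadrants on its side of the map); the optional questionnaire
    tally is added on top, and quadrants are ranked by total count.
    """
    counts = Counter()
    for quadrant_pair in attribute_results:
        counts.update(quadrant_pair)
    if questionnaire_tally:
        counts.update(questionnaire_tally)
    return counts.most_common()  # most overlapped category first

results = [("bright taste", "aqua taste"),    # e.g. one cool-side result
           ("bright taste", "crystal taste")] # e.g. one light-side result
print(specify_matching_category(results, Counter({"bright taste": 2})))
# [('bright taste', 4), ('aqua taste', 1), ('crystal taste', 1)]
```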
Next, the information selection unit 24 selects target person information from the specified matching information category (step S10) and outputs the selected target person information to the output unit 7 (step S11). The read target person information group is sent to the output unit 7, where it can be displayed on the display 10 or printed by the printer 12.
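Steps S10-S11 then reduce to looking up stored information by category; a sketch with a hypothetical in-memory store standing in for the storage unit 3:

```python
# Hypothetical store of target person information grouped by category.
TARGET_INFO = {
    "bright taste": ["tie: warm yellow stripe", "shirt: ivory"],
    "aqua taste": ["tie: light blue dot"],
}

def select_target_info(ranked_categories, store=TARGET_INFO):
    """Select the target person information for the best-ranked
    category that actually has stored information."""
    for category, _count in ranked_categories:
        if store.get(category):
            return store[category]
    return []

print(select_target_info([("bright taste", 4), ("aqua taste", 1)]))
# ['tie: warm yellow stripe', 'shirt: ivory']
```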
As described above, the information providing apparatus and the information providing method according to the present invention can promptly provide a subject (including providers of face part attribute measurement results and face image data) with useful styling (product) information suited to the subject (patterns and colors of ties, shirts, and suits; cosmetics for makeup; and the like), information on the subject's personality and customer service type, and other information and advice, based on at least one face part attribute measurement result for at least one selected face part chosen from among the plurality of face parts constituting the subject's face.
Other information and advice include, for example, information about tableware; information about interiors such as furniture and furnishings (colors of floors and walls, etc.); information about graphic designs such as logo colors; information about advertisements, promotions, and product planning (colors of packages, etc.); and information on styling tests.
Furthermore, in the information providing apparatus and the information providing method of the present invention, when an information group showing statistically high relevance to a face part and a face part attribute is obtained, that information group can be classified by matching information category, included in the target person information groups, and used for subsequent information provision to subjects.
Likewise, when a face part attribute that is statistically highly related to the target person information groups stored in the storage unit of the information providing apparatus is confirmed, that face part attribute can be additionally stored in the face part information of the storage unit in association with a tendency or a matching information category, and utilized for subsequent information provision to subjects.
The present invention is not limited to the embodiment described above, and its components can be modified at the implementation stage without departing from the gist of the invention. In addition, various inventions can be formed by appropriately combining the plurality of components disclosed in the embodiment; for example, some components may be removed from all the components shown in the embodiment, or components of different embodiments may be combined as appropriate.

Abstract

[Problem] To provide an information providing device and an information providing method that, using an evaluation result obtained for a face part attribute of a subject, can promptly provide the subject with at least one piece of useful and accurate information (hereinafter called "subject information"), such as styling that suits the subject. [Solution] This information providing device and information providing method: with at least one face part constituting the subject's face having been selected, compare at least one face part attribute measurement result acquired for each face part attribute with its reference value; from the comparison result and at least one tendency defined for the face part attribute, identify a matching information category; select, from among a plurality of subject information groups, the subject information group belonging to the matching information category; and output it.
PCT/JP2015/085022 2015-12-15 2015-12-15 Dispositif de fourniture d'informations et procédé de fourniture d'informations WO2017103985A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2015/085022 WO2017103985A1 (fr) 2015-12-15 2015-12-15 Dispositif de fourniture d'informations et procédé de fourniture d'informations
JP2016543241A JP6028188B1 (ja) 2015-12-15 2015-12-15 情報提供装置及び情報提供方法
CN201580084856.4A CN108292418B (zh) 2015-12-15 2015-12-15 信息提供装置及信息提供方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/085022 WO2017103985A1 (fr) 2015-12-15 2015-12-15 Dispositif de fourniture d'informations et procédé de fourniture d'informations

Publications (1)

Publication Number Publication Date
WO2017103985A1 true WO2017103985A1 (fr) 2017-06-22

Family

ID=57326660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/085022 WO2017103985A1 (fr) 2015-12-15 2015-12-15 Dispositif de fourniture d'informations et procédé de fourniture d'informations

Country Status (3)

Country Link
JP (1) JP6028188B1 (fr)
CN (1) CN108292418B (fr)
WO (1) WO2017103985A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749634A (zh) * 2020-12-28 2021-05-04 广州星际悦动股份有限公司 基于美容设备的控制方法、装置以及电子设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5347549B2 (ja) * 2009-02-13 2013-11-20 ソニー株式会社 情報処理装置および情報処理方法
JP5792985B2 (ja) * 2011-04-20 2015-10-14 キヤノン株式会社 情報処理装置、情報処理装置の制御方法、およびプログラム
JP5319829B1 (ja) * 2012-07-31 2013-10-16 楽天株式会社 情報処理装置、情報処理方法及び情報処理プログラム

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11265443A (ja) * 1997-12-15 1999-09-28 Kao Corp 印象の評価方法及び装置
JP2001346627A (ja) * 2000-06-07 2001-12-18 Kao Corp 化粧アドバイスシステム
JP2002132916A (ja) * 2000-10-26 2002-05-10 Kao Corp メイクアップアドバイスの提供方法
JP2006024203A (ja) * 2004-06-10 2006-01-26 Miyuki Iino コーディネート支援システム
JP2010140100A (ja) * 2008-12-09 2010-06-24 Yurakusha:Kk 顔パターン分析システム
JP2013501292A (ja) * 2009-08-04 2013-01-10 ヴェサリス 基準画像に対して対象画像を補正する画像処理方法及びその画像処理装置

Also Published As

Publication number Publication date
CN108292418A (zh) 2018-07-17
CN108292418B (zh) 2022-04-26
JPWO2017103985A1 (ja) 2017-12-14
JP6028188B1 (ja) 2016-11-16

Similar Documents

Publication Publication Date Title
JP6715152B2 (ja) ケア情報取得方法、ケア情報共有方法及びこれらの方法のための電子装置
JP5290585B2 (ja) 肌色評価方法、肌色評価装置、肌色評価プログラム、及び該プログラムが記録された記録媒体
US9760935B2 (en) Method, system and computer program product for generating recommendations for products and treatments
US10255482B2 (en) Interactive display for facial skin monitoring
US9445087B2 (en) Systems, devices, and methods for providing products and consultations
US9563975B2 (en) Makeup support apparatus and method for supporting makeup
JP6128309B2 (ja) メイクアップ支援装置、メイクアップ支援方法、およびメイクアップ支援プログラム
JP4683200B2 (ja) 髪領域の自動抽出方法
WO2018076622A1 (fr) Procédé et dispositif de traitement d'image ainsi que terminal
JP2012113747A (ja) メイクアップシミュレーションシステム、メイクアップシミュレーション装置、メイクアップシミュレーション方法およびメイクアップシミュレーションプログラム
KR20100110793A (ko) 화장방법, 화장 시뮬레이션 장치 및 화장 시뮬레이션 프로그램을 기록한 컴퓨터로 읽을 수 있는 기록매체
JP2010017360A (ja) ゲーム装置、ゲーム制御方法、ゲーム制御プログラム、及び、該プログラムを記録した記録媒体
JP2009082338A (ja) エントロピーを用いた肌の鑑別方法
JP3920747B2 (ja) 画像処理装置
WO2023273247A1 (fr) Procédé et dispositif de traitement d'image de visage, support de stockage lisible par ordinateur, terminal
JP6028188B1 (ja) 情報提供装置及び情報提供方法
JP2016151490A (ja) メイクアップの評価方法、メイクアップの評価システム、及びメイクアップ製品の推奨方法
JP6128356B2 (ja) メイクアップ支援装置およびメイクアップ支援方法
JP4372494B2 (ja) 画像処理装置、画像処理方法、プログラム、記録媒体
JP6209298B1 (ja) 情報提供装置及び情報提供方法
JP6128357B2 (ja) メイクアップ支援装置およびメイクアップ支援方法
JP2004326488A (ja) シミュレーション画像生成サーバ、シミュレーション画像生成システム、シミュレーション画像生成方法及びプログラム
JP2017016418A (ja) ヘアスタイル提案システム
JP2019107071A (ja) 化粧アドバイス方法
JP2024500224A (ja) ヘアスタイリング分析の方法および装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2016543241

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15910676

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15910676

Country of ref document: EP

Kind code of ref document: A1