WO2017103985A1 - Information provision device and information provision method - Google Patents

Information provision device and information provision method Download PDF

Info

Publication number
WO2017103985A1
WO2017103985A1 (PCT/JP2015/085022)
Authority
WO
WIPO (PCT)
Prior art keywords
face part
information
attribute
face
part attribute
Prior art date
Application number
PCT/JP2015/085022
Other languages
French (fr)
Japanese (ja)
Inventor
あゆみ 河野
順治 園田
Original Assignee
一般社団法人日本ファッションスタイリスト協会 (Japan Fashion Stylist Association)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 一般社団法人日本ファッションスタイリスト協会 (Japan Fashion Stylist Association)
Priority to PCT/JP2015/085022 priority Critical patent/WO2017103985A1/en
Priority to JP2016543241A priority patent/JP6028188B1/en
Priority to CN201580084856.4A priority patent/CN108292418B/en
Publication of WO2017103985A1 publication Critical patent/WO2017103985A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Definitions

  • The present invention relates to an information providing apparatus and an information providing method, and more particularly to an apparatus and method for providing, from a plurality of information groups such as styling information that have been systematically classified and accumulated, an information group suited to a subject, based on measurement data and input data on attributes such as the shape and size of parts of the subject's face.
  • An object of the present invention is to provide an information providing apparatus and an information providing method capable of promptly providing a subject with at least one piece of useful and accurate information suited to the subject, such as styling information (hereinafter referred to as "subject information"), using the evaluation results obtained for the subject's face part attributes.
  • According to one aspect of the present invention, the object is achieved by an information providing device including at least an attribute information acquisition unit, a storage unit, an evaluation unit, a matching information category identification unit, an information selection unit, and an output unit.
  • The attribute information acquisition unit acquires at least one selected face part, chosen from face parts including the eyebrows, eyes, nose, mouth, skin, and hair that constitute the subject's face; at least one face part attribute selected from the shape/size, color, and texture of the face part; and the measurement result for that face part attribute.
  • The storage unit stores a plurality of face part information entries, a plurality of tendencies, a plurality of matching information categories, and a plurality of subject information groups belonging to the respective matching information categories. Each face part information entry includes at least one face part attribute defined in advance for each face part and at least one reference value for each face part attribute.
  • Each of the tendencies is related to a face part attribute and is defined as a linear axis that includes the face part attribute reference value of that attribute; it classifies the subject information groups into at least two matching information categories with the at least one face part attribute reference value as a boundary.
  • The evaluation unit compares the face part attribute measurement result acquired through the attribute information acquisition unit with the corresponding face part attribute reference value called from the storage unit. The matching information category identification unit calls from the storage unit at least one tendency related to each face part attribute and, based on the comparison result, extracts and identifies at least one matching information category associated with each tendency. The information selection unit selects, from the subject information groups in the storage unit, at least one piece of subject information belonging to the identified matching information category, and the output unit outputs each piece of selected subject information.
  • According to another aspect of the present invention, the object is also achieved by an information providing method comprising: an attribute information acquisition step of acquiring at least one selected face part, chosen from face parts including the eyebrows, eyes, nose, mouth, skin, and hair that constitute the subject's face, at least one face part attribute selected from the shape/size, color, and texture of the face part, and the measurement result defined for that face part attribute; an evaluation step of receiving the selected face part and the face part attribute measurement result, comparing the latter with the face part attribute reference value called from the storage unit on the basis of the former, and sending the comparison result to the matching information category identification step; a matching information category identification step of calling from the storage unit at least one tendency defined as a linear axis and, based on the comparison result, extracting and identifying at least one matching information category associated with each tendency; a subject information selection step of selecting, from the subject information groups in the storage unit, at least one piece of subject information belonging to the identified matching information category; and an output step of outputting each piece of selected subject information.
  • the above object is also achieved by a program for causing a computer to execute the information providing method of the present invention.
  • According to the present invention, a subject who uses the apparatus selects at least one face part and face part attribute, has the at least one face part attribute measured for the selected face part, and inputs the result into the information providing device.
  • Based on the comparison between the face part attribute measurement result and the corresponding face part attribute reference value, the device identifies, from among the plurality of matching information categories that classify the subject information groups, at least one matching information category associated with at least one tendency, and outputs the subject information belonging to that category. Accordingly, appropriate subject information suited to the subject can be provided quickly using a computer.
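The flow described above — measure, compare with a stored reference value, identify the matching information category, output the belonging subject information — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: all part names, reference values, category names, and information entries are assumptions made for the example.

```python
# Hypothetical face part attribute reference values (the boundaries on each axis).
REFERENCE_VALUES = {
    ("eyebrow", "shape"): 1.0,
    ("skin", "color"): 50.0,
}

# Each face part attribute is tied to a tendency: a linear axis whose two
# poles correspond to two matching information categories.
CATEGORY_POLES = {
    ("eyebrow", "shape"): ("cool", "warm"),   # straight vs. curved
    ("skin", "color"): ("light", "deep"),     # bright vs. dark
}

# Illustrative subject information belonging to each category.
SUBJECT_INFO = {
    "cool": ["blue-toned, linear styling"],
    "warm": ["yellow-toned, curved styling"],
    "light": ["light, casual items"],
    "deep": ["deep, rich items"],
}

def provide_information(face_part, attribute, measured_value):
    """Compare a measurement with its reference value and return the
    matching information category and its subject information."""
    reference = REFERENCE_VALUES[(face_part, attribute)]
    below, at_or_above = CATEGORY_POLES[(face_part, attribute)]
    category = below if measured_value < reference else at_or_above
    return category, SUBJECT_INFO[category]
```

A measurement below the reference falls on one pole, at or above it on the other; for instance `provide_information("eyebrow", "shape", 0.4)` selects the "cool" category under these assumed values.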
  • FIG. 1 is a block diagram schematically showing a hardware configuration example of the present embodiment
  • FIG. 2 is a block diagram schematically showing a functional configuration example of an arithmetic control unit shown in FIG. 1
  • FIG. 3 is a block diagram schematically showing an example of the storage unit shown in FIG. 1
  • The information providing apparatus 1 of the present embodiment is an apparatus that holds information useful to the subject, such as styling information statistically accumulated in advance (hereinafter, this information is referred to as "subject information", and a plurality of pieces of subject information as a "subject information group"), extracts individual information unique to the subject based on measurement data and input data (hereinafter referred to as "face part attribute values") on attributes such as the shape, size, color, and texture of the subject's face parts, and provides from the subject information group at least one piece of subject information suited to the subject.
  • the information providing apparatus 1 includes an arithmetic control unit 2, a storage unit 3, an input unit 5, an output unit 7, and an internal bus 4.
  • examples of the information providing apparatus 1 include a general-purpose personal computer, a tablet terminal, and a smartphone. These may be stand-alone terminals, or may be client terminals connected to a server via a telecommunication means (not shown) in a client / server system.
  • In the client/server case, the storage unit 3 includes a storage unit (not shown) provided on the server side.
  • the internal bus 4 has a function of mutually connecting the above-described units, that is, the arithmetic control unit 2, the storage unit 3, the input unit 5, and the output unit 7.
  • the arithmetic control unit 2 performs various controls on each unit by a CPU or the like.
  • the arithmetic control unit 2 is configured by an arithmetic control device including a CPU, MPU, ROM, and the like in hardware.
  • After the information providing apparatus of the present invention is powered on, the arithmetic control unit 2 executes a startup program stored in the ROM, reads the operating system (OS), various processing drivers, the program for executing the information providing method of the present invention, and various data stored in the storage unit 3 into the RAM serving as the main storage device, and outputs the display information expanded in the RAM to the display 10 or the printer 11 of the output unit 7.
  • As shown in FIG. 2, the arithmetic control unit 2 in the present embodiment includes functional units such as an attribute information acquisition unit 21 (including an image data analysis unit 25), an evaluation unit 22, a matching information category identification unit 23, and an information selection unit 24. Each of these functional units will be described later.
  • The program and various data for executing the information providing method of the present invention may be stored on a recording medium such as a CD-R or DVD-R and read from the medium into the storage unit 3 as necessary.
  • The storage unit 3 stores, temporarily or permanently, the programs executed by the arithmetic control unit 2 and various data.
  • the storage unit 3 includes a volatile memory such as various RAMs, a magnetic disk such as a hard disk drive (HDD), an optical disk, and a secondary storage device such as a nonvolatile memory.
  • The storage unit 3 stores in advance a reference face shape (size), a plurality of face part information entries, a plurality of tendencies, a plurality of matching information categories, a plurality of subject information groups, and questionnaire information.
  • FIG. 4 shows an example of a two-dimensional reference face shape. This figure shows the reference face size.
  • the reference face size means a dimension or a product of the dimensions shown in any one of the following (1) to (4) in the reference face shape (see FIG. 4).
  • The face part information consists of a set comprising a face part, at least one face part attribute defined in advance for each face part, and at least one reference value defined in advance for each face part attribute (hereinafter referred to as a "face part attribute reference value").
  • FIG. 5 shows an example of face part information stored in the storage unit 3 of the present embodiment. As shown in this figure, the face parts include the eyes, nose, mouth, eyebrows, skin, and hair. A face part may also be a combination of at least two of these parts. In the present embodiment, three types of face part attribute are defined: shape/size, color, and quality (texture).
  • Additional face part attributes may be defined as measurement technology advances, so the attributes are not limited to these three types. Likewise, the face part attribute reference values may be changed as appropriate, for example owing to future advances in measurement technology.
  • The face part attribute reference value is mainly expressed quantitatively by a numerical value, but may also be expressed qualitatively by a value other than a numerical value.
  • As shown in FIG. 5, each face part attribute in principle includes two bipolar attributes.
  • For example, when the face part is the eyebrow and the face part attribute is shape/size, an attribute relating to "straight-curve" (whether the eyebrow is linear or curved) and an attribute relating to "thin-thick" (whether it is thin or thick) are defined.
  • When the face part is the pupil and the face part attribute is color, an attribute relating to "yellow-blue" (whether the pupil is yellowish or bluish) is defined, and for the skin color an attribute relating to "bright-dark" (whether the skin color is light or dark) is defined.
  • When the face part is the skin and the face part attribute is texture, an attribute relating to "thin-thick" (whether the skin is thin or thick) and an attribute relating to "matte-glossy" (whether the skin is matte or glossy) are defined.
  • In some cases, a face part attribute includes only one attribute.
  • One example is an attribute relating to "gradation-contrast", which indicates whether the color difference between the skin color and the hair color is relatively smaller or larger than the face part attribute reference value.
  • Another is an attribute relating to "gentle-bright", which indicates whether the saturation of the white and black parts of the eye is relatively larger or smaller than the face part attribute reference value.
  • FIG. 5 also shows the relationship between the facial part attribute and the tendency.
  • The face part attributes are classified into three tendencies. That is, attributes relating to "straight-curve", "yellow-blue", and "flat-concave" correspond to the cool-warm tendency; attributes relating to "thin-thick", "small-large", and "light-dark" correspond to the light-deep tendency; and attributes relating to "gradation-contrast" and "gentle-bright" correspond to the gradation-contrast tendency.
  • Each tendency is related to the face part information and is named after the impression and image conveyed by the two polar attributes of each face part attribute; it is defined as a linear axis (scale) that serves as an index of these bipolar attributes.
  • The face part attribute reference value lies at an intermediate position on this axis.
  • The attributes of each face part attribute are not limited to these: if further analysis and examination discover attributes that have a statistically significant association with the subject information groups, such attributes can replace the current ones or be adopted in addition.
  • Likewise, the present invention is not limited to these tendencies. Needless to say, once repeated statistical processing yields knowledge of tendencies that relate the face part attributes and the subject information groups more strongly, tendencies can be newly added or changed.
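The attribute-to-tendency relationship stated above can be written down directly as a lookup table. The mapping transcribes the stated correspondence; the dictionary layout and key spellings are illustrative choices, not from the patent.

```python
# Mapping of bipolar facial part attributes to the three tendencies.
ATTRIBUTE_TO_TENDENCY = {
    "straight-curve": "cool-warm",
    "yellow-blue": "cool-warm",
    "flat-concave": "cool-warm",
    "thin-thick": "light-deep",
    "small-large": "light-deep",
    "light-dark": "light-deep",
    "gradation-contrast": "gradation-contrast",
    "gentle-bright": "gradation-contrast",
}

def tendency_for(attribute):
    """Return the tendency (linear axis) that a bipolar attribute belongs to."""
    return ATTRIBUTE_TO_TENDENCY[attribute]
```

Because the table is data rather than logic, adding or replacing attributes as new statistically significant associations are found (as the text anticipates) only requires editing the dictionary.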
  • The subject information group is collected by associating, with each of the tendencies (more specifically, with each of the polar attributes of the tendencies), information recognized by statistical processing as being highly related to face part information of the general population in the past.
  • This subject information group includes styling (product) information suited to the subject (patterns and colors of ties, shirts, suits, etc.), information on the subject's personality (gentle, energetic, etc.), information on customer service type (speak to the customer early, listen to the customer, explain proactively, etc.), and other information and advice.
  • Other information and advice include, for example, information about tableware; information about interiors such as furniture and furnishings (colors of floors and walls, etc.); information about graphic designs such as logo colors; information about advertising, promotion, and product planning (package colors, etc.); and information about styling tests.
  • These target person information groups can be arranged one-dimensionally according to the strength of their attributes for each tendency.
  • The subject information group is classified, according to any one tendency, into at least two categories related to the attributes of the two poles, with at least one face part attribute reference value as the boundary.
  • this category is referred to as a “conforming information category”.
  • For each tendency, the subject information group is divided into two matching information categories related to the attributes of both poles, with the face part attribute reference value as the boundary.
  • For example, for the cool-warm tendency, the subject information group can be divided into a matching information category (cool category) for attributes that give a cool impression, such as linear shapes and blue, at one pole, and a matching information category (warm category) for attributes that give a warm impression, such as curved shapes and yellow, at the other pole.
  • For the light-deep tendency, the group is divided into a matching information category (light category) for attributes that give a light impression, such as bright, small, and light, and a matching information category (deep category) for attributes that give a deep (heavy) impression, such as dark, large, and heavy.
  • For the gradation-contrast tendency, it is divided into a matching information category (gradation category) for attributes that give a gentle, blended impression, such as gentleness, familiarity, and matte, and a matching information category (contrast category) for attributes that give a contrasting impression.
  • When a tendency includes a plurality of face part attribute reference values, the subject information group can be divided into three or more matching information categories, for example a category at or above the largest face part attribute reference value and a category at or above the second largest but below the largest.
  • In the following description, each tendency includes a single face part attribute reference value.
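Classification along one tendency axis generalizes naturally from one boundary to several. The sketch below is illustrative only: the boundary values and category names are assumptions, and a measurement equal to a reference value is assumed to fall into the higher category ("greater than or equal to" convention).

```python
import bisect

def classify(measurement, reference_values, categories):
    """Classify a measurement along one tendency axis.

    reference_values must be sorted ascending, and
    len(categories) must equal len(reference_values) + 1.
    """
    assert len(categories) == len(reference_values) + 1
    # bisect_right places a measurement equal to a boundary into the
    # higher category, matching the "at or above the reference" rule.
    return categories[bisect.bisect_right(reference_values, measurement)]
```

With one boundary, `classify(0.3, [0.5], ["cool", "warm"])` yields "cool"; with two boundaries, `classify(0.7, [0.4, 0.8], ["light", "medium", "deep"])` yields "medium", giving three categories as the text describes.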
  • FIG. 6 shows the relationship between the facial part attribute and tendency and the matching information category.
  • FIG. 6 shows an example of an orthogonal coordinate system in which the cool-warm tendency is plotted on the horizontal axis and the light-deep tendency on the vertical axis.
  • In addition, a virtual axis of the gradation-contrast tendency is provided along the 45° diagonals, upward and downward to the right.
  • Each axis takes its face part attribute reference value as the origin, and the axes intersect at this origin.
  • the subject information group is two-dimensionally arranged according to the strength of the attribute from the origin, for example.
  • In the following, this orthogonal coordinate system is referred to as a "styling map", and the first quadrant (I) is called "bright taste", the second quadrant (II) "aqua taste", the third quadrant (III) "crystal taste", and the fourth quadrant (IV) "artist taste".
  • Owing to the cool-warm tendency on the horizontal axis, the first quadrant (bright taste) and the fourth quadrant (artist taste) fall in the warm category, and the second quadrant (aqua taste) and the third quadrant (crystal taste) fall in the cool category.
  • Owing to the light-deep tendency on the vertical axis, the first quadrant (bright taste) and the second quadrant (aqua taste) fall in the light category, and the third quadrant (crystal taste) and the fourth quadrant (artist taste) fall in the deep category.
  • Further, owing to the gradation-contrast tendency along the 45° virtual axes passing through the origin, the first quadrant (bright taste) and the third quadrant (crystal taste) fall in the contrast category, and the second quadrant (aqua taste) and the fourth quadrant (artist taste) fall in the gradation category.
  • Although this styling map is suitable for explaining the relationship between the respective tendencies and the matching information categories in the present embodiment, that relationship can also be shown using other methods and structures; the present invention is not limited to using a styling map.
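Locating a subject on the styling map can be sketched from the two primary axes alone. In this illustrative sketch the inputs are signed scores — the offset of each measurement from its face part attribute reference value (the origin) — with warm and light taken as the positive directions; that sign convention is an assumption, not stated in the patent.

```python
def styling_map_quadrant(cool_warm_score, light_deep_score):
    """Return the styling-map quadrant for signed tendency scores.

    Positive cool_warm_score means warm, negative means cool;
    positive light_deep_score means light, negative means deep.
    """
    if cool_warm_score >= 0 and light_deep_score >= 0:
        return "I (bright taste)"     # warm and light
    if cool_warm_score < 0 and light_deep_score >= 0:
        return "II (aqua taste)"      # cool and light
    if cool_warm_score < 0:
        return "III (crystal taste)"  # cool and deep
    return "IV (artist taste)"        # warm and deep
```

For example, a subject scoring warm on the horizontal axis and light on the vertical axis lands in quadrant I (bright taste), consistent with the category assignments above.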
  • FIG. 7 summarizes the relationship between the conformity information categories classified by the three tendencies shown in FIG. 5 and the first to fourth quadrants shown in FIG. 6 in a table format.
  • each matching information category of each tendency is composed of a combination of two quadrants among the first to fourth quadrants.
  • Each of the first to fourth quadrants is part of three of the six matching information categories. That is, the "bright taste" of the first quadrant is part of the light, warm, and contrast matching information categories; the "aqua taste" of the second quadrant is part of the light, cool, and gradation matching information categories; the "crystal taste" of the third quadrant is part of the cool, deep, and contrast matching information categories; and the "artist taste" of the fourth quadrant is part of the deep, warm, and gradation matching information categories.
  • FIG. 8 shows the relationship between each quadrant and the subject information group.
  • Information on the customer service type, products (styling), and personality of the subject is listed as the subject information group, but the group is not limited thereto.
  • the target person information group is classified by the matching information category.
  • When the matching information category is bright taste, "products with a casual feel" as products, "bright and cheerful" as personality, and "speak to the customer early" as customer service type are representative examples.
  • When the matching information category is aqua taste, "products pleasant to the touch" as products, "gentle and kind" as personality, and "listen to the customer" as customer service type are representative examples.
  • When the matching information category is crystal taste, "shiny products" as products, "energetic and quick" as personality, and "respond promptly with quick action" as customer service type are representative examples. Furthermore, when the matching information category is artist taste, "deep, rich products" as products, "calm, thinks carefully" as personality, and "explain step by step according to procedure" as customer service type are representative examples.
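The representative subject information for the four quadrants listed above can be held as a simple lookup table. The entries transcribe the examples from the text; the key names and field names are illustrative choices.

```python
# Representative subject information per styling-map quadrant (taste).
SUBJECT_INFO_BY_TASTE = {
    "bright":  {"product": "items with a casual feel",
                "personality": "bright and cheerful",
                "service": "speak to the customer early"},
    "aqua":    {"product": "items pleasant to the touch",
                "personality": "gentle and kind",
                "service": "listen to the customer"},
    "crystal": {"product": "shiny items",
                "personality": "energetic and quick",
                "service": "respond promptly with quick action"},
    "artist":  {"product": "deep, rich items",
                "personality": "calm, thinks carefully",
                "service": "explain step by step according to procedure"},
}
```

Once the matching information category (taste) is identified, selecting and outputting the subject information reduces to a dictionary lookup, e.g. `SUBJECT_INFO_BY_TASTE["aqua"]["service"]`.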
  • The storage unit 3 in the present embodiment can further store questionnaire information comprising a set of a plurality of questions for learning the subject's personality and behavior patterns and a plurality of answer sentences for each question.
  • An example of this questionnaire information is shown in FIG. 9. As shown in this figure, the plurality of answer sentences for each question are provided corresponding to "bright taste", "aqua taste", "crystal taste", and "artist taste", respectively.
  • For example, for the question "Personality (your personality)", the questionnaire information prepares answers such as "I like helping people, calmly and gently", "I make decisions energetically and act quickly, focusing on purpose and results", "I am bright and cheerful and value my intuition and sensibility", and "I am calm, think carefully, and then act".
  • The questionnaire information is configured so that the subject selects the items considered to apply to himself or herself.
  • The input unit 5 in the present embodiment includes an input terminal 9 and a device driver 6. Information input from the input terminal 9 or the device driver 6 is stored in the storage unit 3 via the internal bus 4.
  • The input terminal 9 is a device for inputting the selection results for the face part and face part attribute by the operator (usually, but not limited to, the subject) and for selecting answers (multiple choice) to the questionnaire information described later.
  • The subject selects the face part and the face part attribute with reference to the combinations of face part attributes shown in FIG. 5. This is because combining at least two attributes in this way makes it possible, as described later, to further narrow down the matching information category to match the subject and thus to provide more appropriate subject information.
  • the input unit 5 mainly functions as the attribute information acquisition unit 21 (see FIG. 2).
  • the input terminal 9 is used mainly to input a selection instruction and necessary data.
  • Examples of the input terminal 9 include a mouse, a pointing device, a keyboard, and a touch panel.
  • The input terminal 9 can be used to input a selection instruction or various data using a mouse cursor or pointer on the selection screen displayed on the display 10 of the output unit 7.
  • Alternatively, a selection instruction or data can be input using operation input buttons on the display 10 or on a separately prepared display screen.
  • The device driver 6 is a driver for receiving various data, such as the face part attribute measurement results, face image data, and 3D scanning data described later, from external devices connected via it, such as a measurement device, a digital camera, a scanner device, or a 3D scanner device.
  • Based on an instruction signal from the arithmetic control unit 2, the device driver 6 executes processing for reading, for example, face image data photographed by a digital camera (not shown), 3D scanning data from a 3D scanner, and measurement results for predetermined attributes.
  • the output unit 7 in this embodiment includes a display 10 and a printer 11.
  • the output unit 7 displays various formats and data on the display 10 based on the instruction signal from the arithmetic control unit 2 or causes the printer 11 to print.
  • The various formats include a selection screen on which the subject selects the face part and face part attribute, and an output screen for outputting the subject information group selected by the information providing apparatus of the present embodiment and the plurality of pieces of subject information corresponding to that group.
  • Examples of the display 10 include a liquid crystal monitor and a projector.
  • By processing the program and various data for executing the information providing method of the present invention, the arithmetic control unit 2 executes the processing of each functional unit, namely the attribute information acquisition unit 21, the evaluation unit 22, the matching information category identification unit 23, and the information selection unit 24. These processes are performed according to various command signals from the arithmetic control unit 2.
  • the units 21 to 24 are classified by name according to function for convenience of explanation, and do not limit the software configuration.
  • the present invention includes a form in which a part of the processing by these is executed by hardware mounted on the information providing apparatus of the present embodiment.
  • The attribute information acquisition unit 21 receives the selection results for the selected face part and face part attribute through the subject's operation of the input device of the input unit 5, or from an external device (keyboard, touch panel, scanner, etc., not shown) via the device driver 6.
  • The attribute information acquisition unit 21 acquires the measured value for each selected face part as a face part attribute measurement result and sends it to the evaluation unit 22.
  • The face part attribute measurement results include the face sizes Lm, Mm, Nm, or Mm × Nm obtained for the subject in the same manner as the reference face sizes (for convenience of explanation, the subscript m is appended to L, M, N, etc. to denote the subject's values corresponding to the respective reference face sizes).
  • When the face part attribute is color, the attribute information acquisition unit 21 acquires the color value of each selected face part as a face part attribute measurement result and sends it to the evaluation unit 22.
  • When the selected face part is the eye, the attribute information acquisition unit 21 acquires the color values measured for each of the white and black parts of the eye as face part attribute measurement results and sends them to the evaluation unit 22.
  • When the selected face part is the skin and the face part attribute is texture, the attribute information acquisition unit 21 acquires the measured value of the skin's oil amount as a face part attribute measurement result and sends it to the evaluation unit 22.
  • The skin texture evaluation method is not limited to the amount of oil in the skin; other attributes, such as the amount of skin moisture or the ratio of oil to moisture, can also be employed.
  • When the selected face part is the hair, the attribute information acquisition unit 21 acquires the measured value of the hair diameter as a face part attribute measurement result and sends it to the evaluation unit 22.
  • the attribute information acquisition unit 21 can also acquire face image data and 3D scanning data. Therefore, the attribute information acquisition unit 21 can include an image data analysis unit 25 to analyze the acquired face image data and 3D scanning data.
  • The face image data or the like may be of the subject or of a person other than the subject.
  • The image data analysis unit 25 detects the selected face part from the acquired face image data by a conventionally known method, then measures a predetermined attribute such as a dimension, and sets the result as the face part attribute measurement result.
  • In the former case, the information providing apparatus 1 of the present invention provides information about the subject; in the latter case, it provides information about a person other than the subject.
  • For example, when the face part attribute is shape/size, the image data analysis unit 25 obtains predetermined dimensions related to the shape and size of the selected face part from the face image data sent from the attribute information acquisition unit 21 and uses them as face part attribute measurement results.
  • When the selected face part is the skin and the face part attribute is color, the image data analysis unit 25 measures the color value of the skin region excluding the hair portion from the face image data and uses it as the face part attribute measurement result. When the selected face part is the hair or the pupil (the black part of the eye) and the face part attribute is color, the image data analysis unit 25 measures the color value of the hair region or the pupil from the face image data and uses it as the face part attribute measurement result. Furthermore, when the selected face part is the eye and the face part attribute is color, the image data analysis unit 25 measures color values for each of the white and black parts of the eye from the face image data and uses them as face part attribute measurement results. Color measurement can be performed by a known color measurement method.
  • The colorimetric positions in the hair region and the eye (including both the white part and the black part) can be set by a known method so as to be representative of each of these regions. As long as a representative value is obtained, the color may be measured at only one point, or a plurality of points may be measured and averaged.
  • When the selected face part is the hair and the face part attribute is texture, the image data analysis unit 25 measures the diameter of the hair of the selected face part in the face image data (3D scanning data is preferable in this case) and takes this as the face part attribute measurement result. Furthermore, in this case, the image data analysis unit 25 acquires the cross-sectional shape of the hair using a known image analysis method and also takes this as a face part attribute measurement result. The cross-sectional shape of the hair is used because it is a factor in the formation of straight and curly hair, and whether the hair is straight or not has a great influence on its texture. These face part attribute measurement results are sent to the evaluation unit 22.
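One conventional way to obtain an arc radius from detected edge points, as used later for the eyebrow radius A and the nose radius E, is a least-squares circle fit. The sketch below assumes the edge points have already been extracted from the face image data; the algebraic (Kasa) fitting method is one known choice, not something the text itself prescribes.

```python
import numpy as np

def fit_arc_radius(points):
    """Fit a circle to 2D points by the algebraic (Kasa) least-squares
    method and return its radius. Solves 2*cx*x + 2*cy*y + c = x^2 + y^2,
    where c = r^2 - cx^2 - cy^2, for the centre (cx, cy)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    design = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(design, rhs, rcond=None)
    return float(np.sqrt(c + cx ** 2 + cy ** 2))

# Hypothetical edge points sampled from an arc of radius 5 centred at the origin
theta = np.linspace(0.2, 1.0, 20)
radius = fit_arc_radius(np.column_stack([5 * np.cos(theta), 5 * np.sin(theta)]))
```

For exact arc points the recovered radius matches the generating radius; for noisy edge pixels the fit averages out the noise.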
  • The evaluation unit 22 compares the face part attribute measurement result sent from the attribute information acquisition unit 21 with a face part attribute reference value called from the storage unit 3 (described later), and sends the comparison result to the matching information category specifying unit 23 (described later). For example, when the selected face part is at least one of the eyebrows, eyes, nose, and mouth and the face part attribute is shape/size, the evaluation unit 22 compares each dimension sent for each selected face part (including the values measured by the image data analysis unit 25) with the reference value of each dimension called from the storage unit 3. At this time, the evaluation unit 22 calls the reference face sizes L, M, and N, or the product M × N (see FIG. 4), from the storage unit 3.
  • When the selected face part is the eyebrow, the face part attribute “shape/size” has two attributes, “straight (linear)-curve (curved)” and “thin-thick” (see FIG. 5).
  • For the former attribute (“straight-curve”), the face part attribute measurement result (measured value) acquired by the evaluation unit 22 is the radius A (unit: cm) of an arc approximating the upper edge of the eyebrow, and the face part attribute reference value called from the storage unit 3 is set to 5 cm.
  • If the measured value of the radius A obtained for the subject by the usual method is, after being multiplied by the correction value (M × N / (Mm × Nm); the same applies hereinafter), for example 10 cm, which is larger than this face part attribute reference value, the eyebrow forms a gently sloping, horizontally long line and is therefore evaluated as “straight”; if it is 3 cm, which is smaller than the reference value, the arc is tight and the eyebrow is therefore evaluated as a “curve”. That is, in this case the face part attribute measurement result for the subject belongs to either the “straight” side or the “curve” side of the “straight-curve” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map (orthogonal coordinate system) shown in FIG. 6), with the face part attribute reference value of 5 cm as the boundary (origin).
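The correction and threshold comparison just described can be sketched as follows. This is an illustrative reading of the text, not part of the claimed apparatus: the face sizes used in the example are hypothetical, and the behaviour exactly at the reference value is not specified in the text.

```python
def correct(measured, M, N, Mm, Nm):
    """Scale a raw measurement by the correction value M*N/(Mm*Nm),
    which relates the reference face size M x N to the subject's
    measured face size Mm x Nm."""
    return measured * (M * N) / (Mm * Nm)

def classify_straight_curve(radius_a_cm, reference_cm=5.0):
    """Eyebrow arc radius A versus the 5 cm reference value: a larger
    radius means a flatter, more horizontal line ('straight'), a
    smaller radius a tighter arc ('curve')."""
    return "straight" if radius_a_cm > reference_cm else "curve"
```

With the corrected values from the text, a radius of 10 cm falls on the “straight” side and 3 cm on the “curve” side.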
  • For the latter attribute (“thin-thick”), the face part attribute measurement result (measured value) acquired by the evaluation unit 22 is, as shown in FIG., the ratio B (unit: %) of the width of the eyebrow on the eye side to the product M × N (see FIG. 4) of the horizontal and vertical face dimensions, and the face part attribute reference value is set to 0.03%.
  • If the measured value of the width ratio B obtained for the subject by the usual method is, after being multiplied by the correction value (M × N / (Mm × Nm)) as described above, 0.02%, which is smaller than the reference value of 0.03%, it belongs to the “thin” side; if it is 0.05%, which is larger than the reference value of 0.03%, it belongs to the “thick” side. That is, the face part attribute measurement result for the subject belongs to either the “thin” side or the “thick” side of the “thin-thick” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.03% as the boundary (origin).
  • When the selected face part is the eye, the face part attribute “shape/size” has two attributes, “straight (linear)-curve (curved)” and “small-large” (see FIG. 5).
  • For the former attribute (“straight-curve”), the face part attribute measurement result is, as shown in FIG. 11A, the ratio C (unit: %) of the vertical height of the eye to the product M × N (see FIG. 4), and the face part attribute reference value is set to 0.05%.
  • If the measured value of the above ratio C obtained for the subject by the usual method is, after being multiplied by the correction value (M × N / (Mm × Nm)), 0.03%, which is smaller than the reference value of 0.05%, the eye is horizontally long and is therefore evaluated as “straight”; if it is 0.08%, which is larger, the eye has height and is therefore evaluated as a “curve”. That is, the face part attribute measurement result for the subject in this case belongs to either the “straight” side or the “curve” side of the “straight-curve” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.05% as the boundary (origin).
  • For the latter attribute (“small-large”), the face part attribute measurement result is, as shown in FIG., the ratio D (unit: %) of the lateral width of the eye to the product M × N (see FIG. 4), and the face part attribute reference value is set to 0.2%.
  • If the measured value of the eye width ratio D for the subject is, after being multiplied by the correction value (M × N / (Mm × Nm)), 0.18%, which is smaller than the reference value (0.2%), it belongs to the “small” side; if it is 0.25%, which is larger, it belongs to the “large” side. That is, the face part attribute measurement result for the subject in this case belongs to either the “small” side or the “large” side of the “small-large” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2% as the boundary (origin).
  • When the selected face part is the nose, the face part attribute “shape/size” has two attributes, “straight-curve” and “small-large”, as in the case of the eye (see FIG. 5). For the former attribute, the face part attribute measurement result is, as shown in FIG., the radius E (unit: cm) of an arc approximating the swelling of the nose, and the face part attribute reference value is set to 1 cm.
  • The face part attribute measurement result for the subject in this case belongs to either the “straight” side or the “curve” side of the “curve-straight” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 1 cm as the boundary (origin).
  • For the latter attribute, the face part attribute measurement result is, as shown in FIG., the ratio F (unit: %) of the lateral width of the nose to the product M × N (see FIG. 4), and its reference value is set to 0.2%.
  • If the measured value of the ratio F for the subject is, after being multiplied by the correction value (M × N / (Mm × Nm)), 0.18%, which is smaller than the face part attribute reference value of 0.2%, it belongs to the “small” side; if it is 0.25%, it belongs to the “large” side. That is, the face part attribute measurement result for the subject in this case belongs to either the “small” side or the “large” side of the “small-large” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2% as the boundary (origin).
  • When the selected face part is the mouth, the face part attribute “shape/size” has two attributes, “straight-curve” and “small-large”, as in the case of the eye and the nose (see FIG. 5). For the former attribute, the face part attribute measurement result is, as shown in FIG., the ratio G (unit: %) of the vertical height of the mouth to the product M × N (see FIG. 4), and 0.1% is set as the face part attribute reference value.
  • The face part attribute measurement result for the subject in this case belongs to either the “straight” side or the “curve” side of the “curve-straight” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.1% as the boundary (origin).
  • For the latter attribute, the face part attribute measurement result (measured value) is, as shown in FIG., the ratio H (unit: %) of the lateral width of the mouth to the product M × N (see FIG. 4), and the reference value is set to 0.25%. If the measured value of the ratio H for the subject is, after being multiplied by the correction value (M × N / (Mm × Nm)), 0.23%, which is smaller than the face part attribute reference value of 0.25%, it belongs to the “small” side; if it is 0.3%, it belongs to the “large” side. That is, the face part attribute measurement result for the subject in this case belongs to either the “small” side or the “large” side of the “small-large” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.25% as the boundary (origin).
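The eyebrow, eye, nose, and mouth comparisons above all follow the same pattern, so the reference values quoted in the text can be collected into a single table. The sketch below is illustrative only; the side assignments for the nose radius E and the mouth height ratio G are not stated in the text and are assumed here by analogy with the eyebrow and the eye.

```python
# (face part, axis) -> (reference value, label if below, label if above)
REFERENCE_TABLE = {
    ("eyebrow", "straight-curve"): (5.0,  "curve",    "straight"),  # radius A (cm)
    ("eyebrow", "thin-thick"):     (0.03, "thin",     "thick"),     # ratio B (%)
    ("eye",     "straight-curve"): (0.05, "straight", "curve"),     # ratio C (%)
    ("eye",     "small-large"):    (0.2,  "small",    "large"),     # ratio D (%)
    ("nose",    "straight-curve"): (1.0,  "curve",    "straight"),  # radius E (cm, side assumed)
    ("nose",    "small-large"):    (0.2,  "small",    "large"),     # ratio F (%)
    ("mouth",   "straight-curve"): (0.1,  "straight", "curve"),     # ratio G (%, side assumed)
    ("mouth",   "small-large"):    (0.25, "small",    "large"),     # ratio H (%)
}

def evaluate(face_part, axis, corrected_value):
    """Compare a size-corrected measurement with its reference value
    and return the side of the axis it falls on."""
    reference, below, above = REFERENCE_TABLE[(face_part, axis)]
    return above if corrected_value > reference else below
```

For example, a corrected eyebrow radius of 10 cm evaluates as “straight”, and a corrected mouth width ratio of 0.23% evaluates as “small”, matching the worked cases in the text.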
  • When the selected face part is the skin and the face part attribute is color, the evaluation unit 22 compares the face part attribute measurement result with the reference value of the color value called from the storage unit 3 (described later). In this case, the face part attribute includes two attributes, “yellow (tending)-blue (tending)” and “bright-dark” (see FIG. 5).
  • The face part attribute measurement result is a color value obtained by measuring a part representing the skin color by a conventional method, and the reference value is likewise given as a CIE Lab color value. In the following, an example using CIE Lab color values is described; however, the present invention is not limited to this, and other color values such as Hunter Lab, RGB, CMYK, XYZ, or Lch can also be used.
  • The face part attribute measurement result for the subject in this case belongs to either the “yellow” side or the “blue” side of the “yellow-blue” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value as the boundary (origin).
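Using CIE Lab, the two skin-color axes above reduce to threshold comparisons: the b* component runs blue-to-yellow and L* runs dark-to-bright. The text does not quote numeric color reference values, so the reference values in this sketch are hypothetical placeholders.

```python
def classify_skin_color(lab, ref_b=15.0, ref_L=60.0):
    """Classify a CIE Lab skin colour on the two colour axes of FIG. 5.
    ref_b and ref_L are hypothetical reference values (not from the
    text). In CIE Lab, larger b* leans yellow and larger L* is
    lighter, so each axis is a simple threshold comparison."""
    L, a, b = lab
    yellow_blue = "yellow" if b > ref_b else "blue"
    bright_dark = "bright" if L > ref_L else "dark"
    return yellow_blue, bright_dark
```

A warm, light skin tone such as Lab (70, 10, 20) would fall on the “yellow” and “bright” sides under these placeholder references.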
  • Similarly, when the selected face part is the hair or the pupil and the face part attribute is color, the evaluation unit 22 compares the measurement result with the reference value of the color value (face part attribute reference value) called from the storage unit 3 (described later). In this case as well, the face part attribute includes two attributes, “yellow (tending)-blue (tending)” and “bright-dark” (see FIG. 5).
  • The face part attribute measurement result for the subject in this case belongs to either the “yellow” side or the “blue” side of the “yellow-blue” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value as the boundary (origin).
  • When the selected face part is the eye and the face part attribute is color, the evaluation unit 22 obtains the difference in color value between the white part and the black part and compares it with the reference value of the color-value difference called from the storage unit. If the difference is larger than the face part attribute reference value, the eye is evaluated as sharply contrasting; if it is smaller, it is evaluated as gentle. When the evaluation is sharp, the face part attribute measurement result belongs to the first and third quadrants, which enclose the first 45-degree diagonal virtual axis (contrast tendency) passing through the origin of the orthogonal coordinate system shown in FIG. 6. When the evaluation is gentle, it belongs to the second and fourth quadrants, which enclose the second 45-degree diagonal virtual axis (gradation tendency) passing through the origin of the orthogonal coordinate system.
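The white/black color difference evaluation can be sketched as below. The quadrant sets follow the text; the use of the CIE76 color difference as the distance metric and the numeric reference difference are assumptions, since the text only speaks of a “difference in color value”.

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in Lab space.
    One conventional choice; the text does not fix the metric."""
    return math.dist(lab1, lab2)

def eye_contrast_quadrants(white_lab, black_lab, reference_diff=40.0):
    """A large white/black difference is evaluated as sharp contrast
    (quadrants I and III, contrast tendency); a small one as gentle
    (quadrants II and IV, gradation tendency). reference_diff is a
    hypothetical face part attribute reference value."""
    if delta_e(white_lab, black_lab) > reference_diff:
        return {1, 3}   # contrast tendency
    return {2, 4}       # gradation tendency
```

A bright white next to a dark iris yields a large difference and the contrast quadrants; two mid-tone values yield the gradation quadrants.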
  • Similarly, when the selected face parts are the skin and the hair and the face part attribute is color, the evaluation unit 22 obtains the difference in color value between the skin part and the hair part and compares it with the reference value of the color-value difference called from the storage unit. If the difference is larger than the face part attribute reference value, the combination is evaluated as sharply contrasting; if it is smaller, it is evaluated as gentle.
  • When the selected face part is the skin, the face part attribute “texture” has two attributes, “thin-thick” and “matte-glossy” (see FIG. 5).
  • For the former attribute, the evaluation unit 22 compares the measured thickness of the epidermis in the skin part with the reference value of the epidermis thickness called from the storage unit 3. Specifically, the evaluation unit 22 acquires the thickness of the epidermis (unit: mm), and the epidermis thickness reference value (face part attribute reference value) called from the storage unit 3 is set to 0.2 mm. If the thickness of the epidermis measured for the subject is 0.1 mm, it is smaller than the reference value of 0.2 mm and is therefore evaluated as “thin”; conversely, if it is larger than the reference value of 0.2 mm, it is evaluated as “thick”. That is, the face part attribute value belongs to either the “thin” side or the “thick” side of the “thin-thick” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2 mm as the boundary (origin).
  • For the latter attribute, the evaluation unit 22 compares the measured skin oil amount for the subject with the skin oil amount reference value called from the storage unit 3. In this case, the reference value of the skin oil amount (face part attribute reference value) is 12%. If the subject's measured skin oil amount (face part attribute measurement result) is 5%, it is lower than the reference value of 12%, so the skin is matte; conversely, if the measured skin oil amount is 20%, it is higher than the reference value of 12%, so the skin is glossy.
  • When the evaluation is matte, the face part attribute measurement result in this case belongs to the second and fourth quadrants, which enclose the second 45-degree diagonal virtual axis (gradation tendency) passing through the origin of the orthogonal coordinate system of FIG. 6. When the evaluation is glossy, it belongs to the first and third quadrants, which enclose the first 45-degree diagonal virtual axis (contrast tendency) rising through the origin of the orthogonal coordinate system shown in FIG. 6.
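The matte/glossy decision uses the 12% reference value and quadrant assignments stated in the text, so it can be written directly as a small sketch (illustrative only; ties at exactly 12% are not specified in the text and are treated as glossy here):

```python
def skin_finish_quadrants(oil_percent, reference=12.0):
    """Skin oil amount versus the 12% reference value from the text:
    below the reference -> matte (quadrants II and IV, gradation
    tendency); at or above -> glossy (quadrants I and III, contrast
    tendency)."""
    if oil_percent < reference:
        return "matte", {2, 4}
    return "gloss", {1, 3}
```

The two worked cases from the text, 5% and 20%, land on the matte and glossy sides respectively.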
  • When the selected face part is the hair, the face part attribute “texture” has two attributes, “flat-uneven” and “thin-thick” (see FIG. 5).
  • For the former attribute, the evaluation unit 22 compares the cross-sectional shape of the hair obtained as the face part attribute measurement result with the reference value of the cross-sectional shape called from the storage unit 3. The reference cross-sectional shape (face part attribute reference value) called from the storage unit 3 is set to an ellipse. The face part attribute value thus lies on the “flat-uneven” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the reference ellipse as the boundary (origin): if the cross section is closer to a perfect circle, it belongs to the “flat” side, and if it is approximately triangular, it belongs to the “uneven” side.
  • For the latter attribute, the evaluation unit 22 compares the hair diameter obtained as the face part attribute measurement result with the reference value of the hair diameter called from the storage unit 3. The diameter reference value (face part attribute reference value) is set to 0.08 mm. If the measured value of the subject's hair diameter is 0.06 mm, it is smaller than the reference value of 0.08 mm and is therefore evaluated as “thin”; conversely, if the measured diameter is 0.1 mm, it is larger than the reference value of 0.08 mm and is therefore evaluated as “thick”. That is, the face part attribute measurement result belongs to either the “thin” side or the “thick” side of the “thin-thick” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6) of the orthogonal coordinate system, with the face part attribute reference value of 0.08 mm as the boundary (origin).
  • The evaluation unit 22 sends the selected face part, the face part attribute, and the comparison result for the face part attribute measurement result to the matching information category specifying unit 23.
  • The matching information category specifying unit 23 is configured to call at least one predetermined tendency for each of the selected face part and face part attribute selection results sent from the evaluation unit 22, to specify, on the basis of the comparison result of the face part attribute measurement results, at least one matching information category associated with each tendency, and to send the specification result to the information selection unit 24.
  • Specifically, the matching information category specifying unit 23 calls the tendency corresponding to the face part attribute from the storage unit 3, specifies, on the basis of the face part attribute measurement result, one of the two matching information categories divided by that tendency, and issues a command to the information selection unit 24 to select the subject information group belonging to the specified matching information category.
  • For example, the matching information category specifying unit 23 calls the cool-warm tendency from the storage unit 3 and, on the styling map of FIG. 6, specifies the second quadrant (aqua taste) and the third quadrant (crystal taste), which are partitioned off by the cool-warm tendency on the horizontal axis, as the matching information category. If the face part is the skin, the face part attribute is the color (light and dark), and the evaluation result in the evaluation unit 22 is “bright”, the matching information category specifying unit 23 calls the light-deep tendency from the storage unit 3 and, on the styling map of FIG. 6, specifies the first quadrant (bright taste) and the second quadrant (aqua taste), which are partitioned off by the light-deep tendency on the vertical axis, as the matching information category.
  • Similarly, if the face part is the skin, the face part attribute is the color (mild-bright), and the evaluation result in the evaluation unit 22 is “gentle”, the matching information category specifying unit 23 calls the gradation-contrast tendency from the storage unit 3.
  • When the matching information category specifying unit 23 is to specify a matching information category for each of at least two of the three tendencies, it can superimpose the matching information categories on the styling map shown in FIG. 6 and specify the overlapping matching information category, thereby narrowing down the categories. As a result, it becomes possible to provide a subject information group that is more precisely suited to the subject than the information included in a matching information category consisting of any two quadrants in the case of a single tendency.
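Since each matching information category is a pair of styling-map quadrants, superimposing categories amounts to intersecting sets of quadrant numbers. The following sketch illustrates the narrowing-down step (the set representation is our choice of encoding, not something the text prescribes):

```python
from functools import reduce

def narrow_down(category_sets):
    """Superimpose several matching information categories, each given
    as a set of styling-map quadrant numbers, and keep only the
    quadrants common to all of them."""
    return reduce(set.intersection, category_sets)
```

For instance, overlaying a category of quadrants {1, 4} with one of quadrants {1, 2} leaves only quadrant 1, mirroring case (A) below.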
  • (A) When two sets of attributes of one face part attribute are selected for one selected face part: for example, in FIG. 6, when the selected face part is the skin and two sets of attributes of the face part attribute “color” are selected, the matching information category specifying unit 23 superimposes the former matching information category, namely the first quadrant (bright taste) and the fourth quadrant, on the latter category, namely the first quadrant (bright taste) and the second quadrant (aqua taste), narrows them down to the first quadrant (bright taste), where the two overlap, and specifies it as the matching information category.
  • Further, when the selected face part is the skin and, in addition to the two sets of attributes of the face part attribute “color”, the attribute “matte” of the face part attribute “texture” is also selected, the matching information category specifying unit 23 superimposes the former categories, the first quadrant (bright taste) and the second quadrant (aqua taste), on the category corresponding to “matte” and narrows down the result further in the same manner.
  • (B) When face part attributes defined for a plurality of selected face parts are selected: for example, one selected face part is the eyebrow with the face part attribute “shape/size”, and the other selected face part is the skin with the face part attribute color (light and dark). In this case, the matching information category specifying unit 23 superimposes the former matching information category, the second quadrant (aqua taste) and the third quadrant (crystal taste), on the latter category, the first quadrant (bright taste) and the second quadrant (aqua taste), and specifies the overlapping second quadrant (aqua taste).
  • The matching information category specifying unit 23 sends the matching information category specified in this way to the next information selection unit 24. When a plurality of matching information categories are specified, the matching information category specifying unit 23 sends them to the information selection unit 24 in ranked order, which has the advantage that the most suitable subject information (group), the next most suitable subject information (group), and so on can be provided to the subject in that order.
  • When a questionnaire is used, the matching information category is specified as follows. The answer data are plotted on the styling map, whose horizontal axis is the cool-warm tendency and whose vertical axis is the light-deep tendency. The matching information categories are then specified by ranking them according to the number of plots in each quadrant, and the specification results are sent to the information selection unit 24 in that ranking order.
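The plot-count ranking above can be sketched as a simple tally. Here each questionnaire answer is assumed to have already been mapped to a styling-map quadrant number; the encoding is illustrative.

```python
from collections import Counter

def rank_categories(plotted_quadrants):
    """Rank the styling-map quadrants by how many questionnaire answers
    were plotted in each, most plots first, and return the quadrant
    numbers in that order for the information selection unit."""
    counts = Counter(plotted_quadrants)
    return [quadrant for quadrant, _ in counts.most_common()]
```

For answers falling in quadrants [1, 2, 2, 3, 2, 1], the ranking is quadrant 2 first, then 1, then 3.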
  • The information selection unit 24 selects, from the subject information groups in the storage unit 3, at least one piece of subject information belonging to the matching information category specified on the basis of the specification result from the matching information category specifying unit 23, and the output unit 7 outputs each selected piece of subject information. For example, the information is displayed on the display 10 or printed by the printer 12. However, the output method is not limited to these, and another method may be used.
  • FIG. 14 is a flowchart showing the overall procedure of the information providing method using the information providing apparatus shown in FIG. 1; FIG. 15 is a flowchart showing the procedure for measuring the face part attribute value when the face part attribute is shape/size; and FIG. 16 is a flowchart showing the procedure for measuring the face part attribute value when the face part attribute is color.
  • First, the subject performs a gender input operation, whereby the information providing apparatus 1 of the present invention acquires the gender from the input terminal 9 (step S1). Next, the arithmetic control unit 2 causes the display 10 to display a list of the face parts and the like, and the subject selects at least one face part (eyes, nose, mouth, skin, hair) using the input terminal 9 (step S2). The selected face part is thereby determined. Subsequently, using the function of the arithmetic control unit 2, the information providing apparatus 1 displays on the display 10 the face part attributes (shape/size, color, quality (texture)) and the attributes shown in FIG. 5 in selectable form, and the subject selects a face part attribute from among them (step S3).
  • Next, the information providing apparatus 1 of the present invention acquires the face part attribute measurement result by either of the following methods (1) and (2) (step S4). Thereafter, the attribute information acquisition unit 21 sends the acquired selection results for the selected face part and face part attribute, together with the face part attribute measurement result, to the evaluation unit 22.
  • The attribute information acquisition unit 21 first acquires face image data (step S21). Specifically, the arithmetic control unit 2 acquires a front image of the subject's face and, if necessary, face image data captured from a predetermined angle, using a digital camera (not shown) or a 3D scanner device, and stores the face image data in the storage unit 3 via the device driver 6. The face image data only needs to be acquired and processed as digital data; it is not limited to images captured by a digital camera or a 3D scanner device and may be obtained in another way.
  • Next, the face contour and the hair portion are recognized from the face image data using a known recognition technique, and the face area is extracted (step S22). Here, the face area is defined as the area of the face excluding the hair portion.
  • Subsequently, the reference face size L of the reference face shape (see FIG. 4) is acquired from the storage unit 3, the face size Lm of the face area is compared with the reference face size L to obtain a correction value, and size matching is performed by enlarging or reducing the face image data by this correction value (step S23).
  • Based on the size-matched face image data, at least one selected face part is extracted in accordance with the selection made by the subject (see step S2 in FIG. 14) (step S24), each dimension defined for each selected face part is measured from the face image data, and the face part attribute measurement result is thereby obtained (step S25). It is also possible to input a face part attribute measurement result for another face part attribute from the input terminal 5 such as a touch panel; for example, when the face part is the eyebrow and the face part attribute is “straight-curve” of the “shape/size”, the radius of the arc of the eyebrow is measured as described above.
  • The flow for 3D scanning data is almost the same as that for the 2D face image data described above. Note that steps S22 and S23 may be performed in the reverse order; that is, it is possible to acquire the face part and face part attribute according to the subject's selection, obtain the face part attribute measurement result, and then multiply the measurement result by the correction value to perform size matching.
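The size-matching step S23 can be sketched as follows: the linear enlargement/reduction factor is the reference face size L over the measured face size Lm, applied to the image dimensions. The pixel dimensions and the rounding choice in this sketch are illustrative assumptions.

```python
def scale_factor(L, Lm):
    """Linear enlargement/reduction factor of step S23: the reference
    face size L divided by the measured face size Lm (same units)."""
    return L / Lm

def resize(width, height, L, Lm):
    """Size-match an image's pixel dimensions to the reference face
    size by scaling both dimensions with the same factor."""
    f = scale_factor(L, Lm)
    return round(width * f), round(height * f)
```

For hypothetical sizes L = 20 and Lm = 16, a 600 x 800 image is enlarged by a factor of 1.25 to 750 x 1000.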
  • In the procedure for measuring color (see FIG. 16), the attribute information acquisition unit 21 acquires the face image data and sends it to the image data analysis unit 25 (step S31). The image data analysis unit 25 recognizes the contour of the face and the hair from the sent face image data and extracts the face area (step S32). When the selected face part is the hair and the face part attribute is its color, the hair region may instead be extracted by recognizing the contour of the face and the hair from the face image data.
  • Next, a color value measurement point is selected from the face area by an appropriate method (step S33). The color value measurement point may be selected at one place or at a plurality of places as long as a representative result is obtained. Then, the color value at each color value measurement point is measured by a known method (step S34). Subsequently, the measured color measurement result (when a plurality of measurement points are used, the average of the color measurement results at the respective points) is stored in the storage unit 3 as the face part attribute value (step S35). A color value can be obtained from 3D scanning data by the same method as described above.
  • In step S5, whether or not another face part is to be acquired is confirmed by an input operation of the subject. If another face part is to be acquired, the process returns to step S2, so that a plurality of face parts can be selected; if not, the process proceeds to step S6.
  • In step S6, whether or not to conduct a questionnaire is determined by a selection operation input by the subject (step S6). When a questionnaire is conducted, the attribute information acquisition unit 21 acquires a plurality of answer data of the questionnaire through the subject's input to the input terminal 9 (step S7). A plurality of items of questionnaire information may be displayed at one time so that the subject selects all the answer sentences in a single pass, or the items (each a set of a question and its answer sentences) may be divided over a plurality of screens so that the answer sentences are selected in several passes. The attribute information acquisition unit 21 stores the acquired answer data in the storage unit 3.
  • Next, the evaluation unit 22 performs the following evaluation process (step S8). That is, the selected face part, the face part attribute selection result, the face part attribute measurement result, and the face part attribute reference value are each called from the storage unit 3, and one or more face part attribute evaluation results are obtained by comparing the face part attribute measurement result with the face part attribute reference value. When a questionnaire has been conducted, the answer data are called from the storage unit 3 and allocated to the four quadrants of bright taste, aqua taste, crystal taste, and artist taste, and the number of answers for each quadrant (category) is counted. The evaluation unit 22 then sends the comparison results and the aggregation results to the matching information category specifying unit 23.
  • When there is one selected face part attribute, the matching information category specifying unit 23 specifies one of the two categories relating to the attributes on either side of at least one face part attribute reference value taken as the boundary under the tendency defined for that face part attribute (a matching information category consisting of two of the four quadrants of the styling map shown in FIG. 6). When two face part attributes defined for one face part are selected by the subject, or when at least two attributes belonging to different tendencies are selected from the face part attributes defined for a plurality of face parts, the matching information category specifying unit 23 superimposes the matching information categories on the styling map shown in FIG. 6 on the basis of the tendency for each face part attribute and the comparison results in the evaluation unit 22, and specifies the category with the largest overlap as the highly compatible matching information category (step S9).
  • The matching information category with the largest number of overlaps is considered to include useful information best suited to the subject. When a questionnaire has been conducted, the matching information category specifying unit 23 also incorporates the aggregation results of the questionnaire for each quadrant of the styling map into the specification result and specifies the categories, in descending order of the number of overlaps, as matching information categories highly compatible with the subject (step S9).
  • Thereafter, the information selection unit 24 selects subject information from the matching information category (step S10) and outputs the selected subject information to the output unit 7 (step S11). The read subject information group is sent to the output unit 7 and can be displayed on the display 10 or printed by the printer 12.
  • As described above, the information providing apparatus and the information providing method according to the present invention provide, on the basis of at least one face part attribute measurement result for at least one selected face part selected from the plurality of face parts constituting the face of the subject (including providers of face part attribute measurement results and face image data), useful styling (product) information suited to the subject (patterns and colors of ties, shirts, and suits, cosmetics for makeup, and the like), information on the subject's personality and customer service type, and other information and advice. Other information and advice include, for example, information about tableware; information about interiors such as furniture and furnishings (colors of floors and walls, etc.); information about graphic designs such as logo colors; information about advertisements, promotions, and product planning (colors of packages, etc.); and information on styling tests.
• According to the information providing apparatus and information providing method of the present invention, when an information group showing statistically high relevance to a face part and face part attribute is obtained, that information group can be classified under the relevant matching information category, included in the subject information groups, and used for subsequent information provision to subjects.
• Conversely, when a face part attribute statistically highly related to a subject information group stored in the storage unit of the information providing apparatus is confirmed, that face part attribute can be additionally stored in the face part information of the storage unit in association with a tendency or a matching information category, and utilized for subsequent information provision to subjects.
• In the implementation stage, the constituent elements can be modified without departing from the gist of the invention.
• Various inventions can be formed by appropriately combining the plurality of constituent elements disclosed in the embodiments. For example, some constituent elements may be removed from all those shown in an embodiment, or constituent elements from different embodiments may be combined as appropriate.
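The per-quadrant tallying and selection described in steps S9-S11 above can be sketched as follows. This is an illustrative reconstruction rather than the patented implementation; the quadrant labels, data shapes, and function names are assumptions.

```python
from collections import Counter

def identify_matching_categories(quadrant_hits):
    """Step S9 (sketch): given one styling-map quadrant per identification
    result (e.g. per questionnaire answer), rank quadrants as matching
    information categories in descending order of duplicate count."""
    tally = Counter(quadrant_hits)
    # most_common() orders quadrants by descending number of duplicates
    return [quadrant for quadrant, _ in tally.most_common()]

def select_subject_info(categories, subject_info_groups, limit=3):
    """Step S10 (sketch): select subject information from the best-matching
    category; the caller then hands it to the output unit (step S11)."""
    best = categories[0]
    return subject_info_groups.get(best, [])[:limit]

# Illustrative data: six identification results and two information groups
hits = ["I", "II", "I", "IV", "I", "II"]
groups = {"I": ["bright-toned tie", "light suit"], "II": ["aqua shirt"]}
ranked = identify_matching_categories(hits)
print(ranked)                               # ['I', 'II', 'IV']
print(select_subject_info(ranked, groups))  # ['bright-toned tie', 'light suit']
```

Quadrant "I" appears three times, so it is ranked first and its information group is the one selected for output.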

Abstract

[Problem] To provide an information provision device and information provision method which, using evaluation results obtained for a subject's face region attributes, are capable of promptly providing the subject with at least one item of useful and accurate information (hereinafter, "subject information"), such as styling that matches the subject. [Solution] The information provision device and information provision method: compare at least one face region attribute measurement result, acquired for at least one face region selected from those constituting the subject's face, with its reference value; on the basis of the comparison result and at least one tendency defined for the face region attribute, identify a matching information category; select, from among a plurality of subject information groups, a subject information group belonging to the matching information category; and output it.

Description

Information providing apparatus and information providing method
The present invention relates to an information providing apparatus and an information providing method, and more particularly to an information providing apparatus and an information providing method that provide, from a plurality of systematically classified and accumulated information groups such as styling information, an information group suited to a subject, on the basis of measurement data and input data on attributes such as the shape and size of the parts of the subject's face.
Recently, stylists coordinate costumes, clothing, accessories, and small articles suited to a person (hereinafter, the "subject") and to the occasion, not only for movie actors and television performers but also for individuals who want to improve their image, for example when having personal photographs such as PR photographs taken. After observing the subject's face parts and face part attributes as a whole, such as face shape, skin color, hair color, skin quality, and hair quality, and interviewing the subject, the stylist provides advice and information (subject information) suited to the subject, drawing on styling information and knowledge classified and organized from the stylist's own experience.
However, although such advice and information from a stylist must be accurate, in practice it depends largely on the stylist's subjectivity; moreover, shortcomings in the stylist's observational ability, experience, and knowledge have meant that styling advice and information suited to the subject cannot always be provided appropriately.
To solve this problem and provide objective and accurate advice and information, it is necessary to build a system that manages styling and other information using a computer. In recent years, proposals have been made concerning computer-based entertainment such as fortune telling (see, for example, Patent Document 1) and concerning makeup in the beauty field (see, for example, Patent Document 2). However, no system yet exists that uses a computer to provide advice or useful information on a subject's overall styling.
Patent Document 1: JP 2005-352803 A
Patent Document 2: JP 2014-149841 A
In view of the above circumstances, an object of the present invention is to provide an information providing apparatus and an information providing method capable of promptly providing a subject with at least one item of useful and accurate information suited to the subject, such as styling (hereinafter, "subject information"), using evaluation results obtained for the subject's face part attributes.
According to one aspect of the present invention, the above object is achieved by an information providing apparatus comprising at least an attribute information acquisition unit, a storage unit, an evaluation unit, a matching information category identification unit, an information selection unit, and an output unit, wherein: the attribute information acquisition unit acquires at least one selected face part chosen from among the face parts constituting the subject's own face, including the eyebrows, eyes, nose, mouth, skin, and hair, together with a face part attribute defined for the selected face part, including shape/size, color, and texture, and a measurement result for that face part attribute; the storage unit stores a plurality of items of face part information, a plurality of tendencies, a plurality of matching information categories, and a plurality of subject information groups belonging to each matching information category; each item of face part information includes at least one face part attribute defined in advance for each face part and at least one face part attribute reference value serving as a standard for that attribute; each of the plurality of tendencies is related to a face part attribute, is defined as a linear axis containing the face part attribute reference value of that attribute, and is defined so as to classify the plurality of subject information groups into at least two matching information categories on either side of the at least one face part attribute reference value on that axis; the evaluation unit acquires the selected face part and the face part attribute measurement result through the attribute information acquisition unit, compares the measurement result with the face part attribute reference value retrieved from the storage unit on the basis of the selected face part, and sends the comparison result to the matching information category identification unit; the matching information category identification unit retrieves from the storage unit at least one tendency related to each face part attribute, and extracts and identifies at least one matching information category associated with each such tendency on the basis of the comparison result; the information selection unit selects at least one item of subject information belonging to the identified matching information category from the subject information groups in the storage unit; and the output unit outputs each item of subject information thus selected.
According to another aspect of the present invention, the above object is also achieved by an information providing method comprising: an attribute information acquisition step of acquiring at least one selected face part chosen by the subject from among the face parts constituting the subject's own face, including the eyebrows, eyes, nose, mouth, skin, and hair, at least one face part attribute selected from shape/size, color, and texture for that face part, and a predetermined face part attribute measurement result defined for that face part attribute; an evaluation step of receiving the selected face part and the face part attribute measurement result as input, comparing the measurement result with the face part attribute reference value retrieved from a storage unit on the basis of the selected face part, and passing the comparison result to the matching information category identification step; a matching information category identification step of retrieving from the storage unit at least one tendency related to each face part attribute and defined as a linear axis, and extracting and identifying at least one matching information category associated with each such tendency on the basis of the comparison result; a subject information selection step of selecting at least one item of subject information belonging to the identified matching information category from the subject information groups in the storage unit; and an output step of outputting each item of subject information thus selected.
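The steps of the method above can be mapped onto plain functions as a concrete, non-normative sketch. All names, the single reference value per attribute, the numeric scale, and the two-category split are illustrative assumptions, not part of the claims.

```python
# Illustrative stand-ins for the storage unit's contents
FACE_PART_INFO = {
    # (face part, attribute) -> face part attribute reference value
    ("eyebrow", "shape"): 0.5,   # assumed scale: 0 = straight ... 1 = curved
}
TENDENCIES = {("eyebrow", "shape"): "cool-warm"}
CATEGORIES = {
    ("cool-warm", "below"): "cool",          # straight side of the axis
    ("cool-warm", "at_or_above"): "warm",    # curved side of the axis
}
SUBJECT_INFO = {
    "cool": ["sharp, blue-toned styling"],
    "warm": ["soft, yellow-toned styling"],
}

def evaluate(part, attribute, measurement):
    """Evaluation step: compare the measurement with the reference value."""
    reference = FACE_PART_INFO[(part, attribute)]
    return "at_or_above" if measurement >= reference else "below"

def identify_category(part, attribute, comparison):
    """Matching information category identification step."""
    tendency = TENDENCIES[(part, attribute)]
    return CATEGORIES[(tendency, comparison)]

def provide_information(part, attribute, measurement):
    """Acquisition -> evaluation -> identification -> selection; the
    return value is what the output step would display or print."""
    comparison = evaluate(part, attribute, measurement)
    category = identify_category(part, attribute, comparison)
    return SUBJECT_INFO[category]

print(provide_information("eyebrow", "shape", 0.8))  # ['soft, yellow-toned styling']
```

A curved-eyebrow measurement (0.8, above the 0.5 reference) lands in the warm category, matching the curved/warm association described later in the specification.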
The above object is also achieved by a program for causing a computer to execute the information providing method of the present invention.
According to the information providing apparatus and information providing method of the present invention, a subject using the apparatus selects at least one of his or her own face parts and face part attributes and inputs at least one face part attribute measurement result for the selected face part into the apparatus; on the basis of the comparison between the face part attribute measurement result and the corresponding face part attribute reference value, at least one matching information category associated with at least one tendency is identified from among the plurality of matching information categories into which the plurality of subject information groups are classified, and subject information belonging to that matching information category can be output. This makes it possible to promptly provide the subject, by computer, with accurate subject information suited to the subject.
FIG. 1 is a block diagram schematically showing an example of the hardware configuration of an embodiment of the information providing apparatus of the present invention.
FIG. 2 is a block diagram schematically showing an example of the functional configuration of the arithmetic control unit shown in FIG. 1.
FIG. 3 is a block diagram schematically showing an example of the storage unit shown in FIG. 1.
FIG. 4 is a diagram for explaining the reference face shape and reference face size.
FIG. 5 is a table showing an example of face part attributes and their relationship with tendencies.
FIG. 6 is a diagram showing an example of an orthogonal coordinate system (styling map) representing the relationship between face part attributes and tendencies on the one hand and matching information categories on the other.
FIG. 7 is a table for explaining an example of the relationship between tendencies and matching information categories.
FIG. 8 is a table showing the relationship between matching information categories and subject information groups.
FIG. 9 is a table showing an example of questionnaire information.
FIG. 10 is a conceptual diagram for explaining the measurement items for the face part attributes of the eyebrows.
FIG. 11 is a conceptual diagram for explaining the measurement items for the face part attributes of the eyes.
FIG. 12 is a conceptual diagram for explaining the measurement items for the face part attributes of the nose.
FIG. 13 is a conceptual diagram for explaining the measurement items for the face part attributes of the mouth.
FIG. 14 is a flowchart showing an example of the procedure of an information providing method using the information providing apparatus shown in FIG. 1.
FIG. 15 is a flowchart showing the procedure for obtaining a face part attribute measurement result when the face part attribute is shape/size.
FIG. 16 is a flowchart showing the procedure for obtaining a face part attribute measurement result when the face part attribute is color.
Information Providing Apparatus of the Present Invention
An embodiment of the information providing apparatus of the present invention will be described in detail with reference to the attached FIGS. 1 to 13.
[Hardware Configuration Example]
FIG. 1 is a block diagram schematically showing an example of the hardware configuration of the present embodiment, FIG. 2 is a block diagram schematically showing an example of the functional configuration of the arithmetic control unit shown in FIG. 1, and FIG. 3 is a block diagram schematically showing the storage unit shown in FIG. 1. The information providing apparatus 1 of the present embodiment stores in advance information useful to the subject, such as statistically accumulated styling information (hereinafter this information is referred to as "subject information", and a plurality of items of subject information may be referred to as a "subject information group"). On the basis of measurement data and input data (hereinafter, "face part attribute values") on attributes (hereinafter, "face part attributes") such as the shape/size, color, and texture of parts of the subject's face (hereinafter, "face parts") such as the eyes, nose, mouth, eyebrows, and skin, the apparatus extracts information specific to the individual subject and provides at least one item of subject information suited to the subject from the plurality of subject information groups.
The information providing apparatus 1 includes an arithmetic control unit 2, a storage unit 3, an input unit 5, an output unit 7, and an internal bus 4. Examples of the information providing apparatus 1 include a general-purpose personal computer, a tablet terminal, and a smartphone. These may be stand-alone terminals, or may be client terminals connected to a server by telecommunication means through a communication unit (not shown) in a client-server system. In the latter case, the storage unit 3 includes a storage unit (not shown) provided on the server side. The internal bus 4 interconnects the other units, namely the arithmetic control unit 2, the storage unit 3, the input unit 5, and the output unit 7.
The arithmetic control unit 2 performs various kinds of control over each unit by means of a CPU or the like. In hardware terms, the arithmetic control unit 2 is an arithmetic control device including a CPU, an MPU, a ROM, and the like. After the information providing apparatus of the present invention is powered on, the arithmetic control unit 2 executes a startup program stored in the ROM, loads the operating system (OS), various processing drivers, the program for executing the information providing method of the present invention, and various data stored in the storage unit 3 into the RAM serving as the main memory, and outputs display information expanded in the RAM to the display 10 of the output unit 7 or to the printer 11. As shown in FIG. 2, the arithmetic control unit 2 in the present embodiment comprises the following functional units: an attribute information acquisition unit 21 including an image data analysis unit 25, an evaluation unit 22, a matching information category identification unit 23, and an information selection unit 24. Each of these functional units will be described later.
The program and various data for executing the information providing method of the present invention may be stored on a recording medium such as a CD-R or DVD-R; in that case, they can be stored in the storage unit 3 from the recording medium as necessary in accordance with instructions from the arithmetic control unit 2.
The storage unit 3 temporarily or permanently stores the programs executed by the arithmetic control unit 2 and various data. The storage unit 3 comprises volatile memory such as various kinds of RAM, together with secondary storage devices such as magnetic disks (e.g., a hard disk drive (HDD)), optical disks, and nonvolatile memory. As shown in FIG. 3, the storage unit 3 stores in advance a reference face shape (size), a plurality of items of face part information, a plurality of tendencies, a plurality of matching information categories, a plurality of subject information groups, and a plurality of items of questionnaire information.
(Reference Face Shape (Size))
The storage unit 3 stores a two-dimensional and/or three-dimensional reference face shape, obtained as a standard face shape by statistically processing a plurality of human face shapes accumulated in the past, together with its reference face size. FIG. 4 shows an example of a two-dimensional reference face shape, in which the reference face size is indicated. Here, the reference face size means a dimension, or a product of dimensions, of the reference face shape as defined in any of (1) to (4) below (see FIG. 4):
(1) the vertical distance L between the hairline on the mid-sagittal plane and the tip of the chin (referred to as the "frontal vertical distance"; distance L in FIG. 4);
(2) the distance M between the cheek-side hairlines of the left and right sideburns (referred to as the "distance between the left and right sideburns"; distance M in FIG. 4);
(3) the vertical distance N between the upper hairline of the left and right eyebrows and the tip of the chin (distance N in FIG. 4);
(4) the product (area) M·N of the above M and N.
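For illustration, the four reference-face-size definitions in (1) to (4) reduce to returning the measured distances and their product; the function name and the example values are assumptions, not figures from the specification.

```python
def reference_face_sizes(L, M, N):
    """Return the four reference face sizes defined in (1)-(4).
    L: frontal vertical distance (hairline to chin tip, mid-sagittal plane)
    M: distance between the left and right sideburn hairlines
    N: vertical distance from the upper eyebrow hairline to the chin tip
    """
    return {
        "L": L,        # (1) frontal vertical distance
        "M": M,        # (2) distance between the sideburns
        "N": N,        # (3) eyebrow hairline to chin tip
        "M*N": M * N,  # (4) product (area) of M and N
    }

# Illustrative measurements in centimetres
sizes = reference_face_sizes(L=18.0, M=14.0, N=13.0)
print(sizes["M*N"])  # 182.0
```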
(Face Part Information)
The face part information consists of sets of a face part, at least one face part attribute defined in advance for that face part, and at least one reference value defined in advance for each such attribute (hereinafter, a "face part attribute reference value"). FIG. 5 shows an example of the face part information stored in the storage unit 3 of the present embodiment. As shown in this figure, the face parts include parts of the face such as the eyes, nose, mouth, eyebrows, and skin, as well as the hair. The face parts also include combinations of at least two of these parts. In the present embodiment, three kinds of face part attribute are defined: the shape/size, color, and quality (texture) of the face part. Face part attributes may be defined additionally as measurement technology advances, and are not limited to these three kinds. The face part attribute reference values may likewise change with future advances in measurement technology, and can therefore be updated as appropriate. A face part attribute reference value is mainly expressed quantitatively as a numerical value, but may also be expressed qualitatively in non-numerical terms.
As shown in FIG. 5, each face part attribute further includes, as a rule, two attributes. For example, when the face part is the eyebrows and the face part attribute is shape/size, a "straight-curved" attribute (whether the eyebrows are straight or curved) and a "thin-thick" attribute (whether they are thin or thick) are defined. When the face part is the pupils and the face part attribute is color, a "yellow-blue" attribute (whether the pupils are yellowish or bluish) and a "light-dark" attribute (whether the color is light or dark) are defined. When the face part is the skin and the face part attribute is texture, a "thin-thick" attribute (whether the epidermis is thin or thick) and a "matte-glossy" attribute (whether the skin is glossy or not) are defined.
On the other hand, when the face parts are the skin and hair and the face part attribute is color, and when the face part is the eyes and the face part attribute is color, each exceptionally includes only one attribute. The former is a "gradation-contrast" attribute indicating whether the color difference between the skin color and the hair is relatively smaller or larger than the face part attribute reference value; the latter is a "gentle-vivid" attribute indicating whether the saturation of the white and iris portions of the eye is relatively larger or smaller than the reference value (face part attribute reference value).
(Tendencies)
FIG. 5 also shows the relationship between face part attributes and tendencies. As shown in this figure, the face part attributes are classified into three tendencies: the "straight-curved", "yellow-blue", and "flat-uneven" attributes correspond to the cool-warm tendency; the "thin-thick", "small-large", and "pale-deep" attributes correspond to the light-deep tendency; and the "gradation-contrast" and "gentle-vivid" attributes correspond to the gradation-contrast tendency. Each tendency is thus related to the face part information; it is named after the impression or image given by the two polar attributes of the corresponding face part attributes, and is expressed as a linear axis (scale axis) serving as an index of those polar attributes. The face part attribute reference value lies at an arbitrary intermediate position on this line. The attributes of each face part attribute are not limited to those above: if future analysis and study reveals further attributes with a statistically significant association with the subject information groups, such attributes may replace or supplement the current ones. Likewise, the present invention is not limited to these three tendencies: it goes without saying that tendencies may be added or changed if repeated statistical processing yields findings on tendencies more strongly related to the face part attributes and the subject information groups.
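The classification above amounts to a fixed mapping from polar attribute pairs to the three tendencies, which can be written as a small lookup table. The key and value strings are assumed English renderings of the terms in FIG. 5, chosen for illustration.

```python
# Assumed attribute-pair names; the grouping follows the text above
ATTRIBUTE_TO_TENDENCY = {
    "straight-curved":    "cool-warm",
    "yellow-blue":        "cool-warm",
    "flat-uneven":        "cool-warm",
    "thin-thick":         "light-deep",
    "small-large":        "light-deep",
    "pale-deep":          "light-deep",
    "gradation-contrast": "gradation-contrast",
    "gentle-vivid":       "gradation-contrast",
}

def tendency_for(attribute_pair):
    """Look up the tendency (linear scale axis) for a polar attribute pair."""
    return ATTRIBUTE_TO_TENDENCY[attribute_pair]

print(tendency_for("yellow-blue"))  # cool-warm
```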
(Subject Information Groups)
The subject information groups are collections of information recognized, through statistical processing, as being very highly related to the face part information of people in the past, gathered in association with each tendency and, more specifically, with each of the two polar attributes of each tendency. The subject information groups include styling (product) information suited to the subject (tie, shirt, and suit patterns and colors, etc.), information on the subject's personality (calm, energetic, etc.), information on customer-service style (approach the customer early, draw out their story, explain proactively, etc.), and other information and advice. Other information and advice include, for example, information on tableware; information on interiors such as furniture and interior finishes (colors of floors and walls, etc.); information on graphic design such as logo colors; information on advertising, promotion, and product planning (package colors, etc.); and teaching materials for styling certification examinations. These subject information groups can be arranged one-dimensionally along each tendency according to the strength of the corresponding attribute.
(適合情報カテゴリー)
(Matching information category)
The subject information group is classified, according to any one tendency, into at least two categories related to the attributes at the two poles, with at least one face part attribute reference value as the boundary. Hereinafter, each such category is referred to as a "matching information category". When there is a single face part attribute reference value, the subject information group is divided into two matching information categories, one for each pole, with that reference value as the boundary. For example, for the cool-warm tendency, the subject information group is divided into a matching information category for attributes that give a cool impression, such as straight lines and blue tones (the cool category), and a matching information category for attributes that give a warm impression, such as curved lines and yellow tones (the warm category). For the light-deep tendency, it is divided into a category for attributes that give a light (airy) impression, such as bright, small and light (the light category), and one for attributes that give a deep (weighty) impression, such as dark, large and heavy (the deep category). For the gradation-contrast tendency, it is divided into a category for gradated attributes that give a gentle color impression, such as soft, blending and matte (the gradation category), and one for contrasting attributes that give a vivid impression, such as vivid, conspicuous and glossy (the contrast category).
When there are two or more face part attribute reference values, the subject information group is divided into a matching information category at or above the largest reference value, one at or above the second reference value and below the largest, one at or above the third reference value and below the second, and so on.
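The multi-threshold partitioning just described can be sketched as follows. This is a minimal illustration only; the function name and the example threshold values in the usage note are assumptions, not values taken from the embodiment.

```python
import bisect

def category_index(measurement, reference_values):
    """Index of the matching information category for a measurement.

    With sorted reference values [r1, r2, ...], a measurement below r1
    falls into category 0, one at or above r1 but below r2 into
    category 1, and so on ("at or above" follows the text's wording).
    """
    return bisect.bisect_right(sorted(reference_values), measurement)
```

With a single reference value this reduces to the two-pole split described above for the cool-warm, light-deep and gradation-contrast tendencies.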
Next, the relationship between at least two different tendencies among the three tendencies and the matching information categories is described for the case where at least two such tendencies are obtained, for example when the subject selects a plurality of face parts and face part attributes. For simplicity, each tendency is assumed to include a single face part attribute reference value. FIG. 6 shows the relationship between face part attributes and tendencies on the one hand and the matching information categories on the other, as an example of an orthogonal coordinate system in which, of the three tendencies, the cool-warm tendency is plotted on the horizontal axis and the light-deep tendency on the vertical axis. In addition, virtual axes of the gradation-contrast tendency are provided in the diagonal 45° upward-right and downward-right directions. In this orthogonal coordinate system, the cool-warm tendency on the horizontal axis and the light-deep tendency on the vertical axis each take their face part attribute reference value as the origin and are arranged so as to intersect each other at that origin. Although not shown, the subject information group is arranged two-dimensionally in this coordinate system, for example according to the strength of each attribute relative to the origin.
In the following, this orthogonal coordinate system is referred to as the "styling map", and in particular the first quadrant (I) is called "bright taste", the second quadrant (II) "aqua taste", the third quadrant (III) "crystal taste", and the fourth quadrant (IV) "earth taste".
In this styling map, the cool-warm tendency on the horizontal axis makes the first quadrant (bright taste) and the fourth quadrant (earth taste) the warm category, and the second quadrant (aqua taste) and the third quadrant (crystal taste) the cool category. The light-deep tendency on the vertical axis makes the first quadrant (bright taste) and the second quadrant (aqua taste) the light category, and the third quadrant (crystal taste) and the fourth quadrant (earth taste) the deep category. Also, as shown in FIG. 6, the gradation-contrast tendency makes the first quadrant (bright taste) and the third quadrant (crystal taste), which contain the 45° upward-right virtual axis through the origin, the contrast category, and the second quadrant (aqua taste) and the fourth quadrant (earth taste), which contain the 45° downward-right virtual axis through the origin, the gradation category. This styling map is used here because it is well suited to explaining the relationship between each tendency and the matching information categories in the present embodiment; however, that relationship can also be expressed using other schemes or structures, and is not limited to the use of a styling map.
FIG. 7 summarizes, in table form, the relationship between the matching information categories classified by the three tendencies shown in FIG. 5 and each of the first to fourth quadrants shown in FIG. 6. As the table shows, each matching information category of each tendency consists of a combination of two of the four quadrants. Conversely, each of the four quadrants forms part of three of the six matching information categories. That is, the "bright taste" of the first quadrant is part of the light, warm and contrast matching information categories; the "aqua taste" of the second quadrant is part of the light, cool and gradation categories; the "crystal taste" of the third quadrant is part of the cool, deep and contrast categories; and the "earth taste" of the fourth quadrant is part of the deep, warm and gradation categories.
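The quadrant assignments of FIG. 6 and the category memberships of FIG. 7 can be sketched as a simple lookup, assuming each tendency is expressed as a signed score relative to its reference value (the origin). The function and variable names, and the sign convention, are illustrative assumptions.

```python
# Sign convention (an assumption): positive cool-warm means warm,
# positive light-deep means light, matching the axes of FIG. 6.
def taste(cool_warm, light_deep):
    if light_deep > 0:
        return "bright taste" if cool_warm > 0 else "aqua taste"
    return "earth taste" if cool_warm > 0 else "crystal taste"

# Each quadrant (taste) is part of three of the six matching
# information categories, as tabulated in FIG. 7.
CATEGORY_MEMBERSHIP = {
    "bright taste":  {"light", "warm", "contrast"},
    "aqua taste":    {"light", "cool", "gradation"},
    "crystal taste": {"deep", "cool", "contrast"},
    "earth taste":   {"deep", "warm", "gradation"},
}
```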
FIG. 8 shows the relationship between each quadrant and the subject information group. In this figure, information on customer service type, product (styling) and the subject's personality is listed as the subject information group, but the group is not limited to these. As shown in FIG. 8, the subject information group is classified by matching information category. For example, when the matching information category is bright taste, typical entries are "items with a casual feel" as products, "cheerful and lively" as personality, and "approach the customer early" as customer service type. When the category is aqua taste, typical entries are "items pleasant to the touch" as products, "calm and gentle" as personality, and "draw the customer out in conversation" as customer service type. When the category is crystal taste, typical entries are "glossy items" as products, "energetic and quick to act" as personality, and "respond immediately with brisk movements" as customer service type. When the category is earth taste, typical entries are "items with depth" as products, "composed and deliberate" as personality, and "explain step by step" as customer service type.
(Questionnaire information)
The storage unit 3 in the present embodiment can further store questionnaire information consisting of sets of a plurality of questions for learning the subject's personality and behavior patterns and a plurality of answer sentences for each question. An example of this questionnaire information is shown in FIG. 9. As the figure shows, the answer sentences for each question are provided so as to correspond to "bright taste", "aqua taste", "crystal taste" and "earth taste", respectively. For the question "Personality (your personality)", for example, answer sentences such as "calm and gentle, likes helping people", "energetic, decides and acts quickly, values goals and results", "cheerful and lively, values one's own sense and sensibility", and "composed, thinks everything through before acting" are prepared. The questionnaire information is thus configured so that the subject selects from these the answers that seem to apply to him or her.
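A simple way to aggregate such questionnaire answers is a majority count over the taste labels attached to the selected answer sentences. This tallying scheme is an assumption for illustration, not a procedure stated in the embodiment.

```python
from collections import Counter

def dominant_taste(selected_answer_labels):
    """Most frequent taste label among the subject's chosen answers."""
    return Counter(selected_answer_labels).most_common(1)[0][0]
```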
The input unit 5 in the present embodiment includes an input terminal 9 and a device driver 6. Information entered from the input terminal 9, the device driver 6 and the like is stored in the storage unit 3 via the internal bus 12. The input terminal 9 is a device with which its operator (usually, but not necessarily, the subject) enters the selection results for face parts and face part attributes and selects answer sentences (from multiple choices) in the questionnaire information described later. Here, the subject selects face parts and face part attributes with reference to the combinations of face part attributes shown in FIG. 5. Combining at least two attributes in this way makes it possible, as described later, to narrow down the matching information categories to better suit the subject, and thus to provide the subject with more appropriate subject information. The input unit 5 functions mainly as the attribute information acquisition unit 21 (see FIG. 2).
The input terminal 9 is used mainly for entering selection instructions and necessary data. Examples of the input terminal 9 include a mouse, a pointing device, a keyboard and a touch panel. When the input terminal 9 is a mouse or pointing device, it can be used to give selection instructions and enter data with a mouse cursor or pointer on the selection screen displayed on the display 10 of the output unit 7; when it is a keyboard, it can be used to enter various data. When the input terminal 9 is a touch panel, selection instructions and data entry can be performed using operation input buttons on the screen of the display 10 or of a separately provided display.
The device driver 6 is a driver that receives various data, such as the face part attribute measurement results, face image data and 3D scanning data described later, from external devices connected through it, such as measuring instruments, digital cameras, scanner devices and 3D scanner devices. Based on instruction signals from the arithmetic control unit 2, the driver 6 executes processing such as reading face image data captured by a digital camera (not shown), 3D scanning data from a 3D scanner, and measurement results for predetermined attributes.
The output unit 7 in the present embodiment includes a display 10 and a printer 11. The output unit 7 displays various formats and data on the display 10, or prints them on the printer 11, based on instruction signals from the arithmetic control unit 2. These formats include a selection screen on which the subject selects face parts and face part attributes, and an output screen for outputting the subject information group selected by the information provision device of the present embodiment and the plurality of items of subject information corresponding to that group. Examples of the display 10 include a liquid crystal monitor and a projector.
[Arithmetic control unit 2]
Next, the functions of the arithmetic control unit 2 are described. As shown in FIG. 2, the arithmetic control unit 2 processes a program for executing the information provision method of the present invention together with various data, thereby executing the processing of the functional units: the attribute information acquisition unit 21, the evaluation unit 22, the matching information category identification unit 23 and the information selection unit 24. These processes are performed in response to various command signals from the arithmetic control unit 2. The units 21 to 24 are named and classified by function for convenience of explanation and do not limit the software configuration. The present invention also encompasses forms in which part of this processing is executed by hardware mounted on the information provision device of the present embodiment.
[Attribute information acquisition unit 21]
The attribute information acquisition unit 21 has the function of acquiring the selection results for face parts and face part attributes entered by the subject through the input device of the input unit 5, as well as at least one face part attribute measurement result (measured value) sent via the device driver 6 from an external device (keyboard, touch panel, scanner or the like, not shown).
Specifically, when the selected face part is at least one of the eyebrows, eyes, nose and mouth and the face part attribute is shape/size, for example, the attribute information acquisition unit 21 acquires predetermined dimensions of each selected face part as the face part attribute measurement results and sends them to the evaluation unit. In this case, the face part attribute measurement results include the subject's face size Lm, Mm, Nm or Mm·Nm, obtained in the same manner as the reference face size (for convenience of explanation, the subscript m is appended, as in Lm, Mm and Nm, to indicate correspondence with each reference face size).
When the selected face part is at least one of the skin, pupils and hair and the face part attribute is color, the attribute information acquisition unit 21 acquires the color values of each selected face part as the face part attribute measurement results and sends them to the evaluation unit 22. When the selected face part is the eyes and the face part attribute is color, the attribute information acquisition unit 21 acquires the color values measured for both the white part and the iris part of the selected eyes as the face part attribute measurement results and sends them to the evaluation unit 22.
When the selected face part is the skin and the face part attribute is texture, the attribute information acquisition unit 21 acquires the measured oil content of the selected face part as the face part attribute measurement result and sends it to the evaluation unit 22. The method of evaluating skin texture is not limited to the skin's oil content; other attributes, such as the skin's moisture content or the ratio of oil content to moisture content, can also be employed. Furthermore, when the selected face part is the hair and the face part attribute is texture, the attribute information acquisition unit 21 acquires the measured diameter of the hair of the selected face part as the face part attribute measurement result and sends it to the evaluation unit 22.
The attribute information acquisition unit 21 can also acquire face image data and 3D scanning data. For this purpose, it can include an image data analysis unit 25 for analyzing the acquired face image data and 3D scanning data. The face image data and the like may be of the subject or of a person other than the subject. In accordance with the face part and face part attribute selections made by the subject or by the operator of the information provision device 1 of the present invention, the image data analysis unit 25 detects the selected face part in the acquired face image data by a conventionally known method, measures predetermined attributes such as dimensions, and takes the results as the face part attribute measurement results. Ultimately, the information provision device 1 of the present invention provides information about the subject in the former case, and about a person other than the subject in the latter case.
In the case of two-dimensional face image data, for example, if the selected face part is at least one of the eyebrows, eyes, nose and mouth and the face part attribute is shape/size, the image data analysis unit 25 obtains, from the face image data sent from the attribute information acquisition unit 21, predetermined dimensions relating to the shape and size of the selected face part and takes them as the face part attribute measurement results.
When the selected face part is the skin and the face part attribute is color, the image data analysis unit 25 measures, from the face image data, the color values of the skin region excluding the hair and takes them as the face part attribute measurement result. When the selected face part is the hair or the pupils (iris or pupil) and the face part attribute is color, the image data analysis unit 25 measures the color values of the hair region or the pupils from the face image data and takes them as the face part attribute measurement result. Furthermore, when the selected face part is the eyes and the face part attribute is color, the image data analysis unit 25 measures, from the face image data, the color values of both the white part and the iris part of the selected eyes and takes them as the face part attribute measurement results. Color measurement can be performed by a known colorimetric method, and the measurement positions within the hair region and the eyes (including both the white and iris parts) can be set by a known method so as to be representative of each region. A single-point measurement may suffice if a representative value can be obtained, or multiple points may be measured and averaged.
Furthermore, when the selected face part is the hair and the face part attribute is texture, the image data analysis unit 25 measures the diameter of the hair of the selected face part in the face image data (in this case, 3D scanning data is preferable) and takes it as the face part attribute measurement result. In this case, the image data analysis unit 25 may also acquire the cross-sectional shape of the hair using a known image analysis technique and take it as the face part attribute measurement result. The cross-sectional shape of the hair is used because it is one factor in the formation of straight or curly hair, and whether the hair is straight strongly affects its texture.
Each of these face part attribute measurement results is sent to the evaluation unit 22.
[Evaluation unit 22]
The evaluation unit 22 is configured to compare the face part attribute measurement results sent from the attribute information acquisition unit 21 with the face part attribute reference values called from the storage unit 3 described later, and to send the comparison results to the matching information category identification unit 23 (described later). For example, when the selected face part is at least one of the eyebrows, eyes, nose and mouth and the face part attribute is shape/size, the evaluation unit 22 compares each dimension sent from the attribute information acquisition unit 21 (including values measured by the image data analysis unit 25) with the reference value for that dimension called from the storage unit 3. At this time, the evaluation unit 22 calls the reference face size L, M, N or M·N (see FIG. 4) from the storage unit 3, divides it by the subject's face size Lm, Mm, Nm or Mm·Nm, sent from the attribute information acquisition unit 21 and obtained by the same measurement as the reference face size, to obtain a correction value (L/Lm, M/Mm, N/Nm or M·N/(Mm·Nm)), and multiplies each of the subject's acquired face part attribute measurement values, other than the face size itself, by that correction value.
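The size correction described above can be sketched as follows. The variable names mirror the symbols in the text; the dict-based interface is an illustrative assumption.

```python
def apply_size_correction(reference_face_size, subject_face_size, measurements):
    """Scale measurements by correction = reference / subject face size.

    reference_face_size corresponds to L, M, N or M*N, and
    subject_face_size to Lm, Mm, Nm or Mm*Nm; the face size itself is
    not included in `measurements`.
    """
    correction = reference_face_size / subject_face_size
    return {name: value * correction for name, value in measurements.items()}
```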
The evaluation performed by the evaluation unit 22 for each face part and face part attribute is described concretely below. When the face part is the eyebrows, the face part attribute "shape/size" has two attributes, "straight-curved" and "thin-thick" (see FIG. 5). When the former attribute ("straight-curved") is selected, as shown in FIG. 10(a), the face part attribute measurement result (measured value) acquired by the evaluation unit 22 is the radius A (in cm) of the arc approximating the upper edge of the eyebrow, and the face part attribute reference value called from the storage unit 3 is set to 5 cm. If the subject's radius A, measured by a usual method and multiplied by the correction value (M·N/(Mm·Nm); the same applies below), is 10 cm, larger than this reference value, the eyebrow shape is elongated and is therefore "straight"; if it is 3 cm, smaller than the reference value, the eyebrow has height and little width and is therefore "curved". That is, in this case the subject's face part attribute measurement result falls on the "straight-curved" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map (orthogonal coordinate system) shown in FIG. 6), on either the "straight" side or the "curved" side of the face part attribute reference value of 5 cm serving as the boundary (origin).
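As a minimal sketch of this eyebrow decision, the corrected radius A can be compared against the 5 cm reference value. Treating a value exactly at the boundary as "straight" is an assumption, since the text only gives examples on either side of it.

```python
def eyebrow_line(radius_a_cm, correction=1.0, reference_cm=5.0):
    """Classify the eyebrow's upper-edge arc as straight or curved."""
    return "straight" if radius_a_cm * correction >= reference_cm else "curved"
```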
When the latter attribute ("thin-thick") is selected, the face part attribute measurement result (measured value) acquired by the evaluation unit 22 is, as shown in FIG. 10(b), the ratio B (in %) of the width of the eyebrow at its inner end to the product M·N of the distance between the left and right sideburns and the vertical distance (see FIG. 4), and the face part attribute reference value is set to 0.03%. If the subject's ratio B, measured by a usual method and multiplied by the correction value (M·N/(Mm·Nm)) as above, is 0.02%, smaller than the reference value of 0.03%, it belongs to the "thin" side; if it is 0.05%, larger than the reference value, it belongs to the "thick" side. That is, in this case the subject's face part attribute measurement result falls on the "thin-thick" line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), on either the "thin" side or the "thick" side of the face part attribute reference value of 0.03% serving as the boundary (origin).
When the face part is the eyes, the face part attribute "shape/size" has two attributes, "straight-curved" and "small-large" (see FIG. 5). When the former attribute ("straight-curved") is selected, the face part attribute measurement result (measured value) is, as shown in FIG. 11(a), the ratio C (in %) of the vertical height of the eye to the product M·N (see FIG. 4), and the reference value for this face part attribute is set to 0.05%. If the subject's ratio C, measured by a usual method and multiplied by the correction value (M·N/(Mm·Nm)), is 0.03%, smaller than the reference value of 0.05%, the eye shape is elongated and therefore "straight"; if it is 0.08%, larger, the eye has height and is therefore "curved". That is, in this case the subject's face part attribute measurement result falls on the "straight-curved" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), on either the "straight" side or the "curved" side of the face part attribute reference value of 0.05% serving as the boundary (origin).
When the latter attribute ("small-large") is selected, the face part attribute measurement result (measured value) is, as shown in FIG. 11(b), the ratio D (in %) of the horizontal width of the eye to the product M·N (see FIG. 4), and this face part attribute reference value is set to 0.2%. If the subject's ratio D, multiplied by the correction value (M·N/(Mm·Nm)), is 0.18%, smaller than the reference value (0.2%), it belongs to the "small" side; if it is 0.25%, larger, it belongs to the "large" side. That is, in this case the subject's face part attribute measurement result falls on the "small-large" line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), on either the "small" side or the "large" side of the face part attribute reference value of 0.2% serving as the boundary (origin).
When the face part is the nose, as with the eyes, the face part attribute "shape/size" has two attributes, "straight-curved" and "small-large" (see FIG. 5). When the former attribute ("straight-curved") is selected, the face part attribute measurement result (measured value) is, as shown in FIG. 12(a), the radius E (in cm) of the arc approximating the curve of the nostril wings, and the face part attribute reference value is set to 1 cm. If the subject's radius E, multiplied by the correction value (M·N/(Mm·Nm)), is 0.5 cm, smaller than this reference value of 1 cm, it is "curved"; if it is 2 cm, larger, it is "straight". That is, in this case the subject's face part attribute measurement result falls on the "curved-straight" line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), on either the "straight" side or the "curved" side of the face part attribute reference value of 1 cm serving as the boundary (origin).
 When the latter attribute (“small-large”) is selected, the face part attribute measurement result (measured value) is expressed as the ratio F (in %) of the lateral width of the nose to the product M·N (see FIG. 4), as shown in FIG. 12(b), and its reference value is set to 0.2%. After the subject's measured ratio F is multiplied by the correction value (M·N/(Mm·Nm)), a result of 0.18%, which is smaller than the face part attribute reference value of 0.2%, belongs to the “small” side, while a result of 0.25%, which is larger, belongs to the “large” side. That is, the face part attribute measurement result for the subject in this case belongs to either the “small” side or the “large” side of the “small-large” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2% as the boundary (origin).
 When the face part is the mouth, as with the eyes and nose, the face part attribute “shape/size” has two attributes, “straight(-line)-curved” and “small-large” (see FIG. 5). When the former attribute (“straight or curved”) is selected, the face part attribute measurement result (measured value) is expressed as the ratio G (in %) of the vertical height of the mouth to the product M·N (see FIG. 4), as shown in FIG. 12(a), and 0.1% is taken as the reference value for this face part attribute. After the subject's measured ratio G is multiplied by the correction value (M·N/(Mm·Nm)), a result of 0.08%, which is smaller than the reference value of 0.1%, means the mouth is horizontally elongated and is therefore judged “straight”, while a result of 0.15%, which is larger, means the mouth is tall and narrow and is therefore judged “curved”. That is, the face part attribute measurement result for the subject in this case belongs to either the “straight” side or the “curved” side of the “curved-straight” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.1% as the boundary (origin).
 When the latter attribute (“small-large”) is selected, the face part attribute measurement result (measured value) is the ratio H (in %) of the lateral width of the mouth to the product M·N (see FIG. 4), as shown in FIG. 12(b), and its reference value is set to 0.25%. After the subject's measured ratio H is multiplied by the correction value (M·N/(Mm·Nm)), a result of 0.23%, which is smaller than the face part attribute reference value of 0.25%, belongs to the “small” side, while a result of 0.3%, which is larger, belongs to the “large” side. That is, the face part attribute measurement result for the subject in this case belongs to either the “small” side or the “large” side of the “small-large” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.25% as the boundary (origin).
 When the selected face part is the skin and the face part attribute is color, the evaluation unit 22 compares the face part attribute measurement result with a reference color value called from the storage unit 3, which will be described later. The face part attribute in this case has two attributes, “yellow(-ish)-blue(-ish)” and “bright-dark” (see FIG. 5). For either attribute, the face part attribute measurement result (measured value) is a color value obtained by colorimetry, using a conventional method, of a spot representative of the skin color, and the reference value is a CIE Lab color value. In the following, an example using CIE Lab color values is described; however, the color values are not limited to these, and other color values such as Hunter Lab, RGB, CMYK, XYZ or Lch may also be used.
 When the selected face part is the skin and the former attribute “yellow(-ish)-blue(-ish)” is selected, the face part attribute reference values are L=+62.5, a=-1.9 and b=+11.8. If the a and b values measured for the subject are +10.2 and -7.6, respectively, the color is a reddish blue relative to the face part attribute reference values and is judged “blue-ish”. If the measured L, a and b values are +83.5, +11.3 and +10.9, respectively, the color is a reddish yellow relative to the face part attribute reference values and is judged “yellow-ish” (the judgement of the difference between two color values (the color difference) follows a conventional method; the same applies below). That is, the face part attribute measurement result for the subject in this case belongs to either the “yellow” side or the “blue” side of the “yellow-blue” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference values as the boundary (origin).
 When the selected face part is the skin and the latter attribute “bright-dark” is selected, the face part attribute reference values are, as above, L=+62.5, a=-1.9 and b=+11.8. If the L value measured for the subject is +87.3, the skin is brighter than the face part attribute reference value L=+62.5; if the measured L value is +57.7, it is darker than the reference value L=+62.5. That is, the face part attribute measurement result for the subject in this case belongs to either the “bright” side or the “dark” side of the “bright-dark” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value as the boundary (origin).
 Also, when the selected face part is the hair or the pupil and the face part attribute is color, the evaluation unit 22 likewise compares the face part attribute measurement result with reference color values (face part attribute reference values) called from the storage unit 3, described later. The face part attribute in this case has two attributes, “yellow(-ish)-blue(-ish)” and “bright-dark” (see FIG. 5). When the former, “yellow(-ish)-blue(-ish)”, is selected, the face part attribute reference values are L=+31.9, a=+5.5 and b=+7.2. If the a and b values measured for the subject are +0.0 and +0.4, respectively, the color appears slightly bluer than the face part attribute reference values and is judged “blue(-ish)”. If the measured a and b values are +7.5 and +6.7, respectively, the color is stronger in red and yellow than the face part attribute reference values and is judged “yellow(-ish)”. That is, the face part attribute measurement result for the subject in this case belongs to either the “yellow” side or the “blue” side of the “yellow-blue” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), with the face part attribute reference values as the boundary (origin).
 When the selected face part is the hair or the dark (iris) part of the eye and the latter attribute “bright-dark” is selected, the face part attribute reference values are L=+62.5, a=-1.9 and b=+11.8, as above. If the L value measured for the subject is +87.3, it is brighter than the face part attribute reference value L=+62.5; if the measured L value is +57.7, it is darker than the reference value L=+62.5. That is, the face part attribute measurement result for the subject in this case belongs to either the “bright” side or the “dark” side of the “bright-dark” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value as the boundary (origin).
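The bright-dark judgement in the skin, hair and iris cases is a direct comparison of the measured L value against the reference lightness; the yellow-blue judgement relies on a conventional color-difference method that the text does not spell out, so only the lightness side is sketched here. The function name is illustrative, not from the patent.

```python
def classify_lightness(measured_l, ref_l=62.5):
    """Judge "bright" or "dark" by comparing the measured CIELAB L value
    with the face part attribute reference lightness (L = +62.5 here)."""
    return "bright" if measured_l > ref_l else "dark"

# The two worked examples from the text:
print(classify_lightness(87.3))  # bright (above L = +62.5)
print(classify_lightness(57.7))  # dark  (below L = +62.5)
```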
 Also, when the selected face part is the eyes and the face part attribute is the color attribute “gentle-vivid”, the evaluation unit 22 obtains the difference between the color values of the white part and the dark part of the eye and compares it with the reference value of that color-value difference called from the storage unit. In this case, the face part attribute reference values (for saturation) are L=+94.5, a=+1.4 and b=+7.6 for the white part and L=+31.9, a=+5.5 and b=+7.2 for the dark part, so that c=1.33 is obtained from the following equation.
[Math 1] (The equation for the color-difference value c is reproduced only as an image, JPOXMLDOC01-appb-M000001, in the original publication.)
If the color values measured for the subject's eyes are L=+94.5, a=+1.4 and b=+7.6 for the white part and L=+42.2, a=+6.4 and b=+1.6 for the dark part, then c=1.1 from the above equation; since this is smaller than the reference value of 1.33, the eyes are judged “gentle”. If the measured color values are L=+95.1, a=+1.7 and b=-2.8 for the white part and L=+14.1, a=-0.6 and b=+0.7 for the dark part, then c=2.36 from the above equation; since this is larger than the reference value of 1.33, the eyes are judged “vivid”. When the evaluation is “vivid”, the face part attribute measurement result in this case belongs to the first and third quadrants, which contain the upward-sloping 45-degree virtual axis through the origin (corresponding to the contrast tendency) in the orthogonal coordinate system shown in FIG. 6. When the evaluation is “gentle”, it belongs to the second and fourth quadrants, which contain the downward-sloping 45-degree virtual axis through the origin (the gradation tendency) in that orthogonal coordinate system.
 Furthermore, when the selected face parts are the skin and the hair and the face part attribute is the color attribute “vivid-gentle”, the evaluation unit 22 obtains the difference between the color values of the skin part and the hair part and compares it with the reference value of that color-value difference called from the storage unit. In this case, the face part attribute reference values (for saturation) are L=+62.5, a=+1.9 and b=+11.8 for the skin part and L=+31.9, a=+5.5 and b=+7.3 for the hair part, so that c=2.81 from the above equation. If the color values measured for the subject are L=+62.5, a=+1.9 and b=+11.8 for the skin part and L=+31.9, a=+5.5 and b=+7.3 for the hair part, then c=2.66 from the above equation; since this is smaller than the reference value of 2.81, the subject is judged “gentle”. If the values are L=+87.7, a=+7.1 and b=+6.6 for the skin part and L=+14.2, a=+2.4 and b=+1.5 for the hair part, then c=3.57 from the above equation; since this is larger than the reference value of 2.81, the subject is judged “vivid”. In this case, when the evaluation is “vivid”, the result belongs to the first and third quadrants, which contain the upward-sloping 45-degree virtual axis through the origin (corresponding to the contrast tendency) in the orthogonal coordinate system shown in FIG. 6. When the evaluation is “gentle”, it belongs to the second and fourth quadrants, which contain the downward-sloping 45-degree virtual axis through the origin (the gradation tendency) in the orthogonal coordinate system of FIG. 7.
 Furthermore, when the selected face part is the skin, the face part attribute “texture” has two attributes, “thin-thick” and “matte-glossy” (see FIG. 5). When the former attribute (“thin-thick”) is selected and the thickness of the epidermis of the skin part is used as the face part attribute measurement result, the evaluation unit 22 compares the measured epidermis thickness with a reference epidermis thickness called from the storage unit 3. The evaluation unit 22 acquires the epidermis thickness in mm, and the epidermis thickness reference value (face part attribute reference value) called from the storage unit 3 is set to 0.2 mm. If the epidermis thickness measured for the subject is 0.1 mm, it is smaller than the reference value of 0.2 mm and is evaluated as “thin”; if the epidermis thickness is 0.3 mm, it is larger than the reference value of 0.2 mm and is evaluated as “thick”. The face part attribute value in this case belongs to either the “thin” side or the “thick” side of the “thin-thick” line (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.2 mm as the boundary (origin).
 When the latter attribute (“matte-glossy”) is selected and the skin oil content has been acquired as the face part attribute measurement result, the evaluation unit 22 compares the subject's skin oil content with a skin oil content reference value called from the storage unit 3. The skin oil content reference value (face part attribute reference value) in this case is 12%. If the subject's measured skin oil content (face part attribute measurement result) is 5%, it is lower than the reference value of 12% and the skin is judged “matte”; conversely, if the measured skin oil content is 20%, it is higher than the reference value of 12% and the skin is judged “glossy”. When the evaluation is “matte”, the face part attribute measurement result in this case belongs to the second and fourth quadrants, which contain the downward-sloping 45-degree virtual axis through the origin (the gradation tendency) in the orthogonal coordinate system of FIG. 7. When the evaluation is “glossy”, it belongs to the first and third quadrants, which contain the upward-sloping 45-degree virtual axis through the origin (the contrast tendency) in the orthogonal coordinate system shown in FIG. 6.
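Attributes tied to the diagonal virtual axes (“gentle-vivid”, “matte-glossy”) select pairs of quadrants rather than half-planes. The following minimal sketch illustrates the matte-glossy case, assuming the quadrant numbering of FIG. 6; the names and the dictionary layout are illustrative assumptions.

```python
# Diagonal axes of the styling map (FIG. 6): the upward-sloping
# 45-degree axis (contrast tendency) runs through quadrants 1 and 3,
# the downward-sloping axis (gradation tendency) through 2 and 4.
DIAGONAL_QUADRANTS = {
    "contrast": {1, 3},    # glossy / vivid side
    "gradation": {2, 4},   # matte / gentle side
}

def quadrants_for_skin_oil(oil_percent, reference=12.0):
    """Map a measured skin oil content (%) to styling-map quadrants."""
    tendency = "contrast" if oil_percent > reference else "gradation"
    return DIAGONAL_QUADRANTS[tendency]

print(quadrants_for_skin_oil(5.0))    # {2, 4}: matte
print(quadrants_for_skin_oil(20.0))   # {1, 3}: glossy
```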
 Furthermore, when the selected face part is the hair, the face part attribute “texture” has two attributes, “flat-uneven” and “thin-thick” (see FIG. 5). When the former attribute (“flat-uneven”) is selected, the evaluation unit 22 compares the cross-sectional shape of the hair, obtained as the face part attribute measurement result, with a reference cross-sectional shape called from the storage unit 3. Here, the reference cross-sectional shape (face part attribute reference value) called from the storage unit 3 is set to an ellipse. If the cross-sectional shape of the hair measured for the subject is approximately a perfect circle, it is evaluated as approximately “flat”; if the cross-sectional shape is approximately triangular, it is evaluated as approximately “uneven”. The face part attribute value in this case belongs, on the “flat-uneven” line (corresponding to the horizontal axis (cool-warm tendency) of the styling map shown in FIG. 6), to the “flat” side when closer to a perfect circle and to the “uneven” side when closer to a triangle, with the reference ellipse as the boundary (origin).
 When the latter attribute (“thin-thick”) is selected, the evaluation unit 22 compares the hair diameter, obtained as the face part attribute measurement result, with a reference hair diameter called from the storage unit 3. The diameter reference value (face part attribute reference value) in this case is 0.08 mm. If the subject's measured diameter (face part attribute measurement result) is 0.06 mm, it is smaller than the reference value of 0.08 mm and the hair is evaluated as “thin”; conversely, if the measured diameter is 0.1 mm, it is larger than the reference value of 0.08 mm and the hair is evaluated as “thick”. The face part attribute measurement result in this case belongs to either the “thin” side or the “thick” side of the “thin-thick” line in the orthogonal coordinate system of FIG. 7 (corresponding to the vertical axis (light-deep tendency) of the styling map shown in FIG. 6), with the face part attribute reference value of 0.08 mm as the boundary (origin).
 When the subject selects a plurality of face parts, a plurality of face part attributes, or both, the evaluation unit 22 sends each selected face part and face part attribute, together with the comparison result for the corresponding face part attribute measurement result, to the matching information category specifying unit 23.
[Matching information category specifying unit 23]
 The matching information category specifying unit 23 is configured to call at least one predetermined tendency for each of the selected face parts and face part attributes sent from the evaluation unit 22, to specify, based on the comparison results for the face part attribute measurement results, at least one matching information category associated with each such tendency, and to send the specified result to the information selection unit 24. When there is a single face part attribute measurement result from the evaluation unit 22, the matching information category specifying unit 23 calls the tendency corresponding to that face part attribute from the storage unit 3, specifies, based on the face part attribute measurement result, one of the two matching information categories into which that tendency is divided, and instructs the information selection unit 24 to select the subject information group belonging to the specified matching information category.
 For example, when the face part is the eyebrows, the attribute is “straight(-line)-curved” of the face part attribute “shape/size”, and the evaluation result in the evaluation unit 22 is “straight”, the matching information category specifying unit 23 calls the cool-warm tendency from the storage unit 3 and specifies, as the matching information categories, the second quadrant (aqua taste) and the third quadrant (crystal taste) demarcated by the cool-warm tendency on the horizontal axis of the styling map of FIG. 6. When the face part is the skin, the face part attribute is its color (bright-dark), and the evaluation result in the evaluation unit 22 is “bright”, the matching information category specifying unit 23 calls the light-deep tendency from the storage unit 3 and specifies, as the matching information categories, the first quadrant (bright taste) and the second quadrant (aqua taste) demarcated by the light-deep tendency on the vertical axis of the styling map of FIG. 6. Further, when the face parts are the skin and the hair, the face part attribute is their color (gentle-vivid), and the evaluation result in the evaluation unit 22 is “gentle”, the matching information category specifying unit 23 calls the gradation-contrast tendency from the storage unit 3 and specifies, as the matching information categories, the second quadrant (aqua taste) and the fourth quadrant (earth taste), which contain the downward-sloping 45-degree virtual axis through the origin on the styling map of FIG. 6.
 In addition, when the subject selects face parts and face part attributes as in (a) to (c) below, so that the matching information category specifying unit 23 specifies matching information categories for at least two of the three tendencies, the categories can be narrowed down by superimposing the respective matching information categories on the styling map shown in FIG. 6 and specifying the overlapping matching information category. As a result, this narrowing makes it possible to provide a subject information group that is more accurate (better suited to the subject) than one drawn from a matching information category spanning any two quadrants for a single tendency.
(a) Selecting both paired attributes of one face part attribute for one selected face part
 For example, in FIG. 5, this is the case where the selected face part is the skin and both paired attributes of the face part attribute “color” are selected. In this case, if the comparison results in the evaluation unit 22 are “yellow” and “bright”, respectively, the matching information category specifying unit 23 superimposes the former matching information categories, the first quadrant (bright taste) and the fourth quadrant (earth taste), on the latter categories, the first quadrant (bright taste) and the second quadrant (aqua taste), narrows them down to the overlapping first quadrant (bright taste), and specifies it as the matching information category.
(b) Selecting at least one attribute from each of two or more face part attributes for one selected face part
 For example, in FIG. 5, this is the case where the selected face part is the skin and, in addition to the two paired attributes of the face part attribute “color”, the attribute “matte-glossy” of the face part attribute “texture” is also selected. In this case, if the comparison results in the evaluation unit 22 are “yellow”, “bright” and “glossy”, respectively, the matching information category specifying unit 23 superimposes the former categories, the first quadrant (bright taste), the second quadrant (aqua taste) and the fourth quadrant (earth taste), on the latter categories, the first quadrant (bright taste) and the third quadrant (crystal taste), and specifies the overlapping first quadrant (bright taste) as the matching information category with relatively high suitability, and the other categories as categories whose suitability, although relatively lower than that of the first quadrant, cannot be ruled out.
(c) Selecting at least one attribute from each of the face part attributes defined for a plurality of selected face parts
 For example, this is the case where one selected face part is the eyebrows with the attribute “straight or curved” of the face part attribute “shape/size”, and another selected face part is the skin with its color (bright-dark) as the face part attribute. In this case, if the respective evaluation results in the evaluation unit 22 are “straight” and “bright”, the matching information category specifying unit 23 superimposes the former matching information categories, the second quadrant (aqua taste) and the third quadrant (crystal taste), on the latter categories, the first quadrant (bright taste) and the second quadrant (aqua taste), and specifies the overlapping second quadrant (aqua taste).
 The matching information category specifying unit 23 then sends the matching information category specified in this way to the information selection unit 24.
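The narrowing in cases (a) to (c) amounts to a set intersection of the per-tendency quadrant categories. The following sketch assumes the quadrant numbering of FIG. 6 (1: bright taste, 2: aqua taste, 3: crystal taste, 4: earth taste); the function name is illustrative, not from the patent.

```python
QUADRANT_NAMES = {1: "bright taste", 2: "aqua taste",
                  3: "crystal taste", 4: "earth taste"}

def narrow_categories(*quadrant_sets):
    """Superimpose per-tendency category sets and keep the overlap."""
    overlap = set.intersection(*quadrant_sets)
    return sorted(QUADRANT_NAMES[q] for q in overlap)

# Case (c): eyebrows judged "straight" -> quadrants {2, 3} (aqua, crystal);
#           skin judged "bright"       -> quadrants {1, 2} (bright, aqua).
print(narrow_categories({2, 3}, {1, 2}))  # ['aqua taste']
```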
 By selecting a plurality of face parts and/or face part attributes so that different tendencies are included in this way, the matching information categories can be narrowed down further, and even more accurate subject information can be provided to the subject. If the face part attribute measurement results for a plurality of face parts fall on both poles of one or two tendencies, so that they cannot be narrowed down to a single matching information category, the matching information categories can be specified with a ranking. In this case, the matching information category specifying unit 23 sends the plurality of specified matching information categories to the information selection unit 24 in that order, which has the advantage that the subject information (group) most likely to be suitable, the next most suitable, and so on can be provided to the subject in ranked order.
 Next, if each of the three tendencies has two or more face part attribute reference values, a matching information category is specified as follows. For simplicity of explanation, consider the two tendencies cool-warm (the horizontal axis of the styling map) and light-deep (the vertical axis of the styling map). For each of these tendencies, the matching information categories are divided into: at or above the largest face part attribute reference value; at or above the second reference value but below the largest; at or above the third reference value but below the second; and so on. Projecting these divisions for both tendencies onto a common styling map partitions it into a grid. Since each of the plurality of face part attribute measurement results falls into (is plotted in) one of the grid cells, the matching information categories are ranked cell by cell according to the number of plots they contain, and the specified result is sent to the information selection unit 24 in that order. When the gradation-contrast tendency is further included, the plots of the measurement results on the oblique virtual axis of the styling map also fall into one of the grid cells, so the matching information categories are likewise ranked cell by cell according to the number of plots, and the specified result is sent to the information selection unit 24 in that order. When each of the three tendencies has two or more face part attribute reference values, subject information (groups) can thus be provided on the styling map in accordance with the strength of the attributes, that is, the degree of each attribute according to its distance from the origin.
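The grid partitioning just described can be sketched as follows. This is a hypothetical illustration: the axis names and reference values are assumptions for the example, not values defined by the apparatus. Each tendency's reference values split its axis into intervals, the intervals of the two axes form grid cells, and cells are ranked by how many measurement-result plots land in them:

```python
import bisect
from collections import Counter

# Assumed reference values on each tendency axis (cool-warm horizontal,
# light-deep vertical); an axis with k reference values yields k+1 intervals.
COOL_WARM_REFS = [-1.0, 0.0, 1.0]
LIGHT_DEEP_REFS = [-1.0, 0.0, 1.0]

def grid_cell(x, y):
    """Map one measurement plot (x, y) on the styling map to the grid cell
    formed by the intervals between the reference values."""
    return (bisect.bisect_left(COOL_WARM_REFS, x),
            bisect.bisect_left(LIGHT_DEEP_REFS, y))

def rank_cells(plots):
    """Count plots per grid cell and return cells in descending plot count,
    i.e. the order in which the specifying unit 23 would send the
    corresponding matching information categories onward."""
    counts = Counter(grid_cell(x, y) for x, y in plots)
    return counts.most_common()

plots = [(0.5, 0.5), (0.7, 0.2), (-1.5, 0.5)]
print(rank_cells(plots))  # the cell holding two plots comes first
```

A plot on the oblique gradation-contrast axis would be handled the same way, since its (x, y) coordinates still fall into one of the cells.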
[Information Selection Unit 24 and Output Unit 25]
 The information selection unit 24 selects, from the subject information group in the storage unit 3, at least one piece of subject information belonging to the matching information category specified on the basis of the specification result from the matching information category specifying unit 23, and the output unit 25 is configured to output each selected piece of subject information. In this embodiment, the information is displayed on a display and printed by a printer; however, the output method is not limited to these, and other methods may be used.
Information Providing Method of the Present Invention
 Next, an example of an information providing method using the information providing apparatus 1 according to the present invention will be described with reference to FIGS. 14 to 17. FIG. 14 is a flowchart showing the overall procedure of the information providing method using the information providing apparatus shown in FIG. 1; FIG. 15 is a flowchart showing the procedure for measuring a face part attribute value when the face part attribute is shape/size; and FIG. 16 is a flowchart showing the procedure for measuring a face part attribute value when the face part attribute is color.
 As shown in FIG. 14, the subject first performs a gender input operation, whereby the information providing apparatus 1 of the present invention acquires the gender from the input terminal 9 (step S1). Next, the arithmetic control unit 2 causes the display 10 to display a list of the face parts and the like, and the subject uses the input terminal 9 to select at least one face part (eyes, nose, mouth, skin, or hair) from the list (step S2). The selected face part is thereby determined.
 Subsequently, for each selected face part, the information providing apparatus 1 of the present invention causes the display 10, through the function of the arithmetic control unit 2, to display the face part attributes (shape/size, color, and quality (texture)) based on FIG. 5 so that each of the attributes shown in FIG. 5 is selectable, and the subject selects a face part attribute from among them (step S3). For the face part attribute thus selected, the information providing apparatus 1 of the present invention acquires a face part attribute measurement result by either of the following methods (1) and (2) (step S4). Thereafter, the attribute information acquisition unit 21 sends the acquired selection results for the selected face part and the face part attribute, together with the face part attribute measurement result, to the evaluation unit 22.
(1) Method of obtaining the face part attribute measurement result from the subject's face image data or 3D scanning data
 After the information providing apparatus 1 of the present invention acquires face image data obtained by photographing the subject's face, it obtains the face part attribute measurement result from that data by a method defined in advance for the selected face part and the selected face part attribute. As shown in FIG. 15, the attribute information acquisition unit 21 first acquires the face image data (step S21). Specifically, the arithmetic control unit 2 acquires face image data obtained by photographing a front image of the subject's face and, if necessary, a face image from a predetermined angle, using a digital camera (not shown), a 3D scanner, or the like. The face image data is stored in the storage unit 3 via the device driver 6. The face image data need only be obtainable and processable as digital data; it is not limited to images captured by a digital camera or 3D scanner and may be, for example, an analog photograph of the subject's face or upper body read in with a scanner.
 Next, the face contour and the hair portion are recognized from the face image data using a known recognition technique, and the face region is extracted (step S22). Here, the face region is the region of the face excluding the hair portion. The reference face size L of the reference face shape and so on (see FIG. 4) are then acquired from the storage unit 3, a correction value is obtained by comparing the face size Lm of the face region with the reference face size L acquired from the storage unit 3, and size matching is performed by enlarging or reducing the face image data by this correction value (step S23).
 Next, based on the size-matched face image data, at least one selected face part is extracted according to the subject's selection (see step S2 in FIG. 14) (step S24), each dimension defined for each selected face part is determined from the face image data, and this is acquired as the face part attribute measurement result (step S25). At this time, a face part attribute measurement result for another face part attribute can also be input directly from an input terminal 5 such as a touch panel. For example, when the face part is an eyebrow and the face part attribute is "straight-curved" under "shape/size", the radius of the arc of the eyebrow is measured as described above. Although not shown, the flow for 3D scanning data is generally the same as that for the two-dimensional face image data described above. Steps S22 and S23 may also be performed in reverse order; that is, it is possible to acquire the face part and face part attribute according to the subject's selection, obtain the face part attribute measurement result, and then multiply the measurement result by the correction value to perform size matching.
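The size-matching step can be sketched as follows. This is a minimal illustration under assumed numbers: the reference face size L, the measured size Lm, and the eyebrow dimensions here are hypothetical values, not data from the apparatus:

```python
# Correction value: the reference face size L divided by the measured face
# size Lm of the extracted face region (step S23).
def correction_value(reference_face_size, measured_face_size):
    return reference_face_size / measured_face_size

def normalize_dimensions(dimensions, correction):
    """Scale each measured face part dimension so that it becomes comparable
    with the reference values defined for the reference face shape."""
    return {name: value * correction for name, value in dimensions.items()}

L = 180.0   # assumed reference face size
Lm = 200.0  # assumed measured face size of the face region
c = correction_value(L, Lm)              # 0.9
eyebrow = {"arc_radius": 40.0, "length": 55.0}
print(normalize_dimensions(eyebrow, c))  # each dimension scaled by 0.9
```

As the text notes, the same correction can equivalently be applied after measurement instead of rescaling the image first.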
(2) Method by input of the face part attribute measurement result by the subject
 The subject inputs, to the information providing apparatus 1 of the present invention, the measurement result of the attribute obtained for the selected face part, and this is used as the face part attribute measurement result. For example, when the face part is an eyebrow and the face part attribute is "straight-curved" under "shape/size", the measured value of the radius of the arc of the eyebrow is input directly from the input terminal 5 and used as the face part attribute measurement result. The face part attribute measurement result thus obtained is multiplied by the correction value described above.
 Next, with reference to FIG. 16, the method of obtaining a color value, which is a face part attribute measurement result, from the face image data will be described in detail. First, the attribute information acquisition unit 21 acquires the face image data and sends it to the image data analysis unit 25 (step S31). The image data analysis unit 25 recognizes the face contour and hair from the received face image data and extracts the face region (step S32). When the selected face part is the color of the hair, the hair region may instead be extracted here by recognizing the face contour and hair from the face image data. Next, color value measurement points are selected from the face region by an appropriate method (step S33). One measurement point may be selected, or a plurality of points may be selected, as long as a representative result is obtained. The color value at each color value measurement point is then measured (step S34). The measurement is performed by a known method. The measured colorimetric result (when a plurality of measurement points are used, the result of averaging the colorimetric results at the individual points) is then stored in the storage unit 3 as the face part attribute value (step S35). The color value can also be obtained from 3D scanning data by the same method as described above.
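The averaging in step S35 can be sketched as follows. This is a minimal illustration assuming the color values are RGB triples sampled at the chosen measurement points; the sample values are hypothetical:

```python
def average_color(samples):
    """Average the colorimetric results measured at each selected point
    (step S34) into a single face part attribute value (step S35)."""
    if not samples:
        raise ValueError("at least one color value measurement point is needed")
    n = len(samples)
    # Average each channel across the measurement points.
    return tuple(sum(channel) / n for channel in zip(*samples))

# Hypothetical skin-color samples at three measurement points in the face region.
samples = [(224, 172, 150), (230, 180, 158), (218, 166, 142)]
print(average_color(samples))
```

A single measurement point is simply the degenerate case where the average equals that point's value; values in another color space (e.g. L*a*b*) could be averaged the same way.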
 Returning to FIG. 14, whether or not to acquire yet another face part is confirmed by an input operation of the subject (step S5). When another face part is to be acquired, the process returns to step S2, so that a plurality of face parts can ultimately be selected. When no further face part is to be acquired, the process proceeds to step S6.
 Next, whether or not to conduct a questionnaire is determined based on a selection operation input by the subject (step S6). When the subject selects the questionnaire, the attribute information acquisition unit 21 then acquires a plurality of answer data for the questionnaire through input to the input terminal 9 by the subject (step S7). When there are two or more items of questionnaire information, the multiple items may be displayed at once so that the subject selects the individual answer sentences together, or the answers may be selected over several rounds in a question-and-answer format (each question presented together with its set of answer sentences). The attribute information acquisition unit 21 stores the acquired answer data in the storage unit 3.
 The evaluation unit 22 performs the following evaluation process (step S8). That is, the selected face part, the face part attribute selection result, the face part attribute measurement result, and the face part attribute reference value are each read from the storage unit 3, and the face part attribute measurement result is compared with the face part attribute reference value. In this way, evaluation results for one or more face part attributes are obtained. When there are a plurality of answer data for the questionnaire information from the subject, the answer data are read from the storage unit 3, allocated to the four quadrants of bright taste, aqua taste, crystal taste, and earth taste, and the number of data items for each quadrant (category) is tallied. The evaluation unit 22 sends these comparison results and tally results to the matching information category specifying unit 23.
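The allocation and tally of answer data into the four quadrants can be sketched as follows. This is a minimal illustration; the mapping from answer choices to quadrants is an assumption for the example, not one defined by the apparatus:

```python
from collections import Counter

QUADRANTS = ("bright", "aqua", "crystal", "earth")

def tally_answers(answers, answer_to_quadrant):
    """Allocate each questionnaire answer to its quadrant (category) and
    count the data items per quadrant, as the evaluation unit 22 does in
    step S8 before sending the tally to the specifying unit 23."""
    counts = Counter({quadrant: 0 for quadrant in QUADRANTS})
    for answer in answers:
        counts[answer_to_quadrant[answer]] += 1
    return counts

# Hypothetical mapping of answer choices to the four taste quadrants.
mapping = {"A": "bright", "B": "aqua", "C": "crystal", "D": "earth"}
print(tally_answers(["A", "B", "A", "D"], mapping))
```

The resulting per-quadrant counts are what the specifying unit later adds to the attribute-based category matches.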
 When only one face part attribute has been selected, the matching information category specifying unit 23 specifies, according to the tendency defined for that face part attribute, one of the two categories associated with the attributes on either side of at least one face part attribute reference value (in the styling map shown in FIG. 6, a matching information category consisting of two of the four quadrants). When the subject has selected two face part attributes defined for one face part, or at least two face part attributes belonging to different tendencies among those defined for a plurality of face parts, the matching information category specifying unit 23 superimposes the matching information categories on the styling map shown in FIG. 6, based on the tendency of each face part attribute and the comparison results from the evaluation unit 22, and specifies the overlapping ones as highly suitable matching information categories (step S9). When at least two face part attributes have been selected in this way, the matching information category with the largest number of overlaps contains the information most useful to the subject. When answer data for the questionnaire information is also available, the matching information category specifying unit 23 adds the questionnaire tally to the above specification result for each quadrant of the styling map and specifies the matching information categories, in descending order of overlap count, as those most suitable for the subject (step S9). Next, the information selection unit 24 selects subject information from the matching information categories (step S10), and the selected subject information is output to the output unit 7 (step S11). The read-out subject information group is sent to the output unit 7, where it can be displayed on the display 10 or printed by the printer 12.
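Combining the superimposed per-attribute category matches with the questionnaire tally, as in step S9, can be sketched as follows. This is a hypothetical illustration of the ranking; the category names and counts are assumptions for the example:

```python
from collections import Counter

def rank_with_questionnaire(category_matches, questionnaire_counts):
    """Superimpose the per-attribute category matches on the styling map,
    add the questionnaire tally per quadrant, and rank the categories by
    total overlap count in descending order, as in step S9."""
    totals = Counter()
    for categories in category_matches:   # one set per evaluated attribute
        totals.update(categories)
    totals.update(questionnaire_counts)   # add per-quadrant answer counts
    return [category for category, _ in totals.most_common()]

matches = [{"bright", "aqua"}, {"aqua", "crystal"}]  # attribute-based matches
answers = {"crystal": 2, "earth": 1}                 # questionnaire tally
print(rank_with_questionnaire(matches, answers))
```

The information selection unit would then draw subject information from the top-ranked category first (step S10).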
 As described above, the information providing apparatus and information providing method of the present invention can provide a subject (including a provider of face part attribute measurement results or face image data) with useful styling (product) information suitable for that subject (patterns and colors of ties, shirts, and suits, makeup cosmetics, and so on), information about the subject's personality or customer-service type, and other information and advice, based on at least one face part attribute measurement result for at least one selected face part chosen from among the plurality of face parts constituting the subject's own face. Examples of other information and advice include information on tableware; information on interiors such as furniture and interior finishes (floor and wall colors, etc.); information on graphic design such as logo colors; information on advertising, promotion, and product planning (package colors, etc.); and teaching materials for styling certification examinations.
 Further, according to the information providing apparatus and information providing method of the present invention, when an information group showing a statistically high association with a face part and a face part attribute is obtained, that information group can be classified into the subject information groups by matching information category and used for subsequent information provision to subjects. Likewise, when a face part attribute showing a statistically high association with the subject information groups stored in the storage unit of the information providing apparatus is identified, that face part attribute can be additionally stored, in association with a tendency or a matching information category, in the face part information in the storage unit of the information providing apparatus of the present invention, and used for subsequent information provision to subjects.
 The present invention is not limited to the above embodiment, and in the implementation stage the constituent elements can be modified and embodied without departing from the gist of the invention. Various inventions can also be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiment. For example, some constituent elements may be removed from all the constituent elements shown in the embodiment, or constituent elements of different embodiments may be combined as appropriate.
DESCRIPTION OF SYMBOLS
1 Information providing apparatus
2 Arithmetic control unit
3 Storage unit
4 Internal bus
5 Input unit
6 Device driver
7 Output unit
9 Input terminal
10 Display
11 Printer
21 Attribute information acquisition unit
22 Tendency selection unit
23 Matching information category specifying unit
24 Information selection unit
25 Image data analysis unit

Claims (13)

  1.  An information providing apparatus comprising at least an attribute information acquisition unit, a storage unit, an evaluation unit, a matching information category specifying unit, an information selection unit, and an output unit, wherein
     the attribute information acquisition unit acquires at least one selected face part selected from among face parts including eyebrows, eyes, nose, mouth, skin, and hair constituting a subject's own face, a face part attribute defined for the selected face part and including shape/size, color, and texture, and a measurement result of the face part attribute;
     the storage unit stores a plurality of pieces of face part information, a plurality of tendencies, a plurality of matching information categories, and a plurality of subject information groups belonging to the respective matching information categories;
     each piece of the face part information includes at least one face part attribute defined in advance for each of the face parts and at least one face part attribute reference value serving as a reference for each face part attribute;
     each of the plurality of tendencies is related to a face part attribute, is represented for each face part attribute as a linear axis including the face part attribute reference value, and is defined so as to classify the plurality of subject information groups into at least two matching information categories with the at least one face part attribute reference value on the linear axis as a boundary;
     the evaluation unit acquires the selected face part and the face part attribute measurement result through the attribute information acquisition unit, compares the face part attribute reference value read from the storage unit on the basis of the former with the latter, and sends the comparison result to the matching information category specifying unit;
     the matching information category specifying unit reads from the storage unit at least one tendency related to each face part attribute and specifies at least one matching information category associated with each tendency on the basis of the comparison result;
     the information selection unit selects at least one piece of subject information belonging to the specified matching information category from the subject information group in the storage unit; and
     the output unit is configured to output each selected piece of subject information.
  2.  The information providing apparatus according to claim 1, wherein the storage unit further stores information on a reference face shape including at least one reference face size;
     when the selected face part is at least one of the eyebrows, eyes, nose, and mouth and the face part attribute is shape/size, the attribute information acquisition unit acquires, as face part attribute measurement results, the subject's face size obtained by the same measurement as the reference face size together with predetermined dimensions of each selected face part, and sends them to the evaluation unit; and
     the evaluation unit multiplies each of the acquired face part attribute measurement values of the subject, excluding the face size, by a correction value obtained by dividing the reference face size by the face size, then compares each dimension with the reference value of the corresponding dimension read from the storage unit, and sends the comparison result to the matching information category specifying unit.
  3.  The information providing apparatus according to claim 1 or 2, wherein the attribute information acquisition unit further includes an image data analysis unit and acquires face image data instead of or together with the face part attribute measurement result, and the image data analysis unit performs a predetermined analysis.
  4.  The information providing apparatus according to claim 3, wherein, when the selected face part is the hair and the face part attribute is texture, the image data analysis unit extracts a cross-sectional shape of the hair at the selected face part and sends it to the evaluation unit as a face part attribute measurement result, and the evaluation unit is configured to determine whether the face part attribute measurement result approximates a perfect circle or a substantially triangular shape.
  5.  The information providing apparatus according to any one of claims 1 to 4, wherein the storage unit further holds questionnaire information associated with each of the matching information categories partitioned by two predetermined tendencies intersecting each other at their face part attribute reference values, and, when the subject selects the questionnaire, the questionnaire information is read from the storage unit and displayed on the output unit for operation by the subject;
     the evaluation unit allocates a plurality of answer data for the questionnaire information from the subject, acquired through the attribute information acquisition unit, to the respective matching information categories and tallies the number of data items in each category; and
     the matching information category specifying unit is configured to rank and specify the matching information categories in the storage unit in descending order of the number of answer data items.
  6.  The information providing apparatus according to any one of claims 1 to 5, wherein, when the subject selects a plurality of selected face parts or a plurality of face part attributes for a selected face part, or when a combination of face part attributes and questionnaire information is selected, the matching information category specifying unit reads from the storage unit at least one tendency, defined for each face part attribute and exhibiting linearity, specifies at least one matching information category associated with each tendency on the basis of each comparison result, and, for the plurality of subject information groups, superimposes the respective matching information categories with their face part attribute reference values aligned, narrows them down to the overlapping ones, and specifies those as the matching information categories.
  7.  An information providing method comprising:
     an attribute information acquisition step of acquiring at least one selected face part selected by a subject from among face parts including eyebrows, eyes, nose, mouth, skin, and hair constituting the subject's own face, at least one face part attribute selected from shape/size, color, and texture for that face part, and a predetermined face part attribute measurement result defined for that face part attribute;
     an evaluation step of receiving the selected face part and the face part attribute measurement result as input, comparing the face part attribute reference value read from a storage unit on the basis of the former with the latter, and sending the comparison result to the matching information category specifying step;
     a matching information category specifying step of reading from the storage unit at least one tendency related to each face part attribute and defined as a linear axis, and specifying at least one matching information category associated with each tendency on the basis of the comparison result;
     a subject information selection step of selecting at least one piece of subject information belonging to the specified matching information category from the subject information group in the storage unit; and
     an output step of outputting each selected piece of subject information.
  8.  The information providing method according to claim 7, wherein the storage unit further stores information on a reference face shape including a reference face size;
     when the selected face part is at least one of the eyebrows, eyes, nose, and mouth and the face part attribute is shape/size, the attribute information acquisition step acquires, as face part attribute measurement results, the subject's face size obtained by the same measurement as the reference face size together with predetermined dimensions of each selected face part, and sends them to the evaluation step; and
     in the evaluation step, each of the acquired face part attribute measurement values of the subject, excluding the face size, is multiplied by a correction value obtained by dividing the reference face size by the face size.
  9.  The information providing method according to claim 7 or 8, wherein the attribute information acquisition step further includes an image data analysis step and acquires face image data instead of or together with the face part attribute measurement result, and a predetermined analysis is performed in the image data analysis step.
  10.  The information providing method according to claim 9, wherein, when the selected face part is the hair and the face part attribute is texture, the image data analysis step extracts the cross-sectional shape of the hair of the selected face part and sends it to the evaluation step as a face part attribute measurement result, and the evaluation step determines whether the face part attribute measurement result approximates a perfect circle or a substantially triangular shape.
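One conventional way to decide whether an extracted cross-section is closer to a circle or a triangle is a circularity score, 4πA/P², which is 1.0 for a perfect circle and about 0.60 for an equilateral triangle. The patent does not specify the classifier; the metric and threshold below are assumptions for illustration.

```python
import math

def circularity(points):
    """4*pi*area / perimeter**2 for a polygonal outline:
    1.0 for a perfect circle, lower for angular shapes."""
    n = len(points)
    area = 0.5 * abs(sum(points[i][0] * points[(i + 1) % n][1]
                         - points[(i + 1) % n][0] * points[i][1]
                         for i in range(n)))  # shoelace formula
    perimeter = sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))
    return 4 * math.pi * area / perimeter ** 2

def classify_cross_section(points, threshold=0.8):
    """Assumed rule: high circularity -> round, otherwise triangular."""
    return "round" if circularity(points) >= threshold else "triangular"

# A regular 32-gon stands in for a round cross-section; an equilateral
# triangle for an angular one.
circle = [(math.cos(2 * math.pi * k / 32), math.sin(2 * math.pi * k / 32))
          for k in range(32)]
triangle = [(0.0, 1.0), (-0.866, -0.5), (0.866, -0.5)]
print(classify_cross_section(circle), classify_cross_section(triangle))
```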
  11.  The information providing method according to any one of claims 7 to 10, wherein the storage unit further holds questionnaire information associated with each matching information category, the categories being partitioned by two predetermined tendencies orthogonal to each other at the face part attribute reference value, and, when the subject selects the questionnaire, the questionnaire information is called from the storage unit and displayed on the output unit,
    the evaluation step allocates, to each matching information category, the subject's plurality of answer data to the questionnaire information acquired through the attribute information acquisition step, and totals the number of data items per category, and
    the matching information category identification step ranks and identifies the matching information categories in the storage unit in descending order of the number of answer data items.
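The tallying and ranking in claim 11 reduce to counting answers per category and sorting by count. A minimal sketch with hypothetical category labels (the patent does not name its categories):

```python
from collections import Counter

def rank_categories(answers):
    """answers: one matching-information-category label per questionnaire
    answer. Returns the categories ranked by descending answer count."""
    counts = Counter(answers)
    return [category for category, _ in counts.most_common()]

# Six hypothetical answers allocated to three categories.
print(rank_categories(["cool", "warm", "cool", "soft", "cool", "warm"]))
# ['cool', 'warm', 'soft']
```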
  12.  The information providing method according to any one of claims 7 to 11, wherein, when the subject selects a plurality of selected face parts or a plurality of face part attributes for a selected face part, or selects a combination of face part attributes and questionnaire information, the matching information category identification step calls from the storage unit at least one linear tendency defined for each face part attribute, identifies at least one matching information category associated with each tendency based on each comparison result, and, for the plurality of subject information groups, aligns the face part attribute reference values, superimposes the identified matching information categories, narrows them down to the overlapping ones, and identifies those as the matching information categories.
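The narrowing-down in claim 12 is, in effect, a set intersection: each selected attribute (or the questionnaire) yields its own set of candidate categories, and only the categories common to all of them survive. A sketch with hypothetical labels:

```python
def narrow_categories(per_attribute_matches):
    """Superimpose the category sets matched via each selected attribute
    and keep only the overlapping (common) categories."""
    sets = [set(matches) for matches in per_attribute_matches]
    return set.intersection(*sets)

# Hypothetical example: categories matched via eye shape, eyebrow shape,
# and the questionnaire, narrowed to their overlap.
print(sorted(narrow_categories([
    {"cool", "fresh", "soft"},
    {"cool", "soft"},
    {"cool", "soft", "elegant"},
])))  # ['cool', 'soft']
```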
  13.  A program for causing a computer to execute the information providing method according to any one of claims 7 to 12.
PCT/JP2015/085022 2015-12-15 2015-12-15 Information provision device and information provision method WO2017103985A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2015/085022 WO2017103985A1 (en) 2015-12-15 2015-12-15 Information provision device and information provision method
JP2016543241A JP6028188B1 (en) 2015-12-15 2015-12-15 Information providing apparatus and information providing method
CN201580084856.4A CN108292418B (en) 2015-12-15 2015-12-15 Information providing device and information providing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/085022 WO2017103985A1 (en) 2015-12-15 2015-12-15 Information provision device and information provision method

Publications (1)

Publication Number Publication Date
WO2017103985A1 true WO2017103985A1 (en) 2017-06-22

Family

ID=57326660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/085022 WO2017103985A1 (en) 2015-12-15 2015-12-15 Information provision device and information provision method

Country Status (3)

Country Link
JP (1) JP6028188B1 (en)
CN (1) CN108292418B (en)
WO (1) WO2017103985A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749634A (en) * 2020-12-28 2021-05-04 广州星际悦动股份有限公司 Control method and device based on beauty equipment and electronic equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
JPH11265443A (en) * 1997-12-15 1999-09-28 Kao Corp Impression evaluation method and device
JP2001346627A (en) * 2000-06-07 2001-12-18 Kao Corp Make-up advice system
JP2002132916A (en) * 2000-10-26 2002-05-10 Kao Corp Method for providing advice on makeup
JP2006024203A * 2004-06-10 2006-01-26 Miyuki Iino Coordination support system
JP2010140100A (en) * 2008-12-09 2010-06-24 Yurakusha:Kk Face pattern analysis system
JP2013501292A * 2009-08-04 2013-01-10 Vesalis Image processing method and image processing apparatus for correcting a target image with respect to a reference image

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP5347549B2 (en) * 2009-02-13 2013-11-20 ソニー株式会社 Information processing apparatus and information processing method
JP5792985B2 (en) * 2011-04-20 2015-10-14 キヤノン株式会社 Information processing apparatus, information processing apparatus control method, and program
JP5319829B1 (en) * 2012-07-31 2013-10-16 楽天株式会社 Information processing apparatus, information processing method, and information processing program

Also Published As

Publication number Publication date
JP6028188B1 (en) 2016-11-16
CN108292418B (en) 2022-04-26
JPWO2017103985A1 (en) 2017-12-14
CN108292418A (en) 2018-07-17

Similar Documents

Publication Publication Date Title
JP6715152B2 (en) Care information acquisition method, care information sharing method and electronic device for these methods
JP5290585B2 (en) Skin color evaluation method, skin color evaluation device, skin color evaluation program, and recording medium on which the program is recorded
US9760935B2 (en) Method, system and computer program product for generating recommendations for products and treatments
US10255482B2 (en) Interactive display for facial skin monitoring
US9445087B2 (en) Systems, devices, and methods for providing products and consultations
US9563975B2 (en) Makeup support apparatus and method for supporting makeup
JP6128309B2 (en) Makeup support device, makeup support method, and makeup support program
JP4683200B2 (en) Automatic hair region extraction method
WO2018076622A1 (en) Image processing method and device, and terminal
JP2012113747A (en) Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program
KR20100110793A (en) Makeup method, makeup simulation device, adn makeup simulation program
JP2010017360A (en) Game device, game control method, game control program, and recording medium recording the program
JP2009082338A (en) Skin discrimination method using entropy
JP3920747B2 (en) Image processing device
JP6028188B1 (en) Information providing apparatus and information providing method
JP6165187B2 (en) Makeup evaluation method, makeup evaluation system, and makeup product recommendation method
JP6128356B2 (en) Makeup support device and makeup support method
JP4372494B2 (en) Image processing apparatus, image processing method, program, and recording medium
JP6209298B1 (en) Information providing apparatus and information providing method
JP6128357B2 (en) Makeup support device and makeup support method
JP2004326488A (en) Simulation image producing server, simulation image producing system, simulation image producing method and program
JP2017016418A (en) Hairstyle proposal system
JP2019107071A (en) Makeup advice method
JP2024500224A (en) Method and apparatus for hair styling analysis
TW201028963A (en) Evaluation method of skin color, evaluation apparatus of skin color, evaluation program of skin color and recording media thereof

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2016543241

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15910676

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15910676

Country of ref document: EP

Kind code of ref document: A1