WO2020115922A1 - Beauty promotion device, beauty promotion system, beauty promotion method, and beauty promotion program - Google Patents


Info

Publication number
WO2020115922A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
face
unit
feature amount
vertex
Prior art date
Application number
PCT/JP2019/012588
Other languages
French (fr)
Japanese (ja)
Inventor
佐藤 達也
えな 鳴海
Original Assignee
B-by-C株式会社
Priority date
Filing date
Publication date
Application filed by B-by-C株式会社 filed Critical B-by-C株式会社
Priority to JP2019557510A priority Critical patent/JP6710883B1/en
Publication of WO2020115922A1 publication Critical patent/WO2020115922A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a beauty promotion device, a beauty promotion system, a beauty promotion method, and a beauty promotion program.
  • Patent Document 1 discloses a beauty promotion system that evaluates the condition of the user's skin or scalp from the imaged data of the user.
  • the conventional beauty promotion system evaluates the local condition as the condition of the user's skin or scalp, and does not evaluate the change in the proportion of the face due to deterioration over time. Therefore, there is room for improvement in quantitatively evaluating the proportions of the face and contributing to promoting beauty.
  • an object of the present invention is to provide a beauty promotion device that can quantitatively evaluate changes in facial proportions due to aging deterioration and improvement measures, and contribute to beauty promotion.
  • the beauty promotion device includes: a vertex recognition unit that recognizes, from the imaged data of the user's face, the position of each of at least one fixed point specified depending on the skeleton of the face and at least one movable point specified depending on the muscles and fat of the face; a geometric shape definition unit that defines a geometric shape whose periphery includes the vertices whose positions are recognized by the vertex recognition unit; a feature amount calculation unit that calculates a feature amount indicating the length or area of the shape; and a feature amount comparison unit that compares the feature amount calculated by the feature amount calculation unit with a reference value known as a value corresponding to the geometric shape.
  • the beauty promotion device may include: a vertex recognition unit that recognizes, from the imaged data of the user's face, the positions of two fixed points specified depending on the skeleton of the face and one movable point specified depending on the muscles and fat of the face; an area demarcation unit that demarcates a triangular demarcated area by straight lines connecting the vertices whose positions are recognized by the vertex recognition unit; an area calculation unit that calculates the area of the demarcated area; and an area comparison unit that compares the area calculated by the area calculation unit with a reference area known as the area of a region corresponding to the demarcated area.
  • the beauty promotion device may further include a display processing unit that outputs information indicating the comparison result obtained by the area comparison unit comparing the area of the demarcated region with the reference area.
  • the area demarcation unit may demarcate a pair of left and right demarcation areas based on the midline of the face.
  • the vertex recognition unit may recognize the deep nose point and the temple vertex as the two fixed points, and the vertex on the cheek as the one movable point.
  • the vertex recognition unit may evaluate the imaged data three-dimensionally, recognizing the most recessed part of the nose root of the face as the deep nose point, the most recessed part of the temple of the face as the temple vertex, and the most raised part of the upper cheek of the face, near the vertical line outside the pupil, as the vertex on the cheek.
  • the area demarcation unit may demarcate two types of demarcated areas at intervals in the vertical direction of the face.
  • the vertex recognition unit may recognize the subnasal point and the inferior ear point as the two fixed points, and the lower cheek apex as the one movable point.
  • the vertex recognition unit may evaluate the imaged data three-dimensionally, recognizing the most recessed part under the nose of the face as the subnasal point, the most recessed part of the face located below the ear as the inferior ear point, and, in the lower cheek of the face near the vertical line outside the pupil, the most raised part of the lateral bulge of the corner of the mouth as the lower cheek apex.
  • the area comparison unit may use, as the reference area, the area of the user's demarcated region at a time point a certain period before the imaging data was captured.
  • the area comparison unit may also use, as the reference area, the area of the demarcated area in an ideal model of the face desired by the user.
  • a beauty promotion system includes any one of the beauty promotion devices described above and an imaging device having an imaging unit that captures the user's face. the imaging device further includes a transmitting unit that transmits the imaging data captured by the imaging unit, and the beauty promotion device further includes a receiving unit that receives the imaging data.
  • the imaging device may be a smart mirror in which the imaging unit is built in and whose display surface is mirror-finished, capable of imaging the face of the user facing the display surface and displaying the imaging data on that surface.
  • the beauty promotion method includes: a vertex recognition step of recognizing, from the imaging data of the user's face, the positions of two fixed points specified depending on the skeleton of the face and one movable point specified depending on the muscles and fat of the face; an area demarcation step of demarcating a triangular demarcated area by straight lines connecting the recognized vertices; an area calculation step of calculating the area of the demarcated area; and an area comparison step of comparing the area calculated in the area calculation step with a reference area known as the area of a region corresponding to the demarcated area.
  • a beauty promotion program causes a computer to realize: a vertex recognition function that recognizes, from the imaged data of the user's face, the positions of two fixed points specified depending on the skeleton of the face and one movable point specified depending on the muscles and fat of the face; an area demarcation function that demarcates a triangular demarcated area by straight lines connecting the vertices whose positions are recognized by the vertex recognition function; an area calculation function that calculates the area of the demarcated area; and an area comparison function that compares the calculated area with a reference area.
  • the vertex recognition unit recognizes, from the imaged data of the user's face, the position of each of the two fixed points specified depending on the skeleton of the face and the one movable point specified depending on the muscles and fat of the face.
  • the area demarcation unit demarcates the demarcated area having a triangular shape by the straight line connecting the specified vertices, and the area calculation unit calculates the area of the demarcated area.
  • the area comparison unit compares the area of the demarcated region with the reference area. This makes it possible to quantitatively evaluate changes in the proportions of the face due to aging deterioration and improvement measures, and contribute to promoting beauty.
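The flow just described (recognize vertices, demarcate a triangle, compute its area, compare with a reference) can be sketched as follows. The coordinates, the reference area, and the helper names are illustrative assumptions, not taken from the patent; the actual vertex recognition from imaging data is omitted.

```python
import numpy as np

def triangle_area_3d(p1, p2, p3):
    """Area of the triangle spanned by three 3D vertices,
    computed as half the magnitude of the cross product."""
    a = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    b = np.asarray(p3, dtype=float) - np.asarray(p1, dtype=float)
    return 0.5 * np.linalg.norm(np.cross(a, b))

# Hypothetical vertex positions (mm): two fixed points Pf and one movable point Pm.
deep_nose_point = (0.0, 0.0, 0.0)     # P1, fixed
temple_vertex   = (60.0, 0.0, -10.0)  # P2, fixed
cheek_vertex    = (35.0, -40.0, 5.0)  # P3, movable

area = triangle_area_3d(deep_nose_point, temple_vertex, cheek_vertex)
reference_area = 1400.0  # e.g. the area measured in a previous session

# Comparison result as a signed percentage change against the reference.
change_pct = (area - reference_area) / reference_area * 100.0
print(f"area = {area:.1f} mm^2, change vs. reference = {change_pct:+.1f}%")
```

A negative change means the demarcated area shrank relative to the reference, which under this embodiment's geometry corresponds to the movable point having moved upward.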
  • FIG. 3 is a diagram showing an example of each vertex recognized by the vertex recognition unit shown in FIG. 2, and is a (a) front view and (b) side view of imaging data.
  • FIG. 4 is a block diagram showing a configuration example of the portable terminal shown in FIG. 1.
  • FIG. 5 is a diagram showing the processing flow in the beauty promotion device.
  • FIG. 6 is a diagram showing an example of the process by which the vertex recognition unit recognizes the vertex on the cheek.
  • regarding Modification 3, a diagram showing an example of each vertex recognized by the vertex recognition unit: (a) a front view and (b) a side view of the imaging data.
  • a diagram explaining the usage conditions of the beauty promotion system according to the second embodiment of the present invention.
  • a block diagram showing a configuration example of the beauty promotion device according to the second embodiment.
  • FIG. 1 is a schematic diagram showing a configuration example of a beauty promotion system 1 according to an embodiment of the present invention.
  • the beauty promotion system 1 performs data processing on the imaging data of the face of the user 50 and quantitatively evaluates the state of the face of the user 50, in order to manage and counteract aging deterioration.
  • the beauty promotion system 1 includes an imaging device 10 and a beauty promotion device 30 that are connected to each other via a network 20.
  • the mobile terminal 40 of the user 50 is connected to the network 20.
  • the mobile terminal 40 of the user 50 does not have to be connected to the network 20.
  • the beauty promotion system 1 of the present invention is used in a store that provides beauty-related services, such as a beauty salon, to image the face of the user 50, show the results of quantitatively evaluating the face, and propose measures to be taken in the future to maintain and improve its condition.
  • the operation may be performed by the operator 60 of the store, or by the user 50 themselves.
  • the image capturing device 10 is not particularly limited as long as it is a device that can capture the captured data by capturing the face of the user 50.
  • the image pickup device 10 includes an image pickup unit having an image pickup element such as a CMOS or a CCD.
  • the imaging device 10 also includes a transmission unit that transmits the imaging data captured by the imaging unit to the beauty promotion device.
  • the imaging data acquired by the imaging device 10 may be 2D data or 3D data.
  • in the present embodiment, the imaging unit acquires the imaging data as 3D data. That is, although the imaging device 10 is illustrated in a simplified manner in FIG. 1, it may be, for example, a 3D camera in which a plurality of imaging units are arranged at intervals, or a configuration provided with one imaging unit and a distance sensor.
  • the network 20 is a network for mutually connecting the imaging device 10, the beauty promotion device 30, and the mobile terminal 40, and is, for example, a wireless network or a wired network.
  • examples of the network 20 include a wireless LAN (WLAN), a wide area network (WAN), ISDN (Integrated Services Digital Network), LTE (Long Term Evolution), LTE-Advanced, the public switched telephone network (PSTN), Bluetooth (registered trademark), Bluetooth Low Energy, an optical line, ADSL (Asymmetric Digital Subscriber Line), and a satellite communication network.
  • the network 20 is not limited to these examples, and may be any network.
  • the network 20 may be, for example, NB-IoT (Narrow Band IoT) or eMTC (enhanced Machine Type Communication).
  • NB-IoT and eMTC are wireless communication systems for IoT, and are networks capable of long-distance communication with low cost and low power consumption.
  • the network 20 may be a combination of these. Further, the network 20 may include a plurality of different networks combining these examples.
  • the network 20 may include an LTE wireless network and a wired network such as an intranet that is a closed network.
  • FIG. 2 is a block diagram showing a configuration example of the beauty promotion device 30.
  • the beauty promotion device 30 includes a device-side communication unit 31, a data storage unit 32, a device processing unit 33, and a device-side display unit 34.
  • the beauty promotion device 30 is an information processing device that analyzes the state of the face of the user 50 from the imaged data of the face of the user 50.
  • a personal computer is used in the present embodiment.
  • the device-side communication unit 31 is a communication interface that transmits and receives various data via the network 20.
  • Various types of data include image pickup data and data indicating a comparison result. That is, the device-side communication unit 31 functions as a reception unit that receives the imaging data transmitted from the transmission unit of the imaging device 10.
  • the data storage unit 32 has a function of storing various control programs necessary for the device processing unit 33 to operate and various data received by the device-side communication unit 31 from the outside.
  • the data storage unit 32 also stores at least one piece of reference area data.
  • the data storage unit 32 is realized by various storage media such as HDD, SSD, and flash memory.
  • by executing the control program stored in the data storage unit 32, the device processing unit 33 realizes each function to be realized as the beauty promotion system 1.
  • the functions referred to here include a vertex recognition function, a region demarcation function, an area calculation function, an area comparison function, and a result display function.
  • the device-side display unit 34 is a monitor device that displays the content of the operation of the beauty promotion device 30 and the result of the processing.
  • the device processing unit 33 is a computer that controls each unit of the beauty promotion device 30, and may be, for example, a central processing unit (CPU), a microprocessor, an ASIC, an FPGA, or the like.
  • the device processing unit 33 is not limited to these examples, and may be any device as long as it is a computer that controls each unit of the beauty promotion device 30.
  • the device processing unit 33 includes a vertex recognition unit 33A, a region demarcation unit 33B, an area calculation unit 33C, an area comparison unit 33D, and a display processing unit 33E.
  • the vertex recognition unit 33A recognizes the positions of the two fixed points Pf and the one movable point Pm from the imaged data of the face of the user 50.
  • the fixed point Pf is a vertex specified depending on the skeleton of the face. Since the fixed point Pf depends on the skeleton, its change in position over time is slight. Note that "fixed" here does not mean that the position does not change at all, but that the amount of change is extremely small compared with the movable point Pm described later.
  • the movable point Pm is a vertex specified depending on the muscles and fat of the face. For example, its position changes downward as the muscles of the face weaken with age or fat accumulates on the face. Conversely, strengthening the facial muscles through stimulation or reducing facial fat moves the position of the movable point Pm upward. Such changes in the position of the movable point Pm change the proportions of the face, which greatly affects the impression the face gives to others.
  • each vertex recognized by the vertex recognition unit 33A in the present embodiment will be described with reference to FIG. 3. FIGS. 3A and 3B show the vertices recognized by the vertex recognition unit 33A, as a front view and a side view of the imaged data. Note that this is merely an example, and the vertices recognized by the vertex recognition unit 33A can be changed arbitrarily. That is, in consideration of the structure of the skeleton of the user 50, the way the muscles attach, and the like, facial vertices that are easy to recognize can be used for the evaluation.
  • the vertex recognition unit 33A recognizes two fixed points Pf and one movable point Pm for one demarcated area.
  • as the two fixed points Pf, the deep nose point P1 and the temple apex P2 are recognized, and the vertex P3 on the cheek is recognized as the one movable point Pm.
  • the deep nose point P1 is shared by the pair of left and right demarcated regions. A specific method of identifying each vertex will be described later.
  • the vertical positions of the deep nose point P1 and the temple apex P2 are equal to each other.
  • the vertex P3 on the cheek is located below the deep nose point P1 and the temple apex P2.
  • the vertex recognition unit 33A recognizes the subnasal point P4 and the inferior ear point P5 as the two fixed points Pf, and the lower cheek apex P6 as the one movable point Pm.
  • the subnasal point P4 is shared by the pair of left and right demarcated areas. A specific method of identifying each vertex will be described later.
  • the vertical positions of the subnasal point P4 and the inferior ear point P5 are equal to each other.
  • the lower cheek apex P6 is located below the subnasal point P4 and the inferior ear point P5.
  • to specify the position of each vertex, a method of specifying absolute coordinates in the spatial coordinate system provided for the imaged data may be used, or a method of specifying relative coordinates based on any one of the three vertices defining the demarcated area may be used. Since the imaged data is 3D data, the coordinate values are also expressed three-dimensionally.
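For illustration, relative coordinates can be obtained by subtracting the position of one chosen vertex from the other two; the numbers below are made up, and the vertex names are those of the upper demarcated area. Because the triangle is unchanged by translation, either convention yields the same demarcated area.

```python
import numpy as np

# Hypothetical absolute 3D coordinates (mm) in the imaging data's spatial frame.
p1 = np.array([102.0, 55.0, 310.0])  # deep nose point (chosen as the origin)
p2 = np.array([162.0, 55.0, 300.0])  # temple vertex
p3 = np.array([137.0, 15.0, 315.0])  # vertex on the cheek

# Relative coordinates based on one of the three vertices (here p1):
# independent of where the head sits in the camera's coordinate system.
p2_rel = p2 - p1
p3_rel = p3 - p1
print(p2_rel, p3_rel)
```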
  • the area demarcation unit 33B demarcates a demarcation area having a triangular shape by a straight line connecting the vertices whose positions are recognized by the vertex recognition unit 33A. Further, the area demarcation unit 33B demarcates a pair of left and right demarcation areas with the midline O1 of the face as a reference.
  • the demarcated area defined by the area demarcation unit 33B may be a two-dimensional area or a three-dimensional area. In this embodiment, the defined area is a three-dimensional area.
  • the area demarcation unit 33B demarcates two types of demarcated areas at intervals in the vertical direction of the face.
  • the demarcated region located on the upper side is referred to as the upper demarcated area A1, and the demarcated region located on the lower side is referred to as the lower demarcated area A2. That is, the area demarcation unit 33B demarcates a pair of left and right upper demarcated areas A1 and lower demarcated areas A2.
  • the upper demarcated area A1 and the lower demarcated area A2 are spaced vertically so that the evaluation covers the entire face in the vertical direction, which increases its accuracy. Therefore, there is no problem even if the upper demarcated area A1 and the lower demarcated area A2 partially overlap each other.
  • the area calculation unit 33C calculates the area of the demarcated region, using the coordinate data of each vertex specified by the area demarcation unit 33B.
  • the area comparison unit 33D compares the area of the defined area calculated by the area calculation unit 33C with a reference area known as the area of the area corresponding to the defined area.
  • as the reference area, the area comparison unit 33D can use, for example, the area of the demarcated region defined from imaged data of the user 50 captured a certain period before the current imaged data, that is, in the past. The area comparison unit 33D can also use, as the reference area, the area of the demarcated region in an ideal model of the face desired by the user 50. As described above, the reference area can be set arbitrarily as long as it can be compared with the current area of the demarcated region.
  • the ideal model is created using past imaging data. About 100 pieces of original data, in which an ideal demarcated area is visually specified on past imaging data, are prepared, and the ideal model can be created by performing deep learning on this original data.
  • in the upper demarcated area A1, the vertex P3 on the cheek, which is the movable point Pm, is located below the deep nose point P1 and the temple apex P2, which are the fixed points Pf. Likewise, in the lower demarcated area A2, the lower cheek apex P6, which is the movable point Pm, is located below the subnasal point P4 and the inferior ear point P5.
  • therefore, if the area of the demarcated region is smaller than the reference area, which is the area measured last time, the movable point Pm has moved upward. That is, the proportion of the face has improved due to strengthening of the facial muscles or a decrease in facial fat.
  • conversely, if the area of the demarcated region is larger than the reference area, the movable point Pm has moved downward. That is, the proportion of the face has deteriorated because the facial muscles have weakened or facial fat has increased. In this way, by checking the amount of change in the demarcated area, the user 50 can quantitatively grasp whether the proportion of the face is improving or deteriorating.
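A minimal sketch of this comparison logic, assuming the embodiment's geometry in which the movable point lies below the fixed points (the function name and message strings are invented for illustration):

```python
def interpret_change(current_area: float, reference_area: float) -> str:
    """Interpret a demarcated-area change when the movable point lies
    below the fixed points: a smaller area means the movable point
    moved upward (improvement); a larger area means it moved downward."""
    change_pct = (current_area - reference_area) / reference_area * 100.0
    if change_pct < 0:
        return f"improved: area reduced by {-change_pct:.1f}%"
    if change_pct > 0:
        return f"deteriorated: area increased by {change_pct:.1f}%"
    return "no change"

print(interpret_change(1078.0, 1400.0))  # a 23% reduction, read as improvement
```

If the movable point were instead located above the fixed points, the two branches would simply be swapped, as the description notes below.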
  • the case where the movable point Pm is located below the fixed points Pf in each of the upper demarcated area A1 and the lower demarcated area A2 has been described, but the invention is not limited to this mode.
  • the movable point Pm may instead be located above the fixed points Pf.
  • in that case, the result of comparing the area of the demarcated area with the reference area is interpreted in the opposite way to the above description: when the area of the demarcated region becomes larger than the reference area, the proportion of the face is improving, and when it becomes smaller, the proportion of the face is deteriorating.
  • when the area of the demarcated region in the ideal model of the face desired by the user 50 is set as the reference area, it is possible to judge whether the proportion of the face is improving by checking how close the current area is to the reference area.
  • the display processing unit 33E causes the comparison result obtained by the area comparison unit 33D, comparing the area of the demarcated region with the reference area, to be displayed on the device-side display unit 34 and on the terminal-side display unit 45 of the mobile terminal 40, which will be described later.
  • a specific example of the display content displayed by the display processing unit 33E will be described later.
  • FIG. 4 is a block diagram showing a configuration example of the mobile terminal 40.
  • the mobile terminal 40 includes a terminal side communication unit 41, a terminal storage unit 42, a terminal processing unit 43, a camera 44, and a terminal side display unit 45.
  • the mobile terminal 40 is a terminal device that a user carries and uses, such as a so-called smartphone or tablet.
  • the terminal-side communication unit 41 is a communication interface that transmits and receives various data via the network 20.
  • Various types of data include image pickup data and data indicating a comparison result. That is, the terminal-side communication unit 41 receives various types of information from the beauty promotion device 30.
  • the terminal storage unit 42 has a function of storing various control programs and various data necessary for the terminal processing unit 43 to operate.
  • the terminal storage unit 42 is realized by various storage media such as HDD, SSD, and flash memory.
  • the terminal processing unit 43 may realize at least a part of each function to be realized as the beauty promotion system 1.
  • the terminal processing unit 43 is a computer that controls each unit of the mobile terminal 40, and may be, for example, a central processing unit (CPU), a microprocessor, an ASIC, an FPGA, or the like.
  • the terminal processing unit 43 is not limited to these examples, and may be any computer as long as it controls each unit of the mobile terminal 40.
  • the terminal processing unit 43 includes a reception unit 43A.
  • the reception unit 43A receives the imaging data and the comparison result transmitted from the beauty promotion device 30, and displays them on the terminal side display unit 45.
  • the camera 44 can take an image by the operation of the user 50.
  • the imaging data may be acquired by the camera 44 of the mobile terminal 40 and transmitted to the beauty promotion device 30.
  • the terminal-side display unit 45 is a monitor device that displays information indicating the comparison result processed by the beauty promotion device 30.
  • the terminal side display unit 45 can display the imaging data together with the comparison result.
  • FIG. 5 is a diagram showing a processing flow in the beauty promotion system 1
  • FIG. 6 is a schematic diagram in processing in which the vertex recognition unit 33A recognizes the vertex P3 on the cheek.
  • the imaging device 10 performs an imaging step (S501) of imaging the face of the user 50.
  • the imaging device 10 acquires 3D data.
  • 3D data can be acquired by capturing an image with a 3D camera.
  • the vertex recognition unit 33A performs the vertex recognition step (S502) of recognizing each vertex using the imaging data transmitted from the imaging device 10.
  • in the vertex recognition step, the positions of the two fixed points Pf and the one movable point Pm are recognized as the three vertices forming one demarcated area.
  • the vertex recognition unit 33A three-dimensionally evaluates the imaged data and recognizes each vertex.
  • for the deep nose point P1, which forms one fixed point Pf among the three vertices forming the upper demarcated area A1, the most recessed part of the nose root of the face is identified and recognized as the deep nose point P1.
  • for the temple apex P2, which forms the other fixed point Pf, the most recessed part of the temple of the face is recognized as the temple apex P2.
  • alternatively, the temple apex P2 may be the portion of the lateral outer end of the face, in front view, through which a straight line connecting the deep nose point P1 and the center of the pupil or the inner canthus passes.
  • for the vertex P3 on the cheek, which forms the movable point Pm, the most raised portion of the upper cheek of the face, near the vertical line outside the pupil, is recognized as the vertex P3 on the cheek.
  • alternatively, contour lines may be projected onto the imaging data and the most raised portion recognized as the vertex P3 on the cheek.
  • for the subnasal point P4, which forms one fixed point Pf among the three vertices forming the lower demarcated area A2, the most recessed part under the nose of the face is recognized as the subnasal point P4.
  • for the inferior ear point P5, which forms the other fixed point Pf, the most recessed portion of the part of the face located under the ear is recognized as the inferior ear point P5.
  • for the lower cheek apex P6, which forms the movable point Pm, the most raised portion of the lateral bulge of the corner of the mouth, in the lower cheek of the face near the vertical line outside the pupil, is recognized as the lower cheek apex P6.
  • here too, contour lines may be projected onto the imaging data to recognize the most raised portion as the lower cheek apex P6.
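As a toy sketch of "the most raised portion near the vertical line outside the pupil": restrict a 3D point cloud of the cheek surface to a narrow band around that vertical line and take the point with the greatest elevation toward the camera. The band width, axis conventions, and coordinates below are assumptions for illustration only.

```python
import numpy as np

def most_raised_point(points: np.ndarray, line_x: float, band: float = 2.0) -> np.ndarray:
    """Among points whose x lies within +/- band of the vertical line
    x = line_x, return the one with the largest z (closest to the camera,
    i.e. the most raised part of the cheek)."""
    in_band = points[np.abs(points[:, 0] - line_x) <= band]
    return in_band[np.argmax(in_band[:, 2])]

# Toy cheek surface: x (left-right), y (up-down), z (toward camera), in mm.
cloud = np.array([
    [33.0, -38.0, 4.0],
    [35.0, -40.0, 6.5],  # the bulge apex near the line x = 35
    [36.0, -44.0, 5.0],
    [50.0, -40.0, 7.0],  # higher z, but far from the vertical line
])
apex = most_raised_point(cloud, line_x=35.0)
print(apex)  # the bulge apex on the band around x = 35
```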
  • image processing may be performed so as to identify each vertex forming each demarcated region.
  • the position of each vertex may also be specified by superimposing the latest imaging data on past imaging data.
  • alternatively, the operator 60 may specify the position of each vertex by selecting an appropriate location for each vertex on the imaged data.
  • the area demarcation unit 33B performs an area demarcation step (S503) of demarcating a demarcated area using the vertex data identified in the vertex recognition step.
  • in the area demarcation step, a triangular demarcated area is defined by straight lines connecting the respective vertices.
  • the area calculation unit 33C performs an area calculation step (S504) of calculating the area of the demarcated area demarcated in the area demarcation step.
  • in the area calculation step, the area of the demarcated region is calculated using the coordinate data of each vertex.
  • the area comparison unit 33D performs an area comparison step (S505) of comparing the area of the demarcated region calculated in the area calculation step with the reference area.
  • in the area comparison step, the area of the demarcated area is compared with a reference area known as the area of the region corresponding to the demarcated area.
  • the area of the demarcated region obtained from the past measurement results is set as the reference area.
  • the display processing unit 33E performs a display processing step (S506) of outputting information indicating the comparison result.
  • in the display processing step, the result of the comparison by the area comparison unit 33D between the area of the demarcated region and the reference area is displayed on the device-side display unit 34 and the terminal-side display unit 45.
  • the comparison result may include information about the result of this time and information that suggests a measure (a facial massage or the like) that the user 50 will work on in the future.
  • the comparison result may not be displayed on the terminal side display unit 45.
  • FIG. 7 is a diagram showing an example of display contents by the display processing unit 33E, which is (a) imaging data two months ago, and (b) imaging data at the time of evaluation.
  • FIG. 8 is a diagram showing another example of the display content by the display processing unit 33E, which is (a) imaging data two months ago, and (b) imaging data at the time of evaluation.
  • the area of the upper demarcation area A1 is reduced by about 23% and the area of the lower demarcation area A2 is reduced by about 33% as compared with two months ago.
  • youthful and plump feeling appears, and it is recognized that the appearance impression is improved.
  • the area of the upper demarcated area A1 is reduced by about 21.5% and the area of the lower demarcated area A2 is reduced by about 25% compared to two months ago. It is recognized that this gives a youthful and plump look and, as a result, a well-balanced and gentle appearance, improving the overall impression.
  • the vertex recognition unit 33A recognizes, from the imaged data of the face of the user 50, the respective positions of two fixed points Pf specified depending on the skeleton of the face and one movable point Pm specified depending on the muscle and fat of the face.
  • the area demarcation unit 33B demarcates a demarcated area having a triangular shape by the straight line connecting the specified vertices, and the area calculation unit 33C calculates the area of the demarcated area. Then, the area comparison unit 33D compares the area of the demarcated region with the reference area. This makes it possible to quantitatively evaluate changes in the proportions of the face due to aging deterioration and improvement measures, and contribute to promoting beauty.
  • since the demarcated area is demarcated by the two fixed points Pf and the one movable point Pm, variation in recognizing the position of the hard-to-identify movable point Pm can be suppressed compared with, for example, a configuration in which the demarcated area is demarcated by two or three movable points Pm, enabling accurate evaluation.
  • compared with the case where the position of the movable point Pm is evaluated by its distance from the fixed point Pf, handling an area increases the numerical values involved and therefore the magnitude of the observed change.
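The point above, that evaluating an area yields larger numerical changes than evaluating a distance, can be illustrated with a small sketch (all coordinates are hypothetical; a 10 px lift of the cheek vertex is assumed):

```python
import math

# Two fixed points Pf (skeleton) and the movable point Pm (muscle/fat),
# before and after a hypothetical 10 px lift of the cheek vertex.
pf1, pf2 = (0.0, 0.0), (100.0, 0.0)
pm_before, pm_after = (50.0, 100.0), (50.0, 90.0)

def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

# Distance-based evaluation: the handled values change by less than 10 px.
d_change = math.dist(pf1, pm_before) - math.dist(pf1, pm_after)

# Area-based evaluation: the same movement changes the value by hundreds of px^2.
a_change = triangle_area(pf1, pf2, pm_before) - triangle_area(pf1, pf2, pm_after)
```

Here `d_change` is under 10 px while `a_change` is 500 px², so the same facial change produces a far more conspicuous number for the user.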
  • this makes it easier for the user 50 to recognize the degree of change in the face, and can motivate the user to promote beauty.
  • the beauty promotion system 1 evaluates changes in muscles and fat due to aging that are likely to be improved by self-care, without depending only on the skeleton, so that the user's motivation for beauty can be increased.
  • the beauty promotion system 1 includes the display processing unit 33E that outputs information indicating the comparison result obtained by comparing the area of the demarcated region with the reference area. Therefore, by displaying the result of the quantitative evaluation on, for example, the terminal 40 carried by the user 50, the evaluation result can be easily confirmed.
  • since the area demarcation unit 33B demarcates a pair of left and right demarcated areas with reference to the median line O1 of the face, it is possible to promote beauty toward facial proportions that are well balanced between left and right.
  • the area demarcation unit 33B demarcates the demarcated area by the two fixed points Pf specified by the deep nose point P1 and the apex P2 of the temple, and the one movable point Pm specified by the vertex P3 on the upper cheek. Therefore, the proportions around the upper part of the cheek are quantitatively evaluated, and it is possible to check, for example, changes in the sagging of the upper cheek that tends to become a concern with aging (for example, the Golgo line formed between the roche line and the cheekbone).
  • the vertex recognition unit 33A evaluates the imaging data three-dimensionally and recognizes the deep nose point P1, the temple apex P2, and the cheek vertex P3. Therefore, each vertex can be easily recognized regardless of the shape of the face of the user 50.
  • since the area demarcation unit 33B demarcates two types of demarcated areas spaced apart in the vertical direction of the face, the proportions of the entire face can be quantitatively evaluated by evaluating the upper side and the lower side of the face respectively, enabling more effective beauty promotion.
  • the area demarcation unit 33B demarcates the demarcation area by the two fixed points Pf specified by the inferior nose point P4 and the inferior ear point P5, and the one movable point Pm specified by the apex P6 of the inferior cheek. Therefore, it is possible to quantitatively evaluate the proportions around the lower part of the cheek of the face, and for example, to confirm the change in the sagging of the lower part of the cheek, which tends to be a concern with aging.
  • since the vertex recognition unit 33A evaluates the imaging data three-dimensionally and recognizes the inferior nose point P4, the inferior ear point P5, and the inferior cheek vertex P6, each vertex can be easily recognized regardless of the shape of the face of the user 50.
  • since the area comparison unit 33D uses, as the reference area, the area of the demarcated region of the user 50 at a time point a certain period before the time when the image data was captured, changes in the proportions of the face over time can be quantitatively evaluated. Thereby, the beauty effect can be accurately grasped.
  • since the area comparison unit 33D uses, as the reference area, the area of the demarcated region in an ideal model of the face desired by the user 50, it is possible to quantitatively confirm how close the user is to the target. As a result, the motivation of the user 50 for beauty can be maintained and effective beauty promotion can be performed.
  • the beauty promotion system 1 includes the beauty promotion device 30 and the imaging device 10 for imaging the face. Therefore, the imaging data of the face of the user 50 can be easily acquired, and the imaging data can be evaluated by the beauty promotion device 30.
  • FIG. 9 is a block diagram showing a configuration example of the beauty promotion device 30 according to a modification.
  • FIG. 10 is a diagram showing an example of each vertex recognized by the vertex recognition unit 33F in Modification Example 1.
  • FIG. 10(a) is a front view of imaging data
  • FIG. 10(b) is a side view of the imaging data.
  • the number of vertices recognized by the vertex recognition unit 33F and the geometric shape defined based on them differ from those in the first embodiment, as shown in FIGS. 9 and 10.
  • the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the position of one fixed point (P1) and one movable point (P3) on each of the left and right sides of the face, centered on the midline, in the upper half of the face.
  • the geometric shape defining unit 33G defines a geometric shape including the respective vertices whose positions are recognized by the vertex recognizing unit 33F in the peripheral portion.
  • a straight line L1, including the respective vertices in its peripheral portion, is defined as the geometric shape on each of the left side and the right side of the face, centered on the midline, in the upper half of the face.
  • the geometric shape defining portion 33G exhibits the same function as the area defining portion 33B in the first embodiment.
  • the geometric shape defining portion 33G defines a pair of left and right geometric shapes based on the midline of the face. That is, the geometric shape defining portion 33G defines a pair of left and right straight lines L1.
  • the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the position of one fixed point (P4) and one movable point (P6) on each of the left half and the right half of the face, in the lower half of the face. Then, the geometric shape defining unit 33G defines two types of geometric shapes spaced apart in the vertical direction of the face. That is, the geometric shape defining unit 33G defines a straight line L2 as the geometric shape including the vertices in its peripheral portion, on each of the left side and the right side of the face, centered on the midline, in the lower half of the face. The geometric shape defining unit 33G thus defines a pair of left and right straight lines L2.
  • the feature amount calculation unit 33H calculates the feature amount indicating the length of the geometric shape.
  • the feature amount calculation unit 33H exhibits the same function as the area calculation unit 33C in the first embodiment. That is, the feature amount calculation unit 33H calculates the length of the straight line when the geometric shape is a straight line, and calculates the area of the figure when the geometric shape is a figure. In the illustrated example, since the geometric shapes are the straight lines L1 and L2, the respective lengths are calculated.
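The behavior described above — a length when the geometric shape is a straight line, an area when it is a figure — can be sketched as a single dispatch on the number of vertices. This is an illustrative implementation, not the patent's; the coordinates are hypothetical pixel values:

```python
import math

def feature_amount(vertices):
    """Length when the geometric shape is a straight line (2 vertices);
    area by the shoelace formula when it is a polygon (3+ vertices in order)."""
    if len(vertices) == 2:
        return math.dist(vertices[0], vertices[1])
    s = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Straight line L1 between a fixed point (P1) and a movable point (P3).
l1 = feature_amount([(120.0, 100.0), (150.0, 140.0)])  # length in px
```

The same function then covers the polygon variants of the later modifications by passing three or more ordered vertices.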
  • the feature amount comparison unit 33I compares the feature amount calculated by the feature amount calculation unit 33H with a reference value known as a value corresponding to the geometric shape. That is, the feature amount comparison unit 33I exhibits the same function as the area comparison unit 33D in the first embodiment.
  • the data storage unit 32 stores a reference value instead of the reference area stored in the first embodiment.
  • the reference value is a concept obtained by expanding the concept of the reference area in the first embodiment to a straight line, and indicates the length of the straight line or the area of the region.
  • the feature amount comparison unit 33I can use, as the known reference value, the feature amount of the user at a time point a fixed period before the imaging data was captured, or the feature amount in an ideal model of the face desired by the user.
  • the display processing unit 33E displays the result on the device-side display unit 34 and the terminal-side display unit 45.
  • FIG. 11 is a diagram showing a processing flow in the beauty promotion device 30 according to the modification.
  • the imaging device 10 performs an imaging step (S601) of imaging the face of the user 50.
  • the vertex recognition unit 33F performs a vertex recognition step (S602) of recognizing each vertex by using the imaging data transmitted from the imaging device 10.
  • the geometric shape defining unit 33G performs a geometric shape defining step (S603) of defining a geometric shape using each vertex data identified in the vertex recognizing step.
  • the geometric shape defining portion 33G defines the straight lines L1 and L2 in left and right pairs.
  • the feature amount calculation unit 33H performs a feature amount calculation step (S604) of calculating the length of the geometric shape defined by the geometric shape definition step.
  • the lengths of the straight lines L1 and L2, which are geometric shapes, are calculated using the coordinate data of each vertex.
  • the feature amount comparison unit 33I performs a feature amount comparison step (S605) of comparing the feature amount of the geometric shape calculated by the feature amount calculation step with a known reference value.
  • In the feature amount comparison step, the value of the feature amount is compared with a known reference value.
  • the lengths of the straight lines L1 and L2 obtained from the past measurement results are set as the reference value.
  • the display processing unit 33E performs a display processing step (S606) of outputting information indicating the comparison result.
  • the comparison result between the feature amount of the geometrical shape compared by the feature amount comparing unit 33I and the reference value is displayed on the device side display unit 34 and the terminal side display unit 45.
  • the comparison result may include information about the result of this time and information that suggests a measure (a facial massage or the like) that the user 50 will work on in the future.
  • the comparison result may not be displayed on the terminal side display unit 45. Then, by making such a comparison, it is possible to quantitatively evaluate the change in the proportion of the face due to aging deterioration and improvement measures, and to contribute to the promotion of beauty, as in the first embodiment described above.
  • FIG. 12 is a diagram showing an example of each apex recognized by the apex recognizing unit 33F in Modification Example 2.
  • FIG. 12(a) is a front view of imaging data
  • FIG. 12(b) is a side view of the imaging data.
  • the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the positions of four fixed points (P1, P2, P4, P5) and one movable point (P6). Then, the geometric shape defining unit 33G defines a geometric shape including, in its peripheral portion, the respective vertices whose positions are recognized by the vertex recognition unit 33F.
  • the area of a pentagon A1 connecting the vertices is defined as the geometric shape.
  • a pair of left and right pentagons A1 are defined from the entire face.
  • the feature amount calculation unit 33H calculates the area of the pentagon A1 as a geometric shape as the calculation of the feature amount. That is, the feature amount calculation unit 33H calculates the length as the feature amount when the geometric shape is a straight line and the area as the feature amount when the geometric shape is a figure. Then, the feature amount comparison unit 33I compares the feature amount calculated by the feature amount calculation unit 33H with a reference value known as a value corresponding to the geometric shape.
  • the configuration has one movable point, but the vertex recognition unit 33F may recognize two or more movable points when defining the pentagon A1.
  • the geometric shape defining portion 33G may define different pentagons in the upper half and the lower half of the face.
  • FIG. 13 is a diagram showing an example of each vertex recognized by the vertex recognition unit 33F in Modification 3, where FIG. 13(a) is a front view of the captured data and FIG. 13(b) is a side view of the captured data.
  • the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the positions of two fixed points (P2, P5) and one movable point (P6). Then, the geometric shape defining unit 33G defines the area of a circle C1 as the geometric shape including, in its peripheral portion, the respective vertices whose positions are recognized by the vertex recognition unit 33F. The geometric shape defining unit 33G defines a pair of left and right circles C1 from the entire face.
  • the feature amount calculation unit 33H calculates the area of the circle, and the feature amount comparison unit 33I compares the feature amount calculated by the feature amount calculation unit 33H with a reference value known as a value corresponding to the geometric shape.
  • the vertex recognition unit 33F may recognize two movable points when defining the circle C1.
  • the geometric shape defining portion 33G may define different circles for the upper half and the lower half of the face.
  • the geometric shape to be evaluated may be a straight line, a circle, or a polygon.
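For the circle C1 of Modification 3, one plausible reading is the circumscribed circle through the three recognized vertices; the patent does not specify how the circle is constructed, so the following is a sketch under that assumption, with hypothetical coordinates:

```python
import math

def circumcircle_area(a, b, c):
    """Area of the circle passing through three non-collinear vertices."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0.0:
        raise ValueError("the three vertices are collinear")
    # Circumcenter (ux, uy) from the standard determinant formulas.
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = math.dist((ux, uy), a)
    return math.pi * r * r
```

As the movable point P6 shifts with changes in muscle and fat, the circle through (P2, P5, P6) grows or shrinks, and its area can be compared against the reference value in the same way as the other geometric shapes.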
  • FIG. 14 is a diagram explaining the usage conditions of the beauty promotion system 1B according to the second embodiment of the present invention.
  • FIG. 15 is a block diagram showing a configuration example of the beauty promotion device 30B according to the second embodiment.
  • the overall configuration of the device is different from that of the first embodiment.
  • the differences from the first embodiment will be described, and description of common configurations and common effects will be omitted.
  • the beauty promotion system 1B is a smart mirror that has both a mirror function and a display function by displaying the captured content on a mirror-finished display surface.
  • the imaging device 10 having an imaging unit is built in the beauty promotion device 30B that constitutes the beauty promotion system 1B.
  • the imaging unit of the imaging device 10 captures, from the upper portion of the display surface of the device-side display unit 34, an image of the area in front of the display surface where the user is located.
  • the display surface of the device-side display unit 34 of the beauty promotion device 30B is mirror-finished.
  • the device-side display unit 34 can be used as a mirror.
  • the device-side display unit 34 has a function of displaying on the display surface the imaged data of the face of the user facing the display surface. Therefore, by looking at the mirror, the user can confirm the imaged data of the user's face captured by the imaging device 10, and the evaluation result of the change in proportions performed by the device processing unit 33 using the imaged data. As a result, it is possible to confirm changes in the proportions of the face at the timing when the user adjusts his or her appearance, ensuring convenience for the user.
  • one of the left and right halves of the display surface may be used as a mirror while the imaging data is displayed on the other half, or the evaluation result may be overlaid on the display surface while it is used as a mirror. Further, data processed into the ideal facial proportions may be displayed in an overlapping manner.
  • the beauty promotion system 1 is not limited to the above embodiment and may be realized by another method.
  • the control program of the above embodiment may be provided in a state of being stored in a computer-readable storage medium.
  • the storage medium can store the control program in a “non-transitory tangible medium”.
  • Storage media may include any suitable storage media such as an HDD or SSD, or any suitable combination of two or more thereof.
  • the storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile.
  • the storage medium is not limited to these examples, and may be any device or medium as long as it can store the control program.
  • the beauty promotion system 1 can realize each function shown in the embodiment by, for example, reading a control program stored in a storage medium and executing the read control program.
  • the control program may be an application program installed in the beauty promotion system 1 via an arbitrary transmission medium (communication network, broadcast wave, etc.).
  • the beauty promotion system 1 realizes the functions of the plurality of functional units shown in the respective embodiments by executing a control program downloaded via the Internet or the like, for example.
  • the control program may be implemented using a script language such as ActionScript or JavaScript (registered trademark), an object-oriented programming language such as Objective-C or Java (registered trademark), or a markup language such as HTML5.
  • At least a part of the processing in the beauty promotion system 1 may be realized by cloud computing configured by one or more computers. Further, each functional unit of the beauty promotion system 1 may be realized by one or a plurality of circuits that realize the functions shown in the above embodiments, and one circuit may realize the functions of a plurality of functional units.
  • the computer recognizes, from the imaged data of the user's face, the respective positions of at least one fixed point specified depending on the skeleton of the face and at least one movable point specified depending on the muscle and fat of the face.
  • the vertex recognition step recognizes two fixed points and one movable point
  • the geometric shape defining step may be an area demarcation step of demarcating a triangular demarcated region by straight lines connecting the respective vertices whose positions are recognized in the vertex recognition step.
  • the feature amount calculating step is an area calculating step of calculating an area of the defined region
  • the feature amount comparing step may be an area comparison step of comparing the area of the demarcated region calculated in the area calculating step with a reference area known as the area of the region corresponding to the demarcated region.
  • the beauty promotion program of the present invention causes a computer to realize: a vertex recognition function of recognizing, from image data obtained by imaging a user's face, the respective positions of at least one fixed point specified depending on the skeleton of the face and at least one movable point specified depending on the muscle and fat of the face; a geometric shape defining function of defining a geometric shape including, in its peripheral portion, the vertices whose positions are recognized by the vertex recognition function; a feature amount calculating function of calculating a feature amount indicating the length or area of the geometric shape; and a feature amount comparing function of comparing the feature amount calculated by the feature amount calculating function with a reference value known as a value corresponding to the geometric shape.
  • the vertex recognition function recognizes two fixed points and one movable point
  • the geometric shape defining function may be an area demarcation function of demarcating a triangular demarcated region by straight lines connecting the vertices whose positions are recognized by the vertex recognition function.
  • the feature amount calculation function is an area calculation function for calculating the area of the demarcated region
  • the feature amount comparison function may be an area comparison function of comparing the area of the demarcated region calculated by the area calculation function with a reference area known as the area of the region corresponding to the demarcated region.

Abstract

This beauty promotion device is provided with: a vertex recognition unit that recognizes, from imaging data in which the face of a user is captured, the respective positions of at least one fixed point which is identified depending on the facial skeletal structure and at least one movable point which is identified depending on the muscle and fat of the face; a geometric shape definition unit that defines a geometric shape having a periphery that includes the vertexes, the positions thereof having been recognized by the vertex recognition unit; a feature amount calculation unit that calculates a feature amount indicating the length or surface area of the geometric shape; and a feature amount comparison unit that compares the feature amount calculated by the feature amount calculation unit and an existing reference value serving as a value corresponding to the geometric shape.

Description

Beauty promotion device, beauty promotion system, beauty promotion method, and beauty promotion program
 The present invention relates to a beauty promotion device, a beauty promotion system, a beauty promotion method, and a beauty promotion program.
2. Description of the Related Art
 Conventionally, systems that use image processing technology to promote beauty for a user are known.
 For example, Patent Document 1 discloses a beauty promotion system that evaluates the condition of the user's skin or scalp from imaged data of the user.
Japanese Patent Laid-Open No. 2018-97899
 However, the conventional beauty promotion system evaluates only a local condition, such as the condition of the user's skin or scalp, and does not evaluate changes in facial proportions due to deterioration over time. Therefore, there is room for improvement in quantitatively evaluating the proportions of the face to contribute to promoting beauty.
 Therefore, an object of the present invention is to provide a beauty promotion device that can quantitatively evaluate changes in facial proportions due to aging deterioration and improvement measures, and thereby contribute to beauty promotion.
 In order to solve the above problems, a beauty promotion device according to the present invention includes: a vertex recognition unit that recognizes, from imaging data of a user's face, the respective positions of at least one fixed point specified depending on the skeleton of the face and at least one movable point specified depending on the muscle and fat of the face; a geometric shape defining unit that defines a geometric shape including, in its peripheral portion, the vertices whose positions are recognized by the vertex recognition unit; a feature amount calculation unit that calculates a feature amount indicating the length or area of the geometric shape; and a feature amount comparison unit that compares the feature amount calculated by the feature amount calculation unit with a reference value known as a value corresponding to the geometric shape.
 The beauty promotion device according to the present invention may also include: a vertex recognition unit that recognizes, from imaging data of the user's face, the respective positions of two fixed points specified depending on the skeleton of the face and one movable point specified depending on the muscle and fat of the face; an area demarcation unit that demarcates a triangular demarcated region by straight lines connecting the vertices whose positions are recognized by the vertex recognition unit; an area calculation unit that calculates the area of the demarcated region; and an area comparison unit that compares the area of the demarcated region calculated by the area calculation unit with a reference area known as the area of the region corresponding to the demarcated region.
 The device may further include a display processing unit that outputs information indicating the result of the comparison, performed by the area comparison unit, between the area of the demarcated region and the reference area.
 Also, the area demarcation unit may demarcate a pair of left and right demarcated regions with reference to the midline of the face.
 Also, the vertex recognition unit may recognize, as the two fixed points, the vertices specified by the deep nose point and the apex of the temple, and may recognize, as the one movable point, the vertex on the upper cheek.
 Further, the vertex recognition unit may evaluate the imaging data three-dimensionally, recognize the most recessed part of the nose root of the face as the deep nose point, recognize the most recessed part of the temple of the face as the apex of the temple, and recognize, as the vertex on the upper cheek, the most raised part of the upper cheek of the face near the vertical line outside the pupil.
 The area demarcation unit may demarcate two types of demarcated regions spaced apart in the vertical direction of the face.
 Further, the vertex recognition unit may recognize, as the two fixed points, the vertices specified by the subnasal point and the inferior ear point, and may recognize, as the one movable point, the vertex on the lower cheek.
 In addition, the vertex recognition unit may evaluate the imaging data three-dimensionally, recognize the most recessed part of the lower nose of the face as the subnasal point, recognize the most recessed part of the face located below the ear as the inferior ear point, and recognize, as the vertex on the lower cheek, the most raised part of the bulge beside the corner of the mouth, in the lower cheek of the face near the vertical line outside the pupil.
 Also, the area comparison unit may use, as the reference area, the area of the demarcated region of the user at a time point a certain period before the imaging data was captured.
 Also, the area comparison unit may use, as the reference area, the area of the demarcated region in an ideal model of the face desired by the user.
 In order to solve the above problem, a beauty promotion system according to the present invention includes any of the beauty promotion devices described above, and an imaging device including an imaging unit that images the user's face and a transmission unit that transmits the imaging data captured by the imaging unit to the beauty promotion device; the beauty promotion device further includes a reception unit that receives the imaging data.
 The beauty promotion system may be a smart mirror including a device-side display unit in which the imaging unit is built and whose display surface is mirror-finished, the device-side display unit being capable of displaying, on the display surface, imaging data obtained by imaging the face of a user facing the display surface.
 In order to solve the above problems, a beauty promotion method according to the present invention includes: a vertex recognition step of recognizing, from imaging data of a user's face, the respective positions of two fixed points specified depending on the skeleton of the face and one movable point specified depending on the muscle and fat of the face; an area demarcation step of demarcating a triangular demarcated region by straight lines connecting the vertices whose positions are recognized in the vertex recognition step; an area calculation step of calculating the area of the demarcated region; and an area comparison step of comparing the area of the demarcated region calculated in the area calculation step with a reference area known as the area of the region corresponding to the demarcated region.
 In order to solve the above problems, a beauty promotion program according to the present invention causes a computer to realize: a vertex recognition function of recognizing, from imaging data of a user's face, the respective positions of two fixed points specified depending on the skeleton of the face and one movable point specified depending on the muscle and fat of the face; an area demarcation function of demarcating a triangular demarcated region by straight lines connecting the vertices whose positions are recognized by the vertex recognition function; an area calculation function of calculating the area of the demarcated region; and an area comparison function of comparing the area of the demarcated region calculated by the area calculation function with a reference area known as the area of the region corresponding to the demarcated region.
 本発明の美容促進装置では、頂点認識部が、ユーザの顔を撮像した撮像データから、顔の骨格に依存して特定される2つの固定点、および顔の筋肉および脂肪に依存して特定される1つの可動点それぞれの位置を認識する。
 次に、領域画定部が、特定された各頂点を結んだ直線により、三角形状をなす画定領域を画定し、面積算出部が画定領域の面積を算出する。そして、面積比較部が、画定領域の面積と基準面積とを比較する。これにより、経年劣化および改善対策による顔のプロポーションの変化を定量的に評価して、美容促進に資することができる。
In the beauty promotion device of the present invention, the vertex recognition unit recognizes, from imaging data obtained by imaging the user's face, the positions of two fixed points specified depending on the skeleton of the face and one movable point specified depending on the muscles and fat of the face.
Next, the region demarcation unit demarcates a triangular demarcated region with straight lines connecting the recognized vertices, and the area calculation unit calculates the area of the demarcated region. The area comparison unit then compares the area of the demarcated region with the reference area. This makes it possible to quantitatively evaluate changes in facial proportions caused by aging and by countermeasures, thereby contributing to beauty promotion.
本発明の第1実施形態に係る美容促進システムの構成例を示す模式図である。FIG. 1 is a schematic diagram showing a configuration example of a beauty promotion system according to a first embodiment of the present invention.
図1に示す美容促進装置の構成例を示すブロック図である。FIG. 2 is a block diagram showing a configuration example of the beauty promotion device shown in FIG. 1.
図2に示す頂点認識部によって認識する各頂点の一例を示す図であって、撮像データの(a)正面図、(b)側面図である。FIG. 3 shows an example of the vertices recognized by the vertex recognition unit shown in FIG. 2, as (a) a front view and (b) a side view of the imaging data.
図1に示す携帯端末の構成例を示すブロック図である。FIG. 4 is a block diagram showing a configuration example of the mobile terminal shown in FIG. 1.
美容促進装置における処理フローを示す図である。FIG. 5 shows the processing flow in the beauty promotion device.
頂点認識部が、頬上の頂点を認識する処理における模式図である。FIG. 6 is a schematic diagram of the process in which the vertex recognition unit recognizes the vertex on the cheek.
表示処理部による表示内容の一例を示す図である。FIG. 7 shows an example of content displayed by the display processing unit.
表示処理部による表示内容の他の例を示す図である。FIG. 8 shows another example of content displayed by the display processing unit.
変形例に係る美容促進装置の構成例を示すブロック図である。FIG. 9 is a block diagram showing a configuration example of a beauty promotion device according to a modification.
変形例1において、頂点認識部によって認識する各頂点の一例を示す図であって、撮像データの(a)正面図、(b)側面図である。FIG. 10 shows an example of the vertices recognized by the vertex recognition unit in Modification 1, as (a) a front view and (b) a side view of the imaging data.
変形例に係る美容促進装置における処理フローを示す図である。FIG. 11 shows the processing flow in the beauty promotion device according to the modification.
変形例2において、頂点認識部によって認識する各頂点の一例を示す図であって、撮像データの(a)正面図、(b)側面図である。FIG. 12 shows an example of the vertices recognized by the vertex recognition unit in Modification 2, as (a) a front view and (b) a side view of the imaging data.
変形例3において、頂点認識部によって認識する各頂点の一例を示す図であって、撮像データの(a)正面図、(b)側面図である。FIG. 13 shows an example of the vertices recognized by the vertex recognition unit in Modification 3, as (a) a front view and (b) a side view of the imaging data.
本発明の第2実施形態に係る美容促進システムの使用状態を説明する図である。FIG. 14 illustrates a use state of a beauty promotion system according to a second embodiment of the present invention.
第2実施形態に係る美容促進装置の構成例を示すブロック図である。FIG. 15 is a block diagram showing a configuration example of the beauty promotion device according to the second embodiment.
(第1実施形態)
 本発明の第1実施形態に係る美容促進システム1について、図1から図8を参照しながら説明する。
 図1は、本発明の一実施形態に係る美容促進システム1の構成例を示す模式図である。美容促進システム1は、ユーザ50の美容促進に資するために、ユーザ50の顔を撮像した撮像データに対してデータ処理を行い、ユーザ50の顔の状態を定量的に評価するエイジングマネジメント向け(経年劣化の管理と対策)の美容促進システムである。
(First embodiment)
A beauty promotion system 1 according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 8.
FIG. 1 is a schematic diagram showing a configuration example of the beauty promotion system 1 according to one embodiment of the present invention. To contribute to the beauty promotion of a user 50, the beauty promotion system 1 performs data processing on imaging data obtained by imaging the face of the user 50 and quantitatively evaluates the state of the face of the user 50; it is a beauty promotion system for aging management (management of and countermeasures against age-related deterioration).
 図1に示すように、美容促進システム1は、ネットワーク20を介して互いに接続された撮像装置10、および美容促進装置30を備えている。また、図示の例では、ネットワーク20にユーザ50の携帯端末40が接続されている。ユーザ50の携帯端末40は、ネットワーク20に接続されていなくてもよい。 As shown in FIG. 1, the beauty promotion system 1 includes an imaging device 10 and a beauty promotion device 30 that are connected to each other via a network 20. In the illustrated example, the mobile terminal 40 of the user 50 is connected to the network 20. The mobile terminal 40 of the user 50 does not have to be connected to the network 20.
 ここで、本発明の美容促進システム1は、エステサロン等の美容関連サービスを提供する店舗において、ユーザ50の顔を撮像して定量的に評価した結果を示し、例えばユーザ50が自身の顔の状態を維持、改善するために今後取り組むべき施策を提案するために用いられる。
 なお、使用に際しては、図1に示すように、店舗のオペレータ60の操作のもとで行われてもよいし、ユーザ50自身が操作を行ってもよい。
Here, the beauty promotion system 1 of the present invention is used in a store that provides beauty-related services, such as an esthetic salon, to image the face of the user 50 and present the results of a quantitative evaluation, for example in order to propose measures the user 50 should take in the future to maintain or improve the condition of his or her face.
In use, as shown in FIG. 1, the system may be operated by an operator 60 of the store, or the user 50 may operate it himself or herself.
 撮像装置10は、ユーザ50の顔を撮像して撮像データを取得できる装置であれば、特に限定されない。撮像装置10は、例えばCMOSやCCD等の撮像素子を有する撮像部を備えている。また、撮像装置10は、撮像部が撮像した撮像データを美容促進装置に送信する送信部を備えている。 The imaging device 10 is not particularly limited as long as it can image the face of the user 50 and acquire imaging data. The imaging device 10 includes an imaging unit having an image sensor such as a CMOS or CCD sensor. The imaging device 10 also includes a transmission unit that transmits the imaging data captured by the imaging unit to the beauty promotion device.
 撮像装置10が取得する撮像データは、2Dデータであっても3Dデータであってもよい。本実施形態では撮像部が3Dデータとしての撮像データを取得する構成について説明する。
 すなわち、撮像装置10は、図1では簡略化して図示しているが、例えば撮像部が間隔をあけて複数配置された3Dカメラであってもよいし、1つの撮像部と距離センサとを備えている構成であってもよい。
The imaging data acquired by the imaging device 10 may be 2D data or 3D data. In the present embodiment, a configuration in which the image capturing unit acquires image capturing data as 3D data will be described.
That is, although the imaging device 10 is illustrated in simplified form in FIG. 1, it may be, for example, a 3D camera in which a plurality of imaging units are arranged at intervals, or a configuration including one imaging unit and a distance sensor.
 ネットワーク20は、撮像装置10、美容促進装置30、および携帯端末40の間を相互に接続させるためのネットワークであり、例えば、無線ネットワークや有線ネットワークである。
 具体的には、ネットワーク20は、ワイヤレスLAN(wireless LAN:WLAN)や広域ネットワーク(wide area network:WAN)、ISDNs(integrated service digital networks)、無線LANs、LTE(long term evolution)、LTE-Advanced、第4世代(4G)、第5世代(5G)、CDMA(code division multiple access)、WCDMA(登録商標)、イーサネット(登録商標)などである。
The network 20 is a network for mutually connecting the imaging device 10, the beauty promotion device 30, and the mobile terminal 40, and is, for example, a wireless network or a wired network.
Specifically, the network 20 may be a wireless LAN (WLAN), a wide area network (WAN), ISDNs (integrated service digital networks), wireless LANs, LTE (long term evolution), LTE-Advanced, fourth generation (4G), fifth generation (5G), CDMA (code division multiple access), WCDMA (registered trademark), Ethernet (registered trademark), or the like.
 また、ネットワーク20は、これらの例に限られず、例えば、公衆交換電話網（Public Switched Telephone Network：PSTN）やブルートゥース（Bluetooth（登録商標））、ブルートゥースローエナジー（Bluetooth Low Energy）、光回線、ADSL（Asymmetric Digital Subscriber Line）回線、衛星通信網などであってもよく、どのようなネットワークであってもよい。 Further, the network 20 is not limited to these examples, and may be, for example, a public switched telephone network (PSTN), Bluetooth (registered trademark), Bluetooth Low Energy, an optical line, an ADSL (Asymmetric Digital Subscriber Line) line, a satellite communication network, or any other network.
 また、ネットワーク20は、例えば、NB-IoT(Narrow Band IoT)や、eMTC(enhanced Machine Type Communication)であってもよい。なお、NB-IoTやeMTCは、IoT向けの無線通信方式であり、低コスト、低消費電力で長距離通信が可能なネットワークである。 Also, the network 20 may be, for example, NB-IoT (Narrow Band IoT) or eMTC (enhanced Machine Type Communication). Note that NB-IoT and eMTC are wireless communication systems for IoT, and are networks capable of long-distance communication with low cost and low power consumption.
 また、ネットワーク20は、これらの組み合わせであってもよい。また、ネットワーク20は、これらの例を組み合わせた複数の異なるネットワークを含むものであってもよい。例えば、ネットワーク20は、LTEによる無線ネットワークと、閉域網であるイントラネットなどの有線ネットワークとを含むものであってもよい。 Also, the network 20 may be a combination of these. Further, the network 20 may include a plurality of different networks combining these examples. For example, the network 20 may include an LTE wireless network and a wired network such as an intranet that is a closed network.
 次に、図2を用いて、美容促進装置30の構成について説明する。図2は、美容促進装置30の構成例を示すブロック図である。
 美容促進装置30は、装置側通信部31、データ記憶部32、装置処理部33、および装置側表示部34を備えている。美容促進装置30は、ユーザ50の顔を撮像した撮像データからユーザ50の顔の状態を解析する情報処理装置であって、一例として、本実施形態では、パソコンが用いられる。
Next, the configuration of the beauty promotion device 30 will be described with reference to FIG. FIG. 2 is a block diagram showing a configuration example of the beauty promotion device 30.
The beauty promotion device 30 includes a device-side communication unit 31, a data storage unit 32, a device processing unit 33, and a device-side display unit 34. The beauty promotion device 30 is an information processing device that analyzes the state of the face of the user 50 from the imaged data of the face of the user 50. As an example, a personal computer is used in the present embodiment.
 装置側通信部31は、ネットワーク20を介して、各種のデータを送受信する通信インターフェースである。各種のデータとして、撮像データ、比較結果を示すデータが含まれる。すなわち装置側通信部31は、撮像装置10の送信部から送信された撮像データを受信する受信部として機能する。 The device-side communication unit 31 is a communication interface that transmits and receives various data via the network 20. Various types of data include image pickup data and data indicating a comparison result. That is, the device-side communication unit 31 functions as a reception unit that receives the imaging data transmitted from the transmission unit of the imaging device 10.
 データ記憶部32は、装置処理部33が動作するうえで必要とする各種の制御プログラムや、装置側通信部31が外部から受信した各種のデータを記憶する機能を有する。また、データ記憶部32は、少なくとも一つ以上の基準面積データを記憶している。
 データ記憶部32は、例えば、HDD、SSD、フラッシュメモリなど各種の記憶媒体により実現される。
The data storage unit 32 has a function of storing various control programs necessary for the device processing unit 33 to operate and various data received by the device-side communication unit 31 from the outside. The data storage unit 32 also stores at least one or more reference area data.
The data storage unit 32 is realized by various storage media such as HDD, SSD, and flash memory.
 データ記憶部32に記憶された制御プログラムを実行することで、装置処理部33が、美容促進システム1として実現すべき各機能を実現する。ここでいう各機能とは、頂点認識機能、領域画定機能、面積算出機能、面積比較機能、および結果表示機能を含んでいる。
 装置側表示部34は、美容促進装置30の操作の内容や処理の結果を表示するモニタ装置である。
By executing the control program stored in the data storage unit 32, the device processing unit 33 realizes each function to be realized as the beauty promotion system 1. The functions referred to here include a vertex recognition function, a region demarcation function, an area calculation function, an area comparison function, and a result display function.
The device-side display unit 34 is a monitor device that displays the content of the operation of the beauty promotion device 30 and the result of the processing.
 装置処理部33は、美容促進装置30の各部を制御するコンピュータであり、例えば、中央処理装置(CPU)やマイクロプロセッサ、ASIC、FPGAなどであってもよい。
 なお、装置処理部33は、これらの例に限られず、美容促進装置30の各部を制御するコンピュータであれば、どのようなものであってもよい。
The device processing unit 33 is a computer that controls each unit of the beauty promotion device 30, and may be, for example, a central processing unit (CPU), a microprocessor, an ASIC, an FPGA, or the like.
The device processing unit 33 is not limited to these examples, and may be any device as long as it is a computer that controls each unit of the beauty promotion device 30.
 そして装置処理部33は、頂点認識部33A、領域画定部33B、面積算出部33C、面積比較部33D、および表示処理部33Eを備えている。
 頂点認識部33Aは、ユーザ50の顔を撮像した撮像データから、2つの固定点Pfおよび1つの可動点Pmそれぞれの位置を認識する。
The device processing unit 33 includes a vertex recognition unit 33A, a region demarcation unit 33B, an area calculation unit 33C, an area comparison unit 33D, and a display processing unit 33E.
The vertex recognition unit 33A recognizes the positions of the two fixed points Pf and the one movable point Pm from the imaged data of the face of the user 50.
 ここで固定点Pfとは、顔の骨格に依存して特定される頂点である。固定点Pfは、顔の骨格に依存して特定されるため、時間の経過による位置の変化はわずかである。
 なお、ここでいう固定という意味は、位置がまったく変化しないという意味ではなく、後述する可動点Pmと比較して、変化の量が極めて少ないという意味である。
Here, the fixed point Pf is a vertex specified depending on the skeleton of the face. Since the fixed point Pf is specified depending on the skeleton of the face, the change in position over time is slight.
Note that the meaning of "fixed" here does not mean that the position does not change at all, but means that the amount of change is extremely small as compared with the movable point Pm described later.
 一方、可動点Pmとは、顔の筋肉および脂肪に依存して特定される頂点であり、例えば加齢とともに顔の筋肉が弱くなったり、顔に脂肪がついたりすることで、下側に向けて位置が変化する。
 また可動点Pmは、顔の筋肉に刺激を与えることで、顔の筋肉が強くなったり、顔の脂肪量が少なくなったりすることで、上側に向けて位置が変化する。このような可動点Pmの位置の変化により顔のプロポーションが変化して、顔が相手に与える印象に大きく左右する。
On the other hand, the movable point Pm is a vertex specified depending on the muscles and fat of the face; for example, its position shifts downward as the facial muscles weaken with age or as fat accumulates on the face.
Further, the position of the movable point Pm shifts upward when stimulating the facial muscles strengthens them or reduces the amount of facial fat. Such changes in the position of the movable point Pm alter the proportions of the face, which greatly affects the impression the face gives to others.
 ここで、本実施形態における頂点認識部33Aが認識する各頂点について、図3を参照して説明する。
 図3は、頂点認識部33Aによって認識する各頂点を示す図であって、撮像データの(a)正面図、(b)側面図である。なお、この内容はあくまで一例であり、頂点認識部33Aが認識する各頂点は、任意に変更することができる。すなわち、ユーザ50の骨格の構造や、筋肉の付き方等を考慮して、認識しやすい顔の頂点を、評価に用いることができる。
Here, each vertex recognized by the vertex recognition unit 33A in the present embodiment will be described with reference to FIG.
FIG. 3 shows the vertices recognized by the vertex recognition unit 33A, as (a) a front view and (b) a side view of the imaging data. Note that this content is merely an example, and the vertices recognized by the vertex recognition unit 33A can be changed arbitrarily. That is, vertices of the face that are easy to recognize can be used for the evaluation, taking into account the structure of the skeleton of the user 50, how the muscles are attached, and so on.
 図3に示すように、頂点認識部33Aは、1つの画定領域に対して、2つの固定点Pfと1つの可動点Pmを認識する。2つの固定点Pfとして、深鼻点P1とこめかみの頂点P2とにより特定される各頂点を認識し、1つの可動点Pmとして、頬上の頂点P3を認識する。なお、深鼻点P1は左右一対の画定領域で共有している。各頂点の具体的な特定手法については後述する。
 本実施形態では、深鼻点P1およびこめかみの頂点P2それぞれの上下方向の位置は、互いに同等となっている。頬上の頂点P3は、深鼻点P1およびこめかみの頂点P2よりも下側に位置している。
As shown in FIG. 3, the vertex recognition unit 33A recognizes two fixed points Pf and one movable point Pm for each demarcated region. As the two fixed points Pf, it recognizes the vertices specified by the deep nose point P1 and the apex P2 of the temple, and as the one movable point Pm, it recognizes the apex P3 on the cheek. The deep nose point P1 is shared by the pair of left and right demarcated regions. The specific method of identifying each vertex will be described later.
In the present embodiment, the vertical positions of the deep nose point P1 and the apex P2 of the temple are equal to each other. The apex P3 on the cheek is located below the deep nose point P1 and the apex P2 of the temple.
 また、頂点認識部33Aは、2つの固定点Pfとして、鼻下点P4と耳下点P5とにより特定される各頂点を認識し、1つの可動点Pmとして、頬下の頂点P6を認識する。なお、鼻下点P4は左右一対の画定領域で共有している。各頂点の具体的な判別手法については後述する。
 本実施形態では、鼻下点P4および耳下点P5それぞれの上下方向の位置は、互いに同等となっている。頬下の頂点P6は、鼻下点P4および耳下点P5よりも下側に位置している。
In addition, the vertex recognition unit 33A recognizes, as the two fixed points Pf, the vertices specified by the inferior nose point P4 and the inferior ear point P5, and, as the one movable point Pm, the apex P6 under the cheek. The inferior nose point P4 is shared by the pair of left and right demarcated regions. The specific method of identifying each vertex will be described later.
In the present embodiment, the vertical positions of the inferior nose point P4 and the inferior ear point P5 are equal to each other. The apex P6 under the cheek is located below the inferior nose point P4 and the inferior ear point P5.
 頂点認識部33Aにおける各頂点の認識手段としては、撮像データに対して設けられた空間座標に対する絶対座標を特定する方法であってもよいし、画定領域を画定する3つの各頂点のうちのいずれかを基準とした相対座標を特定する方法であってもよい。
 本実施形態では、撮像データが3Dデータであるため、座標値も3次元的に表現されることとなる。
As the means by which the vertex recognition unit 33A recognizes each vertex, a method of specifying absolute coordinates in the spatial coordinate system provided for the imaging data may be used, or a method of specifying relative coordinates based on any one of the three vertices defining the demarcated region may be used.
In the present embodiment, since the image pickup data is 3D data, the coordinate values are also three-dimensionally expressed.
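The two coordinate conventions mentioned above can be illustrated with a minimal sketch (all names and values here are hypothetical, not taken from the specification). A practical property of the relative convention is that the measurement is unchanged when the whole face shifts between imaging sessions:

```python
# Hypothetical sketch: vertices as (x, y, z) tuples in an absolute spatial
# coordinate system attached to the imaging data.

def to_relative(vertices, origin_index=0):
    """Re-express absolute 3D vertex coordinates relative to one chosen vertex."""
    ox, oy, oz = vertices[origin_index]
    return [(x - ox, y - oy, z - oz) for x, y, z in vertices]

# Three vertices of one demarcated region: two fixed points Pf and one
# movable point Pm (illustrative values).
triangle = [(10.0, 20.0, 5.0), (62.0, 21.5, -3.0), (37.0, -15.0, 17.0)]

# The same face captured with the head shifted by an arbitrary offset.
shifted = [(x + 4.0, y - 7.0, z + 2.5) for x, y, z in triangle]

# Relative coordinates are invariant to the shift, so measurements remain
# comparable across imaging sessions.
assert to_relative(triangle) == to_relative(shifted)
```
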
 領域画定部33Bは、頂点認識部33Aが位置を認識した各頂点同士を結んだ直線により、三角形状をなす画定領域を画定する。また、領域画定部33Bは、顔の正中線O1を基準にして、左右一対の画定領域を画定する。
 なお、領域画定部33Bが画定する画定領域は、2次元的な領域でもよいし、3次元的な領域であってもよい。本実施形態では画定領域は、3次元的な領域となっている。
The area demarcation unit 33B demarcates a demarcation area having a triangular shape by a straight line connecting the vertices whose positions are recognized by the vertex recognition unit 33A. Further, the area demarcation unit 33B demarcates a pair of left and right demarcation areas with the midline O1 of the face as a reference.
The demarcated area defined by the area demarcation unit 33B may be a two-dimensional area or a three-dimensional area. In this embodiment, the defined area is a three-dimensional area.
 本実施形態では、領域画定部33Bは、顔の上下方向に間隔をあけて、2種類の画定領域を画定する。ここで、上側に位置する画定領域を上側画定領域A1とし、下側に位置する画定領域を下側画定領域A2とする。
 すなわち、領域画定部33Bは、上側画定領域A1および下側画定領域A2それぞれを、左右一対画定することとなる。
In the present embodiment, the area demarcation unit 33B demarcates two types of demarcated areas at intervals in the vertical direction of the face. Here, the demarcation region located on the upper side is referred to as the upper demarcation region A1, and the demarcation region located on the lower side is referred to as the lower demarcation region A2.
That is, the area demarcation unit 33B demarcates a left-right pair of each of the upper demarcated region A1 and the lower demarcated region A2.
 なお、上側画定領域A1および下側画定領域A2が上下方向に間隔をあけるとは、顔全体を、上下方向の全域にわたって、上側画定領域A1および下側画定領域A2により評価の精度を上げるという意図である。このため、上側画定領域A1および下側画定領域A2の一部同士が、互いに重なっていても問題はない。 Note that spacing the upper demarcated region A1 and the lower demarcated region A2 apart in the vertical direction is intended to raise the accuracy of the evaluation over the whole face, across its entire vertical extent, using the upper demarcated region A1 and the lower demarcated region A2. Therefore, there is no problem even if parts of the upper demarcated region A1 and the lower demarcated region A2 overlap each other.
 面積算出部33Cは、画定領域の面積を算出する。画定領域の面積の算出にあたっては、領域画定部33Bが特定した各頂点の座標データを用いて、画定領域内の面積を算出する。
 面積比較部33Dは、面積算出部33Cが算出した画定領域の面積と、画定領域と対応する領域の面積として既知の基準面積と、を比較する。
The area calculator 33C calculates the area of the demarcated region. When calculating the area of the demarcated region, the area within the demarcated region is calculated using the coordinate data of each vertex specified by the region demarcating unit 33B.
The area comparison unit 33D compares the area of the defined area calculated by the area calculation unit 33C with a reference area known as the area of the area corresponding to the defined area.
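As one way the area calculation step could be carried out for a three-dimensional demarcated region, the triangle's area follows directly from the three vertex coordinates via a cross product. The sketch below is illustrative only; the function name and coordinate values are assumptions, not the specification's implementation:

```python
import math

def triangle_area_3d(p1, p2, p3):
    """Area of the triangle spanned by three 3D points: |(p2 - p1) x (p3 - p1)| / 2."""
    ux, uy, uz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    vx, vy, vz = p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

# Illustrative demarcated region: two fixed points Pf and one movable
# point Pm, coordinates in millimetres.
p_fixed_1 = (0.0, 0.0, 0.0)
p_fixed_2 = (60.0, 0.0, 0.0)
p_movable = (30.0, -40.0, 0.0)

area = triangle_area_3d(p_fixed_1, p_fixed_2, p_movable)
print(area)  # 1200.0 for this planar example
```

Because the vertices are taken directly in 3D, the formula also handles the case where the three points do not lie in a plane parallel to the image.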
 面積比較部33Dは、例えば基準面積として、撮像データの撮像時から一定期間前、すなわち過去に撮像したユーザ50の撮像データから画定した画定領域の面積を用いることができる。
 また、面積比較部33Dは、基準面積として、ユーザ50が望む顔としての理想モデルにおける画定領域の面積を用いることができる。このように基準面積としては、現時点での画定領域の面積と比較できるものであれば、任意に設定することができる。
The area comparison unit 33D can use, for example, as the reference area, the area of the demarcated region defined from the imaged data of the user 50 imaged a certain period before the imaged data is captured, that is, in the past.
Further, the area comparison unit 33D can use the area of the demarcated region in the ideal model as the face desired by the user 50 as the reference area. As described above, the reference area can be arbitrarily set as long as it can be compared with the area of the demarcated region at the present time.
 ここで、ユーザ50が望む顔としての理想モデルの作成方法の一例について説明する。
理想モデルは、過去の撮像データを用いて作成する。過去の撮像データに対して、理想となる画定領域を目視で指定した元データを100個ほど準備する。この元データを用い、ディープラーニング(深層学習)処理を行うことで、理想モデルを作成することができる。
Here, an example of a method of creating an ideal model as a face desired by the user 50 will be described.
The ideal model is created using past imaging data. About 100 pieces of original data in which an ideal demarcated area is visually specified are prepared with respect to past imaging data. An ideal model can be created by performing a deep learning process using this original data.
 次に、各面積を比較する際の指針について、前回測定した際の面積を基準面積とした場合を例に挙げて説明する。本実施形態の上側画定領域A1では、可動点Pmである頬上の頂点P3が、固定点Pfである深鼻点P1およびこめかみの頂点P2よりも下側に位置している。
 また、下側画定領域A2においても、可動点Pmである頬下の頂点P6が、鼻下点P4および耳下点P5よりも下側に位置している。
Next, a guideline for comparing the areas will be described with reference to the case where the previously measured area is used as a reference area. In the upper demarcation area A1 of the present embodiment, the apex P3 on the cheek, which is the movable point Pm, is located below the deep nose point P1 and the apex P2 of the temple, which are the fixed points Pf.
Also in the lower demarcation area A2, the lower cheek apex P6, which is the movable point Pm, is located below the lower nose P4 and the lower ear P5.
 このため、可動点Pmである頬上の頂点P3、および頬下の頂点P6が下側に移動すると、上側画定領域A1および下側画定領域A2はそれぞれ、面積が大きくなることとなる。
 一方、可動点Pmである頬上の頂点P3、および頬下の頂点P6が上側に移動すると、上側画定領域A1および下側画定領域A2はそれぞれ、面積が小さくなることとなる。
Therefore, when the apex P3 on the cheek and the apex P6 under the cheek, which are the movable points Pm, move downward, the area of the upper demarcation area A1 and the area of the lower demarcation area A2 increase.
On the other hand, when the apex P3 on the cheek and the apex P6 under the cheek, which are the movable points Pm, move to the upper side, the areas of the upper demarcation area A1 and the lower demarcation area A2 become smaller.
 すなわち本実施形態のように、可動点Pmの位置を固定点Pfの位置よりも下側に配置している構成では、画定領域の面積が、前回測定した際の面積である基準面積よりも小さくなっている場合には、可動点Pmが上側に移動したこととなる。
 すなわち、顔の筋肉が強くなったか、又は顔の脂肪が少なくなったことで、顔のプロポーションが改善したことを意味する。
That is, in a configuration in which, as in the present embodiment, the movable point Pm is positioned below the fixed points Pf, if the area of the demarcated region is smaller than the reference area, that is, the area measured last time, the movable point Pm has moved upward.
That is, it means that the proportions of the face have improved because the facial muscles have strengthened or the facial fat has decreased.
 一方、画定領域の面積が、前回測定した際の面積である基準面積よりも大きくなっている場合には、可動点Pmが下側に移動したこととなる。
 すなわち、顔の筋肉が弱くなったか、又は顔の脂肪が多くなったことで、顔のプロポーションが悪化したことを意味する。
 このようにユーザ50は、画定領域の変化量を確認することで、定量的に顔のプロポーションが改善に向かっているのか悪化しているのかを把握することができる。
On the other hand, when the area of the demarcated region is larger than the reference area which is the area measured last time, the movable point Pm has moved to the lower side.
That is, it means that the proportion of the face is deteriorated because the muscles of the face are weakened or the fat of the face is increased.
In this way, the user 50 can quantitatively grasp whether the proportion of the face is improving or deteriorating by confirming the amount of change in the demarcated area.
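The interpretation described above can be condensed into a simple comparison rule. This is an illustrative sketch of the logic only; the function name, the labels, and the idea of passing the point arrangement as a flag are assumptions, not the specification's implementation:

```python
def compare_area(measured_area, reference_area, movable_below_fixed=True):
    """Classify a change in facial proportions from the demarcated-region area.

    With the movable point Pm below the fixed points Pf, an area smaller than
    the reference means Pm has moved upward (improvement), and a larger area
    means Pm has sagged downward (deterioration). The interpretation flips
    when Pm lies above the fixed points.
    """
    if measured_area == reference_area:
        return "unchanged"
    shrunk = measured_area < reference_area
    improved = shrunk if movable_below_fixed else not shrunk
    return "improved" if improved else "deteriorated"

# Upper demarcated region A1: previous measurement 1250 mm^2, current 1180 mm^2.
print(compare_area(1180.0, 1250.0))  # improved
```
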
 なお、本実施形態では、上側画定領域A1および下側画定領域A2のそれぞれについて、可動点Pmの位置を固定点Pfの位置よりも下側に配置している構成について説明したが、このような態様に限られない。可動点Pmの位置は、固定点Pfの位置よりも上側に位置してもよい。 Note that, in the present embodiment, a configuration has been described in which, for each of the upper demarcated region A1 and the lower demarcated region A2, the movable point Pm is positioned below the fixed points Pf; however, the present invention is not limited to such a mode. The movable point Pm may be positioned above the fixed points Pf.
 この場合には、画定領域の面積と基準面積との比較結果が、前述した説明と反対となる。すなわち、画定領域の面積が基準面積よりも大きくなった場合に、顔のプロポーションが改善に向かっていることとなり、画定領域の面積が基準面積よりも小さくなった場合に、顔のプロポーションが悪化に向かっていることとなる。 In this case, the result of comparing the area of the demarcated region with the reference area is interpreted oppositely to the description above. That is, when the area of the demarcated region becomes larger than the reference area, the proportions of the face are improving, and when it becomes smaller than the reference area, the proportions of the face are deteriorating.
 また、ユーザ50が望む顔としての理想モデルにおける画定領域の面積を、基準面積とする場合には、基準面積にどの程度近づいたかを確認することで、顔のプロポーションが改善しているかどうかを把握することができる。 In addition, when the area of the demarcated region in an ideal model of the face desired by the user 50 is used as the reference area, whether the proportions of the face are improving can be grasped by checking how closely the measured area has approached the reference area.
 表示処理部33Eは、面積比較部33Dにより、画定領域の面積と、基準面積と、を比較した比較結果を装置側表示部34、および携帯端末40の後述する端末側表示部45に表示する。表示処理部33Eが表示する表示内容の具体例については後述する。 The display processing unit 33E causes the area comparison unit 33D to display the comparison result obtained by comparing the area of the demarcated region with the reference area on the device-side display unit 34 and the terminal-side display unit 45 of the mobile terminal 40, which will be described later. A specific example of the display content displayed by the display processing unit 33E will be described later.
 次に、図4を用いて、携帯端末40の構成について説明する。図4は、携帯端末40の構成例を示すブロック図である。
 携帯端末40は、端末側通信部41、端末記憶部42、端末処理部43、カメラ44、および端末側表示部45を備えている。携帯端末40は、いわゆるスマートフォンやタブレット等のユーザが携帯して使用する端末装置である。
Next, the configuration of the mobile terminal 40 will be described with reference to FIG. FIG. 4 is a block diagram showing a configuration example of the mobile terminal 40.
The mobile terminal 40 includes a terminal side communication unit 41, a terminal storage unit 42, a terminal processing unit 43, a camera 44, and a terminal side display unit 45. The mobile terminal 40 is a terminal device that a user carries and uses, such as a so-called smartphone or tablet.
 端末側通信部41は、ネットワーク20を介して、各種のデータを送受信する通信インターフェースである。各種のデータとして、撮像データ、比較結果を示すデータが含まれる。すなわち、端末側通信部41は、各種の情報を美容促進装置30から受信する。 The terminal-side communication unit 41 is a communication interface that transmits and receives various data via the network 20. Various types of data include image pickup data and data indicating a comparison result. That is, the terminal-side communication unit 41 receives various types of information from the beauty promotion device 30.
 端末記憶部42は、端末処理部43が動作するうえで必要とする各種の制御プログラムや各種データを記憶する機能を有する。端末記憶部42は、例えば、HDD、SSD、フラッシュメモリなど各種の記憶媒体により実現される。
 端末記憶部42に記憶された制御プログラムを実行することで、端末処理部43が、美容促進システム1として実現すべき各機能のうちの少なくとも一部を実現してもよい。
The terminal storage unit 42 has a function of storing various control programs and various data necessary for the terminal processing unit 43 to operate. The terminal storage unit 42 is realized by various storage media such as HDD, SSD, and flash memory.
By executing the control program stored in the terminal storage unit 42, the terminal processing unit 43 may realize at least a part of each function to be realized as the beauty promotion system 1.
 端末処理部43は、携帯端末40の各部を制御するコンピュータであり、例えば、中央処理装置(CPU)やマイクロプロセッサ、ASIC、FPGAなどであってもよい。なお、端末処理部43は、これらの例に限られず、携帯端末40の各部を制御するコンピュータであれば、どのようなものであってもよい。 The terminal processing unit 43 is a computer that controls each unit of the mobile terminal 40, and may be, for example, a central processing unit (CPU), a microprocessor, an ASIC, an FPGA, or the like. The terminal processing unit 43 is not limited to these examples, and may be any computer as long as it controls each unit of the mobile terminal 40.
 端末処理部43は、受付部43Aを備えている。受付部43Aは、美容促進装置30から送信されてきた撮像データや比較結果を受付けて、端末側表示部45に表示する。
 カメラ44は、ユーザ50の操作により、撮像を行うことができる。本実施形態に係る撮像装置10に代えて、携帯端末40のカメラ44により、撮像データを取得して、美容促進装置30に送信してもよい。
 端末側表示部45は、美容促進装置30により処理された比較結果を示す情報を表示するモニタ装置である。端末側表示部45は、比較結果とともに、撮像データを表示することができる。
The terminal processing unit 43 includes a reception unit 43A. The reception unit 43A receives the imaging data and the comparison result transmitted from the beauty promotion device 30, and displays them on the terminal side display unit 45.
The camera 44 can take an image by the operation of the user 50. Instead of the imaging device 10 according to the present embodiment, the imaging data may be acquired by the camera 44 of the mobile terminal 40 and transmitted to the beauty promotion device 30.
The terminal-side display unit 45 is a monitor device that displays information indicating the comparison result processed by the beauty promotion device 30. The terminal side display unit 45 can display the imaging data together with the comparison result.
 次に、図5から図6を用いて、美容促進システム1の制御フロー、および美容促進システム1における処理の内容について説明する。
 図5は、美容促進システム1における処理フローを示す図であり、図6は、頂点認識部33Aが、頬上の頂点P3を認識する処理における模式図である。
Next, the control flow of the beauty promotion system 1 and the content of processing in the beauty promotion system 1 will be described with reference to FIGS. 5 to 6.
FIG. 5 is a diagram showing a processing flow in the beauty promotion system 1, and FIG. 6 is a schematic diagram in processing in which the vertex recognition unit 33A recognizes the vertex P3 on the cheek.
 図5に示すように、本実施形態に係る美容促進方法では、まず撮像装置10により、ユーザ50の顔を撮像する撮像ステップ(S501)を行う。撮像ステップでは、ユーザ50の顔の表情による変化を抑えるために、例えば奥歯を軽く噛合わせる等をして、常に同じ表情とすることが望ましい。
 本実施形態では、撮像装置10は3Dデータを取得する。3Dデータは、3Dカメラで撮像することで取得することができる。
As shown in FIG. 5, in the beauty promotion method according to the present embodiment, first, the imaging device 10 performs an imaging step (S501) of imaging the face of the user 50. In the imaging step, in order to suppress a change due to the facial expression of the user 50, it is desirable that the back teeth are lightly engaged, for example, so that the facial expression is always the same.
In the present embodiment, the imaging device 10 acquires 3D data. 3D data can be acquired by capturing an image with a 3D camera.
 次に、頂点認識部33Aが、撮像装置10から送信された撮像データを用いて各頂点を認識する頂点認識ステップ(S502)を行う。
 頂点認識ステップでは、1つの画定領域を構成する3つの頂点として、2つの固定点Pfおよび1つの可動点Pmそれぞれの位置を認識する。ここで、各頂点の具体的な判別手法の一態様について説明する。なお、あくまでこの説明は一例であり、他の手法により各頂点を判別してもよい。
Next, the vertex recognition unit 33A performs a vertex recognition step (S502) of recognizing each vertex using the imaging data transmitted from the imaging device 10.
In the vertex recognition step, the positions of the two fixed points Pf and the one movable point Pm are recognized as three vertices forming one demarcated area. Here, one aspect of a specific method of discriminating each vertex will be described. Note that this description is merely an example, and each vertex may be discriminated by another method.
 図3に示すように、頂点認識部33Aは、撮像データを3次元的に評価して、各頂点を認識する。まず、上側画定領域A1を構成する3つの頂点のうち、一方の固定点Pfをなす深鼻点P1については、顔の鼻根部のうち、最も窪んだ部分を特定し、深鼻点P1として認識する。
 次に、他方の固定点Pfをなすこめかみの頂点P2については、顔のこめかみ部分のうち、最も窪んだ部分をこめかみの頂点P2として認識する。なお、こめかみの頂点P2は、正面視における顔の左右方向の外端部のうち、深鼻点P1と瞳の中心、または目頭とを結ぶ直線が通過する部分としてもよい。
As shown in FIG. 3, the vertex recognition unit 33A evaluates the imaging data three-dimensionally and recognizes each vertex. First, for the deep nose point P1, which forms one of the fixed points Pf among the three vertices of the upper demarcated region A1, the most recessed part of the nose root of the face is identified and recognized as the deep nose point P1.
Next, regarding the apex P2 of the temple, which forms the other fixed point Pf, the most recessed part of the temple area of the face is recognized as the apex P2 of the temple. Alternatively, the apex P2 of the temple may be taken as the part of the lateral outer edge of the face, in front view, through which a straight line connecting the deep nose point P1 with the center of the pupil or the inner corner of the eye passes.
Further, of the three vertices, for the upper-cheek apex P3, which forms the movable point Pm, the most raised part of the upper cheek of the face, near the vertical line passing outside the pupil, is recognized as the upper-cheek apex P3. At this time, as shown in FIG. 6, contour lines may be projected onto the imaged data so that the most raised part is recognized as the upper-cheek apex P3.
By performing this processing on both the left and right sides, the vertices forming the pair of left and right upper demarcated areas A1 are recognized.
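The contour-based peak detection described above can be sketched as follows. The patent does not specify an implementation, so this is only an illustrative assumption: the 3D capture data is treated as a grid of surface elevations, and the most raised point inside a search window (for example, the upper-cheek area near the vertical line outside the pupil) is taken as the apex. The function name, the grid representation, and the window bounds are all hypothetical.

```python
# Sketch of peak detection for the cheek apex P3: within a search window,
# take the point of maximum elevation in the 3D capture data.
# The grid representation and window bounds are assumptions for illustration.

def find_most_raised_point(height_map, row_range, col_range):
    """Return (row, col, height) of the highest point inside the window.

    height_map: 2D list of surface elevations sampled from the 3D data.
    row_range, col_range: (start, stop) index pairs bounding the search
    window (e.g. the upper-cheek area near the vertical through the pupil).
    """
    best = None
    for r in range(*row_range):
        for c in range(*col_range):
            h = height_map[r][c]
            if best is None or h > best[2]:
                best = (r, c, h)
    return best

# Toy 4x4 elevation grid: the bump at (2, 1) plays the role of P3.
grid = [
    [0.0, 0.1, 0.1, 0.0],
    [0.1, 0.4, 0.3, 0.1],
    [0.2, 0.9, 0.5, 0.1],
    [0.1, 0.3, 0.2, 0.0],
]
print(find_most_raised_point(grid, (0, 4), (0, 4)))  # (2, 1, 0.9)
```

In practice the window would be positioned from landmarks such as the pupil, and the same routine could locate recessed points (the deep nose point P1 or the temple apex P2) by searching for the minimum elevation instead.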
Next, as shown in FIG. 3, of the three vertices forming the lower demarcated area A2, for the inferior nose point P4, which forms one of the fixed points Pf, the most recessed part of the area below the nose is recognized as the inferior nose point P4.
Next, for the inferior ear point P5, which forms the other fixed point Pf, the most recessed part of the face located below the ear is recognized as the inferior ear point P5.
Further, of the three vertices, for the lower-cheek apex P6, which forms the movable point Pm, the most raised part of the lower cheek of the face, in the bulge beside the corner of the mouth near the vertical line passing outside the pupil, is recognized as the lower-cheek apex P6. When recognizing the lower-cheek apex P6 as well, contour lines may be projected onto the imaged data so that the most raised part is recognized as the lower-cheek apex P6.
By performing this processing on both the left and right sides, the vertices forming the pair of left and right lower demarcated areas A2 are recognized.
Note that, instead of the vertex recognition methods described above, image processing may be performed that identifies the vertices forming each demarcated area by, for example, comparing the captured image data with the vertex positions in face data of a plurality of people registered in advance.
Alternatively, the position of each vertex may be identified by superimposing the latest imaged data on past imaged data, or the operator 60 may specify the position of each vertex by selecting an appropriate location for each vertex on the imaged data.
Next, the area demarcating unit 33B performs an area demarcating step (S503) of demarcating a demarcated area using the vertex data identified in the vertex recognition step.
In the area demarcating step, a triangular demarcated area is demarcated by straight lines connecting the respective vertices.
Next, the area calculation unit 33C performs an area calculation step (S504) of calculating the area of the demarcated area defined in the area demarcating step.
In the area calculation step, the area of the demarcated area is calculated using the coordinate data of the vertices.
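Since the embodiment works with 3D data, one plausible way to compute the area of the triangular demarcated area directly from the (x, y, z) coordinates of its three vertices is half the magnitude of the cross product of two edge vectors. This is an illustrative sketch under that assumption; the function name is hypothetical and the patent does not prescribe a specific formula.

```python
import math

def triangle_area_3d(p1, p2, p3):
    """Area of the demarcated triangle from three (x, y, z) vertex coordinates.

    Computed as half the magnitude of the cross product of two edge
    vectors, which works for vertices in any orientation in 3D space.
    """
    ax, ay, az = (p2[i] - p1[i] for i in range(3))
    bx, by, bz = (p3[i] - p1[i] for i in range(3))
    cx = ay * bz - az * by
    cy = az * bx - ax * bz
    cz = ax * by - ay * bx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

# Example: a right triangle with legs 3 and 4 in the xy-plane has area 6.
print(triangle_area_3d((0, 0, 0), (3, 0, 0), (0, 4, 0)))  # 6.0
```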
Next, the area comparison unit 33D performs an area comparison step (S505) of comparing the area of the demarcated area calculated in the area calculation step with a reference area.
In the area comparison step, the area of the demarcated area is compared with a reference area that is known in advance as the area corresponding to that demarcated area. In this description, the area of the demarcated area obtained in a past measurement is set as the reference area.
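The comparison against the reference area can be expressed, for illustration, as a signed percentage change of the kind reported in the evaluation results; the function name and the example areas are assumptions, not values from the patent.

```python
def compare_with_reference(current_area, reference_area):
    """Signed change of the demarcated area relative to the reference
    area, as a percentage (negative means the area shrank)."""
    return (current_area - reference_area) / reference_area * 100.0

# Hypothetical example: an area shrinking from 40.0 to 30.8 square
# centimeters corresponds to a 23% reduction.
print(round(compare_with_reference(30.8, 40.0), 1))  # -23.0
```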
Finally, the display processing unit 33E performs a display processing step (S506) of outputting information indicating the comparison result.
In the display processing step, the result of the comparison by the area comparison unit 33D between the area of the demarcated area and the reference area is displayed on the device-side display unit 34 and the terminal-side display unit 45. The comparison result may also include observations on the current result and information suggesting measures (such as facial massage) for the user 50 to work on in the future. Note that the comparison result need not be displayed on the terminal-side display unit 45.
Next, the results of evaluation by the beauty promotion system 1 and their effects will be described with reference to FIGS. 7 and 8.
FIG. 7 shows an example of the content displayed by the display processing unit 33E: (a) imaging data captured two months earlier and (b) imaging data at the time of evaluation. FIG. 8 shows another example of the content displayed by the display processing unit 33E, likewise showing (a) imaging data captured two months earlier and (b) imaging data at the time of evaluation.
In the example of the comparison result shown in FIG. 7, the area of the upper demarcated area A1 decreased by about 23% and the area of the lower demarcated area A2 decreased by about 33% compared with two months earlier. As a result, a youthful, plump appearance emerged, and it can be seen that the visual impression improved.
In the other example of the comparison result shown in FIG. 8, the area of the upper demarcated area A1 decreased by about 21.5% and the area of the lower demarcated area A2 decreased by about 25% compared with two months earlier. As a result, the face took on a well-balanced, graceful expression, and it can be seen that the visual impression improved.
As described above, according to the beauty promotion system 1 of the present embodiment, the vertex recognition unit 33A recognizes, from the imaged data of the face of the user 50, the positions of two fixed points Pf, which are specified depending on the skeleton of the face, and one movable point Pm, which is specified depending on the muscle and fat of the face.
Next, the area demarcating unit 33B demarcates a triangular demarcated area by straight lines connecting the identified vertices, and the area calculation unit 33C calculates the area of the demarcated area. The area comparison unit 33D then compares the area of the demarcated area with the reference area. This makes it possible to quantitatively evaluate changes in facial proportions caused by age-related deterioration and by improvement measures, thereby contributing to beauty promotion.
Moreover, since the demarcated area is defined by two fixed points Pf and one movable point Pm, variation in identifying the position of the movable point Pm, which is difficult to recognize, is suppressed compared with, for example, a configuration in which the demarcated area is defined by two or three movable points Pm, so that accurate evaluation can be performed.
Furthermore, by evaluating the area of the demarcated region, the numerical values handled, and hence the amount of change, become larger than when, for example, the position of the movable point Pm is evaluated by its distance from a fixed point Pf.
As a result, even when the change in facial proportions over a given period is too subtle to discern from the imaged data alone, the user 50 can more easily recognize the degree of change in the face and thus gain motivation for beauty promotion.
Furthermore, existing facial proportion analyses mostly depend on the skeleton (for example, the so-called golden ratio of the face); since such factors are congenital and cannot be changed, it has been difficult to raise motivation for beauty efforts other than cosmetic surgery. By contrast, the beauty promotion system 1 does not depend solely on the skeleton but evaluates changes in muscle and fat due to age-related deterioration, which can be improved by self-care, and can therefore raise the user's motivation for beauty.
In addition, since the beauty promotion system 1 includes the display processing unit 33E, which outputs information indicating the result of comparing the area of the demarcated area with the reference area, the quantitative evaluation result can easily be checked by displaying it, for example, on the portable terminal 40 of the user 50.
Further, since the area demarcating unit 33B demarcates a pair of left and right demarcated areas with reference to the midline O1 of the face, beauty promotion aimed at achieving left-right balanced facial proportions can be performed.
Further, the area demarcating unit 33B demarcates the demarcated area by the two fixed points Pf specified by the deep nose point P1 and the temple apex P2, and the one movable point Pm specified by the upper-cheek apex P3.
Therefore, the proportions around the upper part of the cheek can be quantitatively evaluated, and changes in upper-cheek sagging that tends to become a concern with aging (for example, the so-called Golgo line formed between the nasolabial fold and the cheekbone) can be confirmed.
Further, since the vertex recognition unit 33A evaluates the imaged data three-dimensionally to recognize the deep nose point P1, the temple apex P2, and the upper-cheek apex P3, each vertex can easily be recognized regardless of the shape of the face of the user 50.
Further, since the area demarcating unit 33B demarcates two types of demarcated areas spaced apart in the vertical direction of the face, evaluating the upper and lower sides of the face separately makes it possible to quantitatively evaluate the proportions of the entire face and to promote beauty even more effectively.
Further, since the area demarcating unit 33B demarcates the demarcated area by the two fixed points Pf specified by the inferior nose point P4 and the inferior ear point P5, and the one movable point Pm specified by the lower-cheek apex P6, the proportions around the lower part of the cheek can be quantitatively evaluated, and, for example, changes in lower-cheek sagging that tends to become a concern with aging can be confirmed.
Further, since the vertex recognition unit 33A evaluates the imaged data three-dimensionally to recognize the inferior nose point P4, the inferior ear point P5, and the lower-cheek apex P6, each vertex can easily be recognized regardless of the shape of the face of the user 50.
When the area comparison unit 33D uses, as the reference area, the area of the demarcated area of the user 50 at a time a certain period before the imaged data was captured, changes in facial proportions over time can be quantitatively evaluated, so that the beauty effect can be grasped accurately.
When the area comparison unit 33D uses, as the reference area, the area of the demarcated area in an ideal model of the face desired by the user 50, it can be quantitatively confirmed how close the user has come to the target. This helps maintain the motivation of the user 50 for beauty and enables effective beauty promotion.
Further, since the beauty promotion system 1 includes the beauty promotion device 30 and the imaging device 10 that images the face, the imaged data of the face of the user 50 can easily be acquired and evaluated by the beauty promotion device 30.
(Modifications)
Next, various modifications of the beauty promotion system 1 according to the first embodiment will be described with reference to FIGS. 9 to 13. In the modifications described here, the number of vertices recognized by the vertex recognition unit, and the demarcated region (geometric shape) used for evaluation, differ from those of the first embodiment.
In the description of each modification, only the differences from the first embodiment will be described, and descriptions of common configurations and common effects will be omitted.
(Modification 1)
FIG. 9 is a block diagram showing a configuration example of the beauty promotion device 30 according to the modifications. FIG. 10 shows an example of the vertices recognized by the vertex recognition unit 33F in Modification 1, where FIG. 10(a) is a front view of the imaging data and FIG. 10(b) is a side view of the imaging data.
In Modification 1, the number of vertices recognized by the vertex recognition unit 33F, and the geometric shape recognized based on them, differ from those of the first embodiment.
As shown in FIGS. 9 and 10, in Modification 1, the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the positions of one fixed point (P1) and one movable point (P3) on each of the left and right sides of the face, about the midline, in the upper half of the face.
Then, the geometric shape defining unit 33G defines a geometric shape whose periphery includes the vertices whose positions were recognized by the vertex recognition unit 33F.
In the illustrated example, a straight line L1 is defined as the geometric shape, in the upper half of the face, on each of the left and right sides about the midline. The geometric shape defining unit 33G fulfills the same function as the area demarcating unit 33B in the first embodiment.
The geometric shape defining unit 33G defines a pair of left and right geometric shapes with reference to the midline of the face; that is, it defines a pair of left and right straight lines L1.
In addition, the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the positions of one fixed point (P4) and one movable point (P6) on each of the left and right sides of the face, about the midline, in the lower half of the face.
The geometric shape defining unit 33G then defines two types of geometric shapes spaced apart in the vertical direction of the face. That is, as the geometric shape in the lower half of the face, the geometric shape defining unit 33G defines a straight line L2 on each of the left and right sides about the midline, thus defining a pair of left and right straight lines L2.
In Modification 1, the feature amount calculation unit 33H then calculates a feature amount indicating the length of the geometric shape. Here, the feature amount calculation unit 33H fulfills the same function as the area calculation unit 33C in the first embodiment.
That is, the feature amount calculation unit 33H calculates the length of the geometric shape when it is a straight line, and calculates its area when it is a figure. In the illustrated example, since the geometric shapes are the straight lines L1 and L2, their respective lengths are calculated.
The feature amount comparison unit 33I then compares the feature amount calculated by the feature amount calculation unit 33H with a reference value that is known in advance as the value corresponding to the geometric shape. That is, the feature amount comparison unit 33I fulfills the same function as the area comparison unit 33D in the first embodiment.
Here, the data storage unit 32 stores a reference value instead of the reference area stored in the first embodiment. The reference value is a concept that extends the notion of the reference area in the first embodiment to straight lines, and indicates the length of a straight line or the area of a region.
That is, as in the first embodiment, the feature amount comparison unit 33I can use, as the known reference value, the feature amount of the user at a time a certain period before the imaged data was captured, or the feature amount in an ideal model of the face desired by the user.
The display processing unit 33E displays the result on the device-side display unit 34 and the terminal-side display unit 45.
Next, a processing flow of the beauty promotion system 1 according to Modification 1 will be described with reference to FIG. FIG. 11 is a diagram showing a processing flow in the beauty promotion device 30 according to the modification.
As shown in FIG. 11, first, the imaging device 10 performs an imaging step (S601) of imaging the face of the user 50.
Next, the vertex recognition unit 33F performs a vertex recognition step (S602) of recognizing each vertex by using the imaging data transmitted from the imaging device 10.
Next, the geometric shape defining unit 33G performs a geometric shape defining step (S603) of defining a geometric shape using each vertex data identified in the vertex recognizing step.
In the geometric shape defining step in this modified example, as described above, the geometric shape defining portion 33G defines the straight lines L1 and L2 in left and right pairs.
Next, the feature amount calculation unit 33H performs a feature amount calculation step (S604) of calculating the length of the geometric shape defined by the geometric shape definition step.
In the feature amount calculation step in the first modification, the lengths of the straight lines L1 and L2, which are geometric shapes, are calculated using the coordinate data of each vertex.
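The length of a straight-line geometric shape such as L1 or L2 follows directly from the coordinate data of its two endpoints as the Euclidean distance; a minimal sketch with a hypothetical function name:

```python
import math

def line_length(p_fixed, p_movable):
    """Euclidean length of the straight line joining a fixed point and a
    movable point, from their (x, y, z) coordinate data."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_fixed, p_movable)))

# Example: the distance between (0, 0, 0) and (3, 4, 0) is 5.
print(line_length((0, 0, 0), (3, 4, 0)))  # 5.0
```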
Next, the feature amount comparison unit 33I performs a feature amount comparison step (S605) of comparing the feature amount of the geometric shape calculated in the feature amount calculation step with a known reference value.
In the feature amount comparison step, the value of the feature amount is compared with the known reference value. In this description, the lengths of the straight lines L1 and L2 obtained in past measurements are set as the reference values.
Finally, the display processing unit 33E performs a display processing step (S606) of outputting information indicating the comparison result.
In the display processing step, the result of the comparison by the feature amount comparison unit 33I between the feature amount of the geometric shape and the reference value is displayed on the device-side display unit 34 and the terminal-side display unit 45. The comparison result may also include observations on the current result and information suggesting measures (such as facial massage) for the user 50 to work on in the future. Note that the comparison result need not be displayed on the terminal-side display unit 45.
By making such a comparison, changes in facial proportions caused by age-related deterioration and by improvement measures can be quantitatively evaluated, contributing to beauty promotion as in the first embodiment described above.
(Modification 2)
Next, the beauty promotion system 1 according to Modification 2 will be described with reference to FIG. 12. FIG. 12 shows an example of the vertices recognized by the vertex recognition unit 33F in Modification 2, where FIG. 12(a) is a front view of the imaging data and FIG. 12(b) is a side view of the imaging data.
As shown in FIG. 12, in Modification 2, the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the positions of four fixed points (P1, P2, P4, P5) and one movable point (P6).
The geometric shape defining unit 33G then defines a geometric shape whose periphery includes the vertices whose positions were recognized by the vertex recognition unit 33F. In the illustrated example, the region of a pentagon A1 connecting the vertices is defined as the geometric shape; in this modification, a pair of left and right pentagons A1 is defined over the entire face.
The feature amount calculation unit 33H then calculates, as the feature amount, the area of the pentagon A1 serving as the geometric shape. That is, the feature amount calculation unit 33H takes the length as the feature amount when the geometric shape is a straight line, and the area when the geometric shape is a figure.
The feature amount comparison unit 33I then compares the feature amount calculated by the feature amount calculation unit 33H with a reference value that is known in advance as the value corresponding to the geometric shape.
Although the above description shows a configuration with one movable point, the vertex recognition unit 33F may recognize two or more movable points when defining the pentagon A1. The geometric shape defining unit 33G may also define different pentagons for the upper half and the lower half of the face.
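For a polygonal geometric shape such as the pentagon A1, the area can be computed from the ordered vertices with the shoelace formula. The sketch below assumes the vertices have been projected to 2D front-view coordinates; the patent does not specify the calculation, and the function name is hypothetical.

```python
def polygon_area(vertices):
    """Area of a simple polygon (e.g. pentagon A1) from its vertices
    listed in order, using the shoelace formula on 2D coordinates.
    For 3D capture data, the vertices are assumed to be projected onto
    the frontal plane first."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Sanity check with a unit square (area 1), then a five-vertex pentagon.
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))        # 1.0
print(polygon_area([(0, 0), (2, 0), (3, 1), (1, 3), (-1, 1)]))  # 7.0
```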
(Modification 3)
Next, the beauty promotion device 30 according to Modification 3 will be described with reference to FIG. 13. FIG. 13 shows an example of the vertices recognized by the vertex recognition unit 33F in Modification 3, where FIG. 13(a) is a front view of the imaging data and FIG. 13(b) is a side view of the imaging data.
As shown in FIG. 13, in Modification 3, the vertex recognition unit 33F recognizes, from the imaged data of the user's face, the positions of two fixed points (P2, P5) and one movable point (P6).
The geometric shape defining unit 33G then defines the region of a circle C1 as a geometric shape whose periphery includes the vertices whose positions were recognized by the vertex recognition unit 33F, defining a pair of left and right circles C1 over the entire face.
The feature amount calculation unit 33H then calculates the area of the circle, and the feature amount comparison unit 33I compares the feature amount calculated by the feature amount calculation unit 33H with a reference value that is known in advance as the value corresponding to the geometric shape.
Although the above description shows a configuration with one movable point, the vertex recognition unit 33F may recognize two movable points when defining the circle C1. The geometric shape defining unit 33G may also define different circles for the upper half and the lower half of the face.
As described above, the geometric shape to be evaluated may be a straight line, a circle, or a polygon.
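For the circle C1 passing through the three recognized points, the area follows from the circumradius formula R = abc / (4K), where a, b, c are the pairwise distances between the points and K is the area of the triangle they form. The sketch below assumes 2D coordinates and a hypothetical function name; it requires Python 3.8+ for math.dist.

```python
import math

def circle_area_through(p1, p2, p3):
    """Area of the circle (e.g. C1) passing through three recognized
    points, via the circumradius formula R = abc / (4K)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Shoelace area of the triangle formed by the three points.
    k = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
            - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    r = a * b * c / (4.0 * k)  # undefined if the points are collinear
    return math.pi * r * r

# Example: three points on the unit circle give an area of pi.
print(round(circle_area_through((1, 0), (0, 1), (-1, 0)), 4))  # 3.1416
```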
(Second Embodiment)
Next, a beauty promotion system according to a second embodiment will be described with reference to FIGS. 14 and 15. FIG. 14 illustrates the beauty promotion system 1B according to the second embodiment of the present invention in use. FIG. 15 is a block diagram showing a configuration example of the beauty promotion device 30B according to the second embodiment.
In the second embodiment, the overall configuration of the device differs from that of the first embodiment. In the description of the second embodiment, only the differences from the first embodiment will be described, and descriptions of common configurations and common effects will be omitted.
As shown in FIG. 14, the beauty promotion system 1B according to the second embodiment is a smart mirror that combines the functions of a mirror and a display by showing the captured content on a mirror-finished display surface; the imaging device 10 having an imaging unit is built into the beauty promotion device 30B that constitutes the beauty promotion system 1B.
The imaging unit of the imaging device 10 images the area in front of the device, where the user is located, from above the display surface of the device-side display unit 34.
The display surface of the device-side display unit 34 of the beauty promotion device 30B is mirror-finished.
Therefore, when nothing is displayed on the display surface, the device-side display unit 34 can be used as a mirror.
The device-side display unit 34 has a function of displaying on the display surface the imaged data of the face of the user facing the display surface. Therefore, the user can confirm the imaged data of the user's face imaged by the imaging device 10 and the evaluation result of the change in the proportion performed by the device processing unit 33 using the imaged data by looking at the mirror. ..
As a result, it is possible to confirm the change in the proportions of the face at the timing when the user adjusts his appearance, and it is possible to ensure the convenience of the user.
The beauty promotion device 30B may also be used with one lateral half of the face shown as a mirror while image data is displayed on the other half, or the entire surface may be used as a mirror with the evaluation results overlaid on it. Data processed into the ideal facial proportions may also be displayed as an overlay.
Needless to say, the beauty promotion system 1 is not limited to the above embodiments and may be realized by other methods. Various modifications are described below.
For example, the control program of the above embodiments may be provided stored on a computer-readable storage medium. The storage medium can store the control program on a "non-transitory tangible medium". The storage medium may be any suitable medium such as an HDD or SSD, or any suitable combination of two or more of these, and may be volatile, non-volatile, or a combination of both. The storage medium is not limited to these examples; any device or medium capable of storing the control program may be used.
The beauty promotion system 1 can realize each function described in the embodiments by, for example, reading the control program from the storage medium and executing it.
The control program may also be an application program delivered to the beauty promotion system 1 via an arbitrary transmission medium (a communication network, broadcast wave, etc.). The beauty promotion system 1 realizes the functions of the functional units described in each embodiment by, for example, executing a control program downloaded via the Internet or the like.
The control program may be implemented using, for example, a scripting language such as ActionScript or JavaScript (registered trademark), an object-oriented programming language such as Objective-C or Java (registered trademark), or a markup language such as HTML5.
At least part of the processing in the beauty promotion system 1 may be realized by cloud computing comprising one or more computers. Each functional unit of the beauty promotion system 1 may be realized by one or more circuits that realize the functions described in the above embodiments, and a single circuit may realize the functions of multiple functional units.
Although the embodiments of the present disclosure have been described with reference to the drawings and examples, it should be noted that those skilled in the art can easily make various variations and modifications based on the present disclosure, and that such variations and modifications fall within its scope. For example, the functions included in each means and each step can be rearranged so long as no logical contradiction arises, and multiple means or steps can be combined into one or divided. The configurations shown in the respective embodiments may also be combined as appropriate.
(Appendix)
In the beauty promotion method of the present invention, a computer executes: a vertex recognition step of recognizing, from image data of a user's face, the position of at least one fixed point specified depending on the skeleton of the face and of one movable point specified depending on the muscles and fat of the face; a geometric shape definition step of defining a geometric shape whose periphery includes the vertices whose positions were recognized in the vertex recognition step; a feature amount calculation step of calculating a feature amount indicating the length or area of the geometric shape; and a feature amount comparison step of comparing the feature amount calculated in the feature amount calculation step with a feature amount known as a value corresponding to the geometric shape.
In the vertex recognition step, two of the fixed points and one of the movable points may be recognized; the geometric shape definition step may be a region definition step of defining a triangular region bounded by straight lines connecting the vertices whose positions were recognized in the vertex recognition step; the feature amount calculation step may be an area calculation step of calculating the area of the defined region; and the feature amount comparison step may be an area comparison step of comparing the area calculated in the area calculation step with a reference area known as the area of the region corresponding to the defined region.
The beauty promotion program of the present invention causes a computer to realize: a vertex recognition function of recognizing, from image data of a user's face, the position of at least one fixed point specified depending on the skeleton of the face and of one movable point specified depending on the muscles and fat of the face; a geometric shape definition function of defining a geometric shape whose periphery includes the vertices whose positions were recognized by the vertex recognition function; a feature amount calculation function of calculating a feature amount indicating the length or area of the geometric shape; and a feature amount comparison function of comparing the feature amount calculated by the feature amount calculation function with a feature amount known as a value corresponding to the geometric shape.
In the vertex recognition function, two of the fixed points and one of the movable points may be recognized; the geometric shape definition function may be a region definition function of defining a triangular region bounded by straight lines connecting the vertices whose positions were recognized by the vertex recognition function; the feature amount calculation function may be an area calculation function of calculating the area of the defined region; and the feature amount comparison function may be an area comparison function of comparing the area calculated by the area calculation function with a reference area known as the area of the region corresponding to the defined region.
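The triangular-region variant described above (recognize two fixed points and one movable point, define the triangle they bound, compute its area, and compare it with a reference area) can be sketched as follows. This is an illustrative sketch only, assuming 2D landmark coordinates already extracted from the image data; the function names and sample coordinates are hypothetical and are not part of the disclosure.

```python
def triangle_area(p1, p2, p3):
    """Area of the triangle whose vertices are three (x, y) landmarks,
    via the shoelace (cross-product) formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0


def compare_area(fixed_a, fixed_b, movable, reference_area):
    """Area comparison step: measure the triangular region defined by two
    fixed points and one movable point, and return the measured area
    together with its ratio to a known reference area (e.g. a measurement
    taken a certain period earlier, or an ideal-model value)."""
    area = triangle_area(fixed_a, fixed_b, movable)
    return area, area / reference_area


# Hypothetical 2D landmarks: two fixed points (e.g. deep nose point and
# temple apex) and one movable point (vertex on the cheek), plus an
# assumed reference area.
area, ratio = compare_area((0.0, 0.0), (60.0, 10.0), (40.0, 50.0), 1400.0)
print(area)   # 1300.0
print(ratio)  # below 1.0, i.e. smaller than the reference region
```

With these sample coordinates the shoelace formula gives an area of 1300.0; a ratio below 1 would indicate that the defined region has shrunk relative to the reference, which is the kind of proportion change the comparison step is meant to surface.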
1  Beauty promotion system
10 Imaging device
20 Network
30 Beauty promotion device
40 Mobile terminal
50 User
60 Operator

Claims (17)

1.  A beauty promotion device comprising:
     a vertex recognition unit that recognizes, from image data of a user's face, the position of at least one fixed point specified depending on the skeleton of the face and of at least one movable point specified depending on the muscles and fat of the face;
     a geometric shape definition unit that defines a geometric shape whose periphery includes the vertices whose positions were recognized by the vertex recognition unit;
     a feature amount calculation unit that calculates a feature amount indicating the length or area of the geometric shape; and
     a feature amount comparison unit that compares the feature amount calculated by the feature amount calculation unit with a reference value known as a value corresponding to the geometric shape.
2.  The beauty promotion device according to claim 1, wherein
     the vertex recognition unit recognizes two of the fixed points and one of the movable points,
     the geometric shape definition unit is a region definition unit that defines a triangular region bounded by straight lines connecting the vertices whose positions were recognized by the vertex recognition unit,
     the feature amount calculation unit is an area calculation unit that calculates, as the feature amount, the area of the defined region, and
     the feature amount comparison unit is an area comparison unit that compares, as the comparison of the feature amount with the known reference value, the area of the defined region calculated by the area calculation unit with a reference area known as the area of the region corresponding to the defined region.
3.  The beauty promotion device according to claim 1 or 2, further comprising a display processing unit that outputs information indicating the result of the comparison, by the feature amount comparison unit, of the feature amount with the reference value.
4.  The beauty promotion device according to any one of claims 1 to 3, wherein the geometric shape definition unit defines a pair of left and right geometric shapes with reference to the midline of the face.
5.  The beauty promotion device according to any one of claims 2 to 4, wherein the vertex recognition unit
     recognizes, as the two fixed points, the vertices specified by the deep nose point and the apex of the temple, and
     recognizes, as the one movable point, a vertex on the upper cheek.
6.  The beauty promotion device according to claim 5, wherein the vertex recognition unit evaluates the image data three-dimensionally, and
     recognizes, as the deep nose point, the most recessed part of the nasal root of the face,
     recognizes, as the apex of the temple, the most recessed part of the temple area of the face, and
     recognizes, as the vertex on the upper cheek, the most raised part of the upper cheek of the face near the vertical line passing outside the pupil.
7.  The beauty promotion device according to any one of claims 1 to 6, wherein the geometric shape definition unit defines two types of the geometric shapes spaced apart in the vertical direction of the face.
8.  The beauty promotion device according to any one of claims 2 to 7, wherein the vertex recognition unit
     recognizes, as the two fixed points, the vertices specified by the subnasal point and the subauricular point, and
     recognizes, as the one movable point, a vertex on the lower cheek.
9.  The beauty promotion device according to claim 8, wherein the vertex recognition unit evaluates the image data three-dimensionally, and
     recognizes, as the subnasal point, the most recessed part of the area below the nose of the face,
     recognizes, as the subauricular point, the most recessed part of the area below the ear of the face, and
     recognizes, as the vertex on the lower cheek, the most raised part of the bulge beside the corner of the mouth, in the lower cheek of the face near the vertical line passing outside the pupil.
10.  The beauty promotion device according to any one of claims 1 to 9, wherein the feature amount comparison unit uses, as the reference value, the feature amount of the user at a point in time a certain period before the image data was captured.
11.  The beauty promotion device according to any one of claims 1 to 9, wherein the feature amount comparison unit uses, as the reference value, the feature amount of an ideal model of the face desired by the user.
12.  A beauty promotion system comprising:
     the beauty promotion device according to any one of claims 1 to 11; and
     an imaging device comprising an imaging unit that captures an image of the user's face and a transmission unit that transmits the image data captured by the imaging unit to the beauty promotion device,
     wherein the beauty promotion device further comprises a receiving unit that receives the image data.
13.  The beauty promotion system according to claim 12, comprising a device-side display unit that contains the imaging unit and has a mirror-finished display surface,
     wherein the device-side display unit is a smart mirror capable of displaying, on the display surface, image data of the face of the user facing the display surface.
14.  A beauty promotion method in which a computer executes:
     a vertex recognition step of recognizing, from image data of a user's face, the position of at least one fixed point specified depending on the skeleton of the face and of one movable point specified depending on the muscles and fat of the face;
     a geometric shape definition step of defining a geometric shape whose periphery includes the vertices whose positions were recognized in the vertex recognition step;
     a feature amount calculation step of calculating a feature amount indicating the length or area of the geometric shape; and
     a feature amount comparison step of comparing the feature amount calculated in the feature amount calculation step with a feature amount known as a value corresponding to the geometric shape.
15.  The beauty promotion method according to claim 14, wherein
     in the vertex recognition step, two of the fixed points and one of the movable points are recognized,
     the geometric shape definition step is a region definition step of defining a triangular region bounded by straight lines connecting the vertices whose positions were recognized in the vertex recognition step,
     the feature amount calculation step is an area calculation step of calculating the area of the defined region, and
     the feature amount comparison step is an area comparison step of comparing the area of the defined region calculated in the area calculation step with a reference area known as the area of the region corresponding to the defined region.
16.  A beauty promotion program that causes a computer to realize:
     a vertex recognition function of recognizing, from image data of a user's face, the position of at least one fixed point specified depending on the skeleton of the face and of one movable point specified depending on the muscles and fat of the face;
     a geometric shape definition function of defining a geometric shape whose periphery includes the vertices whose positions were recognized by the vertex recognition function;
     a feature amount calculation function of calculating a feature amount indicating the length or area of the geometric shape; and
     a feature amount comparison function of comparing the feature amount calculated by the feature amount calculation function with a feature amount known as a value corresponding to the geometric shape.
17.  The beauty promotion program according to claim 16, wherein
     the vertex recognition function recognizes two of the fixed points and one of the movable points,
     the geometric shape definition function is a region definition function of defining a triangular region bounded by straight lines connecting the vertices whose positions were recognized by the vertex recognition function,
     the feature amount calculation function is an area calculation function of calculating the area of the defined region, and
     the feature amount comparison function is an area comparison function of comparing the area of the defined region calculated by the area calculation function with a reference area known as the area of the region corresponding to the defined region.
PCT/JP2019/012588 2018-12-03 2019-03-25 Beauty promotion device, beauty promotion system, beauty promotion method, and beauty promotion program WO2020115922A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019557510A JP6710883B1 (en) 2018-12-03 2019-03-25 Beauty promotion device, beauty promotion system, beauty promotion method, and beauty promotion program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018226760 2018-12-03
JP2018-226760 2018-12-03

Publications (1)

Publication Number Publication Date
WO2020115922A1 true WO2020115922A1 (en) 2020-06-11

Family

ID=70974225

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/012588 WO2020115922A1 (en) 2018-12-03 2019-03-25 Beauty promotion device, beauty promotion system, beauty promotion method, and beauty promotion program

Country Status (3)

Country Link
JP (1) JP6710883B1 (en)
TW (1) TWI731447B (en)
WO (1) WO2020115922A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022224511A1 (en) * 2021-04-22 2022-10-27 B-by-C株式会社 Facial care assistance system, facial care assistance method, and facial care assistance program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007083600A1 (en) * 2006-01-17 2007-07-26 Shiseido Company, Ltd. Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
JP2010179034A (en) * 2009-02-09 2010-08-19 Denso Corp Drowsiness detector, program and drowsiness detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201035877A (en) * 2009-03-24 2010-10-01 Altek Corp Human face scoring method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022224511A1 (en) * 2021-04-22 2022-10-27 B-by-C株式会社 Facial care assistance system, facial care assistance method, and facial care assistance program
JP2022167180A (en) * 2021-04-22 2022-11-04 B-by-C株式会社 Facial treatment support system, facial treatment support method, and facial treatment support program
JP7236754B2 (en) 2021-04-22 2023-03-10 B-by-C株式会社 BEAUTIFUL FACE ASSISTANCE SYSTEM, BEAUTIFUL FACE ASSISTANCE METHOD, AND BEAUTIFUL FACE ASSISTANCE PROGRAM

Also Published As

Publication number Publication date
JPWO2020115922A1 (en) 2021-02-15
TWI731447B (en) 2021-06-21
JP6710883B1 (en) 2020-06-17
TW202024634A (en) 2020-07-01


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019557510

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19892884

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19892884

Country of ref document: EP

Kind code of ref document: A1