WO2018143707A1 - Makeup evaluation system and operation method thereof - Google Patents


Info

Publication number
WO2018143707A1
Authority
WO
WIPO (PCT)
Prior art keywords
makeup
score
image
evaluation
area
Prior art date
Application number
PCT/KR2018/001412
Other languages
French (fr)
Korean (ko)
Inventor
김상이
권도혁
차석용
황도식
김태성
박두현
방기훈
어태준
전요한
황세원
Original Assignee
주식회사 엘지생활건강
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020170014403A external-priority patent/KR101872635B1/en
Application filed by 주식회사 엘지생활건강 filed Critical 주식회사 엘지생활건강
Priority to JP2019541436A priority Critical patent/JP7020626B2/en
Priority to CN201880008880.3A priority patent/CN110235169B/en
Priority to EP18748560.2A priority patent/EP3579176A4/en
Priority to US16/482,511 priority patent/US11113511B2/en
Priority claimed from KR1020180012931A external-priority patent/KR102066892B1/en
Publication of WO2018143707A1 publication Critical patent/WO2018143707A1/en
Priority to US17/389,108 priority patent/US11748980B2/en

Classifications

    • G06V 10/82 - Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • A45D 44/00 - Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • A45D 44/005 - Selecting or displaying personal cosmetic colours or hairstyle
    • A45D 2044/007 - Devices for determining the condition of hair or skin or for selecting the appropriate cosmetic or hair treatment
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 20/00 - Machine learning
    • G06N 3/045 - Neural networks; combinations of networks
    • G06N 3/08 - Neural networks; learning methods
    • G06Q 30/0643 - Electronic shopping; graphical representation of items or shoppers
    • G06Q 50/10 - ICT specially adapted for implementation of business processes of specific business sectors; services
    • G06T 11/206 - 2D image generation; drawing of charts or graphs
    • G06T 7/00 - Image analysis
    • G06T 2207/10016 - Image acquisition modality: video; image sequence
    • G06T 2207/10024 - Image acquisition modality: color image
    • G06T 2207/20081 - Special algorithmic details: training; learning
    • G06T 2207/20084 - Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30201 - Subject of image: face
    • G06V 10/17 - Image acquisition using hand-held instruments
    • G06V 10/454 - Biologically inspired filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G06V 10/764 - Recognition using classification, e.g. of video objects
    • G06V 40/169 - Face representation; holistic features and representations based on the facial image taken as a whole
    • G06V 40/171 - Face representation; local features and components; facial parts; geometrical relationships

Definitions

  • the present invention relates to a makeup evaluation system and a method of operation thereof.
  • The makeup that suits an individual may differ from person to person, so a user may have difficulty selecting makeup that suits them. After applying makeup, the user may also wonder whether the makeup looks good and which parts should be touched up.
  • Machine learning is a technology that extracts features from large amounts of data so that, when new data is input, the computer can classify it by itself according to those features.
  • Deep learning is a branch of machine learning. It is based on artificial neural networks (ANNs) for constructing artificial intelligence, and refers to technology in which a computer finds patterns in big data and classifies the data in the way humans distinguish objects.
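As a rough, purely illustrative sketch of the idea described above (not taken from the patent), a neural network maps features extracted from an image to a classification by passing them through learned weighted layers. The feature names and weights below are invented for the example:

```python
def dense(x, W, b):
    # Fully connected layer: output j is dot(W[j], x) + b[j]
    return [sum(wi * xi for wi, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

def relu(x):
    return [max(0.0, v) for v in x]

# Hypothetical features extracted from a face image (invented values):
# [lip_color_saturation, eyebrow_symmetry, skin_evenness]
features = [0.8, 0.6, 0.9]

# Invented weights for a tiny 3-2-2 network; a real deep learning
# system would learn these from a large set of labeled images.
W1, b1 = [[0.5, -0.2, 0.3], [0.1, 0.4, 0.2]], [0.0, 0.1]
W2, b2 = [[1.0, -0.5], [-1.0, 0.5]], [0.0, 0.0]

hidden = relu(dense(features, W1, b1))
logits = dense(hidden, W2, b2)
label = "good makeup" if logits[0] > logits[1] else "needs touch-up"
print(label)
```

In a real deep learning system the layers would be far deeper (e.g. convolutional) and the weights would come from training, not hand-tuning.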
  • An object of the present invention is to provide a makeup evaluation system that evaluates a user's makeup, and an operation method thereof.
  • Another object is to provide a makeup evaluation system, and an operation method thereof, that analyzes the makeup in a photograph and expresses how good the makeup is as a numerical value.
  • Another object is to provide a makeup evaluation system, and an operation method thereof, that evaluates makeup against reliable score data based on makeup experts' evaluations.
  • Another object is to provide a makeup evaluation system, and an operation method thereof, that builds a database for automatically evaluating makeup through machine learning technology.
  • Another object is to provide a makeup evaluation system, and an operation method thereof, that evaluates the makeup of each part of the user's face.
  • According to an embodiment, the makeup evaluation system includes a mobile terminal that photographs a face image and transmits the photographed face image to a makeup server, and a makeup server that stores makeup score data, detects one or more facial regions when the face image is received from the mobile terminal, calculates a makeup score for each detected facial region based on the makeup score data, and transmits the calculated makeup score to the mobile terminal.
  • The makeup server may receive a makeup subject from the mobile terminal and calculate the makeup score according to the makeup subject, and the makeup score may be calculated differently according to the shape of the detected face region and the makeup subject.
  • The makeup may be evaluated by detecting each area of the face with a detection algorithm and applying an evaluation algorithm to the detected area.
  • Each region of the face is extracted, and makeup evaluation that takes the user's characteristics into account may be performed by applying an algorithm to each extracted region, which makes precise makeup evaluation possible.
  • By extracting each region of the face and applying a different algorithm to each extracted region, it is possible to evaluate how good the makeup is in each region and to calculate an overall makeup score from the scores of the individual facial regions.
  • the face area can be recognized more accurately by using the RGB value of the area.
  • an objective makeup evaluation may be performed regardless of an evaluation medium such as a model of a mobile terminal.
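The RGB-based recognition mentioned above can be sketched, at its simplest, as a per-pixel colour test. The thresholds below are invented for illustration; a real system would derive region boundaries from learned data rather than fixed rules:

```python
def classify_pixel(r, g, b):
    """Crude, hypothetical RGB rule: lips are markedly redder than skin."""
    if r > 150 and r > g + 60 and r > b + 60:
        return "lip"
    if r > 120 and g > 90 and b > 70 and abs(r - g) < 60:
        return "skin"
    return "other"

# A tiny 2x2 "image" of (R, G, B) pixels
image = [[(210, 90, 90), (200, 150, 120)],
         [(60, 60, 60), (190, 140, 110)]]

regions = [[classify_pixel(*px) for px in row] for row in image]
print(regions)  # [['lip', 'skin'], ['other', 'skin']]
```

Grouping connected pixels with the same label would then yield candidate face regions such as the lips.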
  • FIG. 1 is a block diagram showing the configuration of a makeup evaluation system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a mobile terminal according to an exemplary embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating a makeup server according to a first embodiment of the present invention.
  • FIG. 4 is a ladder diagram illustrating a method of operating a makeup evaluation system according to a first embodiment of the present invention.
  • FIG. 5 is an exemplary diagram for describing a method for receiving, by the makeup server according to the first embodiment of the present invention, image data.
  • FIG. 6 is an exemplary diagram for describing makeup score data according to a first embodiment of the present invention.
  • FIGS. 7 and 8 are exemplary diagrams for describing a method of tuning makeup score data according to the first embodiment of the present invention.
  • FIG. 9 is a diagram for describing a makeup evaluation database generated according to a first embodiment of the present invention.
  • FIG. 10 is an exemplary view for explaining a screen for transmitting a makeup evaluation request signal according to a first embodiment of the present invention.
  • FIG. 11 is a diagram for describing a method of analyzing a face image by a makeup server according to the first exemplary embodiment of the present invention.
  • FIG. 12 is an exemplary diagram for describing a method of analyzing a makeup of a face image by the makeup analyzer according to the first embodiment of the present disclosure.
  • FIG. 13 is an exemplary diagram for describing a recreated makeup evaluation database according to the first embodiment of the present invention.
  • FIGS. 14A and 14B are exemplary diagrams for describing a makeup score screen according to the first embodiment of the present invention.
  • FIG. 15 is a view for explaining the effect of the makeup theme on the makeup evaluation according to the first embodiment of the present invention.
  • FIG. 16 is a diagram for describing a score window for each region according to the first embodiment of the present invention.
  • FIG. 17 is a view for explaining a balance evaluation of makeup according to the first embodiment of the present invention.
  • FIG. 18 is a view for explaining a no-makeup evaluation result according to the first embodiment of the present invention.
  • FIG. 19 is a block diagram illustrating a makeup server according to a second embodiment of the present invention.
  • FIG. 20 is a ladder diagram illustrating a method of operating a makeup evaluation system according to a second embodiment of the present invention.
  • FIGS. 21 to 25 are diagrams for explaining an evaluation algorithm applied to the evaluation of the eyebrow section according to the second embodiment of the present invention.
  • FIG. 26 is an exemplary diagram for describing a method of displaying an evaluation result of an eyebrow section according to a second exemplary embodiment of the present invention.
  • FIGS. 27 and 28 are diagrams for explaining an evaluation algorithm applied to the evaluation of the dark circle section according to the second embodiment of the present invention.
  • FIG. 29 is a diagram illustrating a method of displaying an evaluation result of a dark circle according to a second embodiment of the present invention.
  • FIGS. 30 to 34 are diagrams for explaining an evaluation algorithm applied when evaluating a color matching section according to the second embodiment of the present invention.
  • FIG. 35 is a diagram for describing a method of displaying an evaluation result of a color matching unit according to a second embodiment of the present invention.
  • FIG. 36 is a diagram illustrating an evaluation algorithm applied to a lip evaluation according to a second embodiment of the present invention.
  • FIG. 37 is a view illustrating a method of displaying an evaluation result of a lip section according to a second exemplary embodiment of the present invention.
  • FIG. 38 is a diagram illustrating an evaluation algorithm applied when evaluating a blemish section according to the second embodiment of the present invention.
  • FIG. 39 is an exemplary view for explaining a method of displaying an evaluation result of a blemish portion according to the second embodiment of the present invention.
  • FIG. 40 is an exemplary view for explaining a method of displaying a makeup evaluation result according to a second embodiment of the present invention.
  • FIG. 1 is a block diagram showing the configuration of a makeup evaluation system according to an embodiment of the present invention.
  • the makeup evaluation system may include a mobile terminal 10, an application server 200, and a makeup server 100.
  • the mobile terminal 10 may request a makeup evaluation.
  • the mobile terminal 10 may request makeup evaluation in relation to at least one makeup image and display a makeup evaluation result.
  • the application server 200 is a component used for operating an application for makeup evaluation and may store information necessary for operating an application for makeup evaluation.
  • the application server 200 may transmit and receive signals and data with at least one of the mobile terminal 10 and the makeup server 100 according to the execution of the makeup evaluation application.
  • the makeup server 100 may store data required for makeup evaluation.
  • the makeup server 100 may store data for identifying each part of the face, an evaluation algorithm for evaluating makeup, and the like.
  • the makeup server 100 may evaluate makeup based on the stored data or transmit information necessary for makeup evaluation to the mobile terminal 10 or the application server 200.
  • the makeup server 100 may transmit an evaluation result signal including evaluation result information of makeup to the mobile terminal 10.
  • the mobile terminal 10, the application server 200, and the makeup server 100 may transmit and receive signals to each other.
  • The mobile terminal 10 may transmit a makeup evaluation request signal to the application server 200, and when the application server 200 receives the makeup evaluation request signal, a makeup image corresponding to the received request signal may be transmitted to the makeup server 100.
  • When the makeup server 100 receives the makeup evaluation request signal, it may evaluate the makeup of the received image based on the stored data and transmit the evaluation result to the application server 200.
  • the application server 200 may transmit the evaluation result to the mobile terminal 10.
  • The application server 200 and the makeup server 100 need not be physically separate; they may transmit and receive signals to and from the mobile terminal 10 as a single server.
  • the application server 200 may be included in the makeup server 100.
  • the mobile terminal 10 may transmit a makeup evaluation request signal to the makeup server 100, and the makeup server 100 may evaluate makeup and transmit evaluation result data to the mobile terminal 10.
  • When the makeup server 100 receives the makeup evaluation request signal, it may transmit data related to the received request signal to the mobile terminal 10, and the mobile terminal 10 may evaluate the makeup based on the received data.
  • The mobile terminal 10 described herein may include a mobile phone, a smartphone, a computer, a notebook computer, a tablet PC, a wearable device, a digital TV, digital signage, a display device provided in a store selling beauty-related products such as cosmetics, and a smart mirror provided in a home or a store.
  • FIG. 2 is a block diagram illustrating a mobile terminal according to an exemplary embodiment of the present invention.
  • the mobile terminal 10 may include a wireless communication unit 11, an input unit 12, a camera 13, a display unit 14, a memory 15, a power supply unit 16, and a control unit 17.
  • The components shown in FIG. 2 are illustrated to help in understanding the mobile terminal according to the present invention; the mobile terminal may have more or fewer components than those listed above.
  • The wireless communication unit 11 may include one or more modules that enable wireless communication between the mobile terminal 10 and another mobile terminal 10, or between the mobile terminal 10 and an external server.
  • the wireless communication unit 11 may include at least one of a broadcast receiving module, a mobile communication module, a wireless internet module, a short range communication module, and a location information module.
  • the wireless communication unit 11 may transmit / receive a signal with another mobile terminal 10 or an external server.
  • the wireless communication unit 11 may transmit and receive a signal with at least one of the application server 200 and the makeup server 100.
  • the input unit 12 may receive data or a command from a user.
  • the input unit may receive an input signal through a mechanical key, a touch key, voice recognition, or the like.
  • the data or command received through the input unit 12 may be processed as a control command and transmitted to other components.
  • the camera 13 may receive an image signal input.
  • The image signal includes still images such as photographs, as well as moving pictures. Accordingly, the camera 13 may receive an image signal input by taking a picture, a video, or the like. For example, the camera 13 may capture a face image of the user.
  • the display unit 14 displays (outputs) information processed by the mobile terminal 10.
  • the display unit 14 may provide content to a user and display content received through the wireless communication unit 11 or input through the input unit 12 on the screen.
  • the display 14 may display screen information of an application program driven in the mobile terminal 10.
  • the display unit 14 may display an image being photographed or photographed by the camera 13.
  • the display 14 may display a face image photographed through the camera 13.
  • the display 14 may display the result of evaluating the makeup based on the photographed face image.
  • The display unit 14 may form a layered structure with a touch sensor, or be formed integrally with it, thereby implementing a touch screen.
  • a touch screen may function as the input unit 12 and provide an output interface between the mobile terminal 10 and the user.
  • the memory 15 stores data supporting various functions of the mobile terminal 10.
  • The memory 15 may store a plurality of application programs or applications driven by the mobile terminal 10, as well as data and instructions for operating the mobile terminal 10. At least some of these applications may be downloaded from an external server via wireless communication. Alternatively, at least some of these application programs may be present on the mobile terminal 10 from the time of shipment for basic functions of the mobile terminal 10 (for example, receiving calls, placing calls, and receiving and sending messages).
  • At least one of these application programs may be an application for makeup evaluation.
  • the power supply unit 16 receives power from an external power source or an internal power source to supply power to each component included in the mobile terminal 10.
  • The power supply unit 16 includes a battery, and the battery may be a built-in battery or a replaceable battery.
  • the controller 17 controls the overall operation of the mobile terminal 10.
  • the controller 17 may control an operation related to each component constituting the mobile terminal 10 or an operation related to an application program.
  • the controller 17 may provide or process information or functions appropriate to a user by processing signals, data, information, or the like input or output through the above components or driving an application program stored in the memory 15.
  • the controller 17 may control at least some of the above components or operate at least two or more in combination with each other.
  • At least some of the above components described with reference to FIG. 2 may operate in cooperation with each other in order to implement an operation, control, or control method of the mobile terminal according to various embodiments described below.
  • the operation, control, or control method of the mobile terminal may be implemented on the mobile terminal by driving at least one application program stored in the memory 15.
  • FIG. 3 is a block diagram illustrating a makeup server according to a first embodiment of the present invention.
  • the makeup server 100 may include a makeup DB manager 110, a makeup evaluator 120, a controller 130, and a wireless communication unit 140.
  • The makeup DB manager 110 may include a makeup subject acquirer 111, a makeup image acquirer 112, a score data generator 113, a makeup evaluation DB 114, and a makeup score learner 115.
  • The makeup subject acquirer 111 determines a makeup subject through data analysis or user input. Overall balance is important in makeup, and the makeup in vogue changes with the trends of the times. Therefore, the makeup subject acquirer 111 may obtain a makeup subject by analyzing data available online, or by receiving a makeup subject input from the user.
  • the makeup image acquisition unit 112 receives a plurality of makeup image data that is a basis for makeup evaluation.
  • the makeup image acquisition unit 112 may receive a plurality of makeup image data classified according to a makeup theme.
  • the makeup image acquisition unit 112 may acquire a no make-up image together with the makeup image for each subject.
  • the makeup image received through the makeup image acquisition unit 112 may be stored in the makeup evaluation database 114.
  • the score data generator 113 generates makeup score data including a makeup image and a makeup score corresponding thereto.
  • the makeup image of the makeup score data may be received through the makeup image acquisition unit 112.
  • the makeup score of the makeup score data may be formed based on the evaluation of the makeup professional.
  • the makeup score data may be generated based on input for makeup evaluation of the makeup expert.
  • the makeup score may be input by a makeup expert for each face image.
  • the makeup score may include a score calculated by the makeup server 100 itself by machine learning.
  • the score data generator 113 may tune the makeup score data to lower the error rate of the makeup evaluation system. In addition, the score data generator 113 may correct the reliability of the makeup score data in order to secure objectivity of the makeup evaluation system.
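One generic way to perform such a reliability correction (a sketch of a common statistical technique, not necessarily the method claimed in the patent) is a trimmed mean over the experts' scores for each image, so that a single outlier evaluation cannot skew the stored score:

```python
def corrected_score(expert_scores):
    """Trimmed mean: drop the single highest and lowest expert score
    to reduce the influence of outlier evaluations."""
    if len(expert_scores) <= 2:
        return sum(expert_scores) / len(expert_scores)
    trimmed = sorted(expert_scores)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical expert scores (0-100) for one makeup image
scores = [72, 75, 74, 95, 71]   # 95 looks like an outlier
print(corrected_score(scores))
```

The corrected value (here the mean of 72, 74, and 75) would then be stored in the makeup evaluation DB in place of the raw average.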
  • The makeup evaluation DB 114 stores the makeup score data generated through the score data generator 113; the stored makeup score data may be tuned or reliability-corrected.
  • the makeup evaluation DB 114 may store the score data calculated with respect to the new image together with the makeup score data generated through the score data generator 113.
  • the makeup score learning unit 115 may machine learn the makeup score calculation method using the makeup evaluation DB 114 storing the makeup score data.
  • the makeup score learning unit 115 machine learns a makeup score calculation method based on the makeup evaluation DB 114.
  • The makeup score learner 115 may machine-learn a method of calculating the makeup score so that scores are produced in a manner similar to an evaluation by a real makeup expert.
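Purely as an illustration of learning a score calculation from expert-scored data (the patent does not specify this particular algorithm), a nearest-neighbour scheme scores a new image by the expert scores of the most similar stored feature vectors. The features and scores below are invented:

```python
import math

def knn_score(query, scored_images, k=3):
    """Predict a makeup score as the mean expert score of the k
    stored feature vectors closest to the query (Euclidean distance)."""
    ranked = sorted(scored_images,
                    key=lambda item: math.dist(query, item[0]))
    nearest = ranked[:k]
    return sum(score for _, score in nearest) / len(nearest)

# Hypothetical (feature vector, expert score) pairs from the
# makeup evaluation DB; a real system would use many more.
db = [([0.9, 0.8], 90), ([0.85, 0.75], 85),
      ([0.2, 0.3], 40), ([0.1, 0.2], 35)]

print(knn_score([0.88, 0.79], db, k=2))  # close to the high-scoring images
```

A deep-learning-based learner, as the patent's classifications suggest, would replace this lookup with a trained network, but the goal is the same: reproduce expert scoring behaviour on unseen images.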
  • the makeup evaluator 120 may include a face image acquirer 121, a makeup analyzer 122, and a makeup score output unit 123.
  • the face image acquirer 121 receives a face image that is a target of makeup evaluation.
  • the face image acquisition unit 121 may receive a face image that is a target of makeup evaluation through the wireless communication unit 140.
  • the makeup analyzer 122 analyzes the makeup of the face image received by the face image acquirer 121.
  • the makeup analyzer 122 may analyze makeup of each face included in the face image.
  • the makeup analyzer 122 may analyze makeup by comparing a face image with image data included in makeup score data. That is, the makeup analyzer 122 may analyze makeup through statistical values of makeup score data. A detailed method will be described later.
  • the makeup score output unit 123 calculates a makeup score of the face image based on the makeup analysis result.
  • the makeup score output unit 123 may calculate a makeup comprehensive score and a score for each facial region.
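A simple way to combine per-region scores into a comprehensive score (an illustrative sketch only; the per-subject weights are invented) is a subject-dependent weighted average, which also reflects the earlier point that scores may differ according to the makeup subject:

```python
# Hypothetical per-subject weights: e.g. smoky makeup weights eyes more.
WEIGHTS = {
    "natural": {"eyebrow": 0.2, "eye": 0.2, "lip": 0.2, "skin": 0.4},
    "smoky":   {"eyebrow": 0.2, "eye": 0.5, "lip": 0.2, "skin": 0.1},
}

def comprehensive_score(region_scores, subject):
    """Weighted average of per-region scores; weights sum to 1.0."""
    weights = WEIGHTS[subject]
    return sum(region_scores[region] * w for region, w in weights.items())

scores = {"eyebrow": 80, "eye": 90, "lip": 70, "skin": 85}
print(comprehensive_score(scores, "natural"))
print(comprehensive_score(scores, "smoky"))
```

With these invented weights the same per-region scores yield different comprehensive scores per subject, since each subject emphasizes different facial regions.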
  • the controller 130 controls the overall operation of the makeup server 100.
  • the controller 130 may control operations of the makeup DB manager 110, the makeup evaluator 120, and the wireless communicator 140.
  • the wireless communication unit 140 may transmit and receive data with the outside.
  • the wireless communication unit 140 may receive image data from the mobile terminal 10 or the application server 200.
  • the wireless communication unit 140 may transmit the received image data to the makeup DB management unit 110 or to the makeup evaluation unit 120.
  • the embodiments described below may be implemented in a recording medium that may be read by a computer or a similar device using, for example, software, hardware, or a combination thereof.
  • FIG. 4 is a ladder diagram illustrating a method of operating a makeup evaluation system according to a first embodiment of the present invention.
  • the makeup subject acquirer 111 of the makeup server 100 may determine a subject of makeup (S101).
  • the makeup topic acquisition unit 111 may determine a topic of makeup through wireless communication.
  • the makeup theme acquiring unit 111 may acquire beauty-related data online. Beauty-related data may include makeup related search terms, uploaded makeup related content, sales volume of makeup products, and the like.
  • the makeup theme acquisition unit 111 may analyze the obtained beauty-related data for each makeup theme and determine the themes of makeup based on the amount of data. For example, the makeup theme acquisition unit 111 may obtain the three makeup styles having the largest amounts of data and determine those three styles as makeup subjects.
  • the makeup theme acquisition unit 111 thus has the effect of easily acquiring currently popular makeup topics.
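The ranking by data volume described above can be sketched as follows; the theme names and counts are hypothetical:

```python
from collections import Counter

# Hypothetical counts of beauty-related data gathered online per theme
# (search terms, uploaded content, product sales volume, ...).
beauty_data_counts = Counter({
    "natural": 5200,
    "lovely": 4400,
    "smoky": 3100,
    "retro": 900,
})

def top_makeup_themes(counts, n=3):
    """Return the n themes with the largest amount of associated data."""
    return [theme for theme, _ in counts.most_common(n)]

print(top_makeup_themes(beauty_data_counts))  # → ['natural', 'lovely', 'smoky']
```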
  • the makeup topic acquisition unit 111 may determine a makeup topic according to an input signal.
  • the user may input any makeup subject into the makeup server 100.
  • the makeup subject obtaining unit 111 may obtain a makeup subject corresponding to the input data and determine the makeup subject.
  • the makeup subject acquirer 111 may determine at least one makeup subject through the first or second embodiment.
  • the makeup subject acquirer 111 may determine A makeup, B makeup, and C makeup.
  • Makeup A may be a natural makeup, makeup B a beautiful makeup, makeup C a smoky makeup, but this is merely exemplary.
  • the embodiment of determining the makeup subject described above is an example, and the makeup subject obtaining unit 111 may determine the makeup subject through another method.
  • the makeup theme acquisition unit 111 may acquire the makeup criteria when acquiring the makeup theme.
  • the makeup criterion may mean at least one feature that distinguishes makeup by makeup theme.
  • the makeup criteria may be used as a guideline in the image data receiving step and the makeup analysis step.
  • the makeup image acquisition unit 112 may receive image data corresponding to the determined makeup subject (S103).
  • the makeup image acquirer 112 may receive image data corresponding to a makeup subject from the outside.
  • the makeup image acquirer 112 may receive image data corresponding to a makeup subject from an external storage medium, a mobile terminal, or an external server.
  • the makeup image acquisition unit 112 may receive a plurality of makeup image data for each makeup subject.
  • the makeup image acquisition unit 112 may receive image data corresponding to A makeup, image data corresponding to B makeup, and image data corresponding to C makeup.
  • the makeup image acquisition unit 112 may receive image data classified according to a group. That is, the makeup image acquisition unit 112 may receive image data for each makeup subject from each of the plurality of groups.
  • FIG. 5 is an exemplary view for explaining a method of receiving image data by the makeup server according to an exemplary embodiment of the present invention.
  • the image classification table 500 illustrated in FIG. 5 represents a distribution of image data received through the makeup image acquisition unit 112.
  • A1 to A5, B1 to B5, C1 to C5, and D1 to D5 mean a collection of received image data.
  • A1, A2, B1, B2, C1, C2, D1, and D2 are sets of image data corresponding to the first group, and are received to constitute the lower tier of the makeup evaluation DB.
  • A3, A4, B3, B4, C3, C4, D3, and D4 are sets of image data corresponding to the second group, and are received to constitute the middle tier of the makeup evaluation DB.
  • A5, B5, C5, and D5 are sets of image data corresponding to the third group, and are received to constitute the upper tier of the makeup evaluation DB.
  • the makeup image acquisition unit 112 may acquire an A makeup image, a B makeup image, a C makeup image, and a no makeup image of each of the people included in the first to third groups having different scores.
  • the A make-up image, the B make-up image, and the C make-up image are image data which are the basis of evaluation of each makeup subject.
  • the no makeup image may be image data used for the lowest point processing in the makeup analysis step.
  • the first group may represent the public, the second group may represent beauty-related people, and the third group may represent makeup professionals, but this is merely illustrative.
  • the makeup image acquirer 112 may receive makeup images other than A makeup, B makeup, and C makeup. That is, the makeup image acquisition unit 112 may acquire makeup images whose themes differ from the determined makeup themes. Such image data is used for zero-point processing of makeup that departs from the subject or whose balance is broken, regardless of how complete the makeup is.
  • the score data generator 113 may generate makeup score data based on the received image data (S105).
  • the score data generator 113 may generate score data including the received image data and scores corresponding to the respective image data. In other words, the score data generator 113 may convert the image data into score data.
  • FIG. 6 is an exemplary diagram for describing makeup score data according to a first embodiment of the present invention.
  • the makeup score data 600 shown in FIG. 6 includes image data and a score corresponding to each image data. Scores corresponding to the image data may be broken down by region. For example, as shown in FIG. 6, each image data may include a score for each of a base area, an eyebrow area, an eye area, a lip area, and a blusher and shading area. However, this division into areas is merely exemplary.
  • although the makeup score data 600 illustrated in FIG. 6 shows only two image data corresponding to A makeup and the scores corresponding thereto, the score data generator 113 generates score data for all of the image data received in step S103.
  • the score data generator 113 may generate score data according to the score input for each region corresponding to each image data. For example, the at least one makeup expert may input a score for each area of the image data to the makeup server 100. The score data generator 113 may generate makeup score data based on the score data input to the makeup server 100.
  • the makeup score data shown in FIG. 6 is represented by numerical scores such as 5, 3.5, and 4, but this is merely exemplary, and the makeup score data may instead be expressed in grades such as upper, middle, and lower.
  • the makeup score data 600 represents makeup features by makeup subject. Specifically, the makeup score data 600 represents different scores depending on the subject of makeup, even if the same makeup image. In addition, even when the same makeup is applied, the score varies depending on the face shape, eyes, and the like included in the image. Therefore, the makeup score data 600 includes a plurality of makeup image data, and includes a score for each region distinguished according to the makeup subject.
  • the score corresponding to the makeup image data not related to the makeup subject may be calculated as 0.
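Putting the pieces above together, one record of the makeup score data might be structured as in the following sketch; the area names follow the example of FIG. 6, while the helper function and values are hypothetical:

```python
# Evaluation areas named in the score data example of FIG. 6.
AREAS = ("base", "eyebrow", "eye", "lip", "blusher_shading")

def make_score_record(image_id, theme, area_scores):
    """Bundle an image reference, its makeup theme, and per-area scores."""
    if set(area_scores) != set(AREAS):
        raise ValueError("a score is required for every evaluation area")
    return {"image": image_id, "theme": theme, "scores": dict(area_scores)}

record = make_score_record(
    "img_0001.jpg", "A",
    {"base": 5, "eyebrow": 3.5, "eye": 4, "lip": 4, "blusher_shading": 3.5},
)

# Image data unrelated to the makeup theme is scored as 0 across the board.
off_topic = make_score_record("img_0002.jpg", "A", {area: 0 for area in AREAS})
```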
  • the score data may be generated based on the evaluation of the expert. This increases the reliability of the makeup evaluation.
  • makeup-related big data can be built.
  • the score data generator 113 may tune the makeup score data (S107).
  • the score data generator 113 may tune the makeup score data to improve the reliability of the generated makeup score data.
  • the score data generating unit 113 may tune the score data so that the same makeup score is calculated when a plurality of face images of the same makeup are photographed at different photographing angles or under different lighting. The makeup score data may be tuned as follows.
  • the makeup image acquisition unit 112 may re-receive second image data corresponding to the first image data.
  • the second image data is image data different from the first image data, and may include a newly created makeup image and a newly photographed no makeup image. That is, the second image data may refer to image data of the same makeup performed by the same person as in the first image data, but which may be recognized as different makeup because of the photographing angle or lighting.
  • the score data generator 113 may tune the makeup score data such that the score calculated by the first image data and the score calculated by the second image data are the same.
  • FIGS. 7 and 8 are exemplary diagrams for describing a method of tuning makeup score data according to the first embodiment of the present invention.
  • the first image data 71 and the second image data 72 are images of the same makeup. However, the first image data 71 and the second image data 72 are photographed at different photographing angles.
  • the score data generator 113 may tune the makeup score data such that the score calculated by the first image data 71 and the score calculated by the second image data 72 are the same.
  • the score data generator 113 may form the first image data 71 and the second image data 72 as a group and tune the same score to be calculated.
  • the score data generator 113 may adjust the score calculated by the first image data 71 and the score calculated by the second image data 72 to be the same through image data tuning.
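A minimal sketch of this tuning step, assuming the group score is taken as the mean of the individually calculated scores (the text does not fix the exact rule):

```python
# Images of the same makeup shot at different angles or under different
# lighting are grouped and forced to share one score; a no-makeup image
# is pinned to the lowest point. The mean rule below is an assumption.
LOWEST_POINT = 0.0

def tune_group(calculated_scores, no_makeup=False):
    if no_makeup:
        return LOWEST_POINT
    return sum(calculated_scores) / len(calculated_scores)

# First and second image data of the same makeup initially score differently:
tuned = tune_group([4.5, 3.5])
print(tuned)  # → 4.0, now shared by both images

print(tune_group([2.5], no_makeup=True))  # → 0.0, the lowest point
```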
  • the first image data 81 is makeup image data
  • the second image data 82 is no makeup image data of the same person.
  • the score data generator 113 may tune the makeup score data such that the score calculated for the second image data 82 is the lowest point.
  • the score data generator 113 may recognize makeup of the first image data 81 and the second image data 82, respectively. When the makeup is recognized like the first image data 81, the score data generator 113 may calculate the score of the recognized makeup based on the makeup score data 600. On the other hand, if the makeup is not recognized like the second image data 82, the score data generator 113 may tune the makeup score data to calculate the lowest point.
  • the score data generator 113 tunes the makeup score data to reflect the image data, the shooting angle, the lighting at the time of shooting, the identification of no makeup, and the like, thereby improving the quality of the makeup evaluation system.
  • the score data generator 113 may correct the reliability of the makeup score data (S109).
  • the controller 130 may receive new image data not included in the makeup score data.
  • the score data generator 113 may determine whether the score for the received new image data is calculated without error based on the makeup score data.
  • a specific method of correcting the reliability of makeup score data will be described.
  • the score data generator 113 may calculate a makeup score corresponding to the new image, and obtain an image similar to the new image from the makeup score data.
  • the score data generator 113 may determine whether the score corresponding to the new image is calculated within the preset error rate and the score of the similar image included in the makeup score data. The score data generator 113 may correct the score of the related image data included in the makeup score data based on the determination result.
  • the score data generator 113 may obtain a first score, which is a makeup score corresponding to the new image calculated based on the makeup score data.
  • the score data generator 113 may obtain a second score that is a makeup score corresponding to the new image based on an input for makeup evaluation of the makeup expert.
  • the score data generator 113 may compare the first score and the second score acquired with respect to the same image. The score data generator 113 may determine whether the first score and the second score differ from each other by more than a predetermined range as a result of the comparison.
  • when the first score and the second score differ from each other by more than the predetermined range, the score data generator 113 may receive feedback corresponding to the comparison result from the makeup expert.
  • the feedback may include the reason for the gap between the first score and the second score.
  • the feedback may include information for correcting the first score or the second score, image recognition information, or opinion information of the makeup expert.
  • the score data generator 113 may correct the score of the image data included in the makeup score data based on the received feedback.
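The comparison-and-feedback loop above can be sketched as follows; the tolerance value and record layout are assumptions for illustration:

```python
# The score calculated from the makeup score data (first score) is compared
# with a makeup expert's score for the same image (second score); when they
# diverge beyond a predetermined range, the record is corrected via feedback.
TOLERANCE = 1.0  # illustrative "predetermined range"

def needs_feedback(first_score, second_score, tolerance=TOLERANCE):
    return abs(first_score - second_score) > tolerance

def correct_with_feedback(record, corrected_score):
    """Apply expert feedback by overwriting the stored score."""
    record = dict(record)
    record["score"] = corrected_score
    return record

record = {"image": "new_img.jpg", "score": 7.5}   # first score (calculated)
expert_score = 4.0                                 # second score (expert input)
if needs_feedback(record["score"], expert_score):
    record = correct_with_feedback(record, expert_score)
print(record["score"])  # → 4.0
```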
  • the method for correcting the reliability of the makeup score may further include methods other than the first and second embodiments exemplified above.
  • the present invention has the effect of lowering the error rate of the makeup score data and thereby improving its reliability.
  • the controller 130 may store makeup score data (S111).
  • the controller 130 may store the generated makeup score data. It is also possible to store tuned or reliability corrected makeup score data.
  • the makeup evaluation database may store makeup score data for each face image. Accordingly, the makeup evaluation database may be formed such that each makeup face image is aligned with the scores of its corresponding regions.
  • the makeup evaluation database may store partial images aligned by face region and score. Accordingly, the makeup evaluation database may be formed by classifying face areas, subdividing scores by face area, and arranging the partial images against the scores of the divided face areas.
  • the makeup evaluation database described above is an example, and may be generated in a different form.
  • the makeup server 100 may evaluate a makeup image by generating a makeup evaluation database.
  • the mobile terminal 10 may transmit a face image to the application server 200 (S113), and the application server 200 may transmit a face image received from the mobile terminal 10 to the makeup server 100 ( S113).
  • the wireless communication unit 140 of the makeup server 100 may receive a face image from the application server 200.
  • the mobile terminal 10 may transmit a makeup evaluation request signal to the application server 200 through the makeup evaluation application.
  • the mobile terminal 10 may display a screen for transmitting a makeup evaluation request signal.
  • FIG. 10 is an exemplary view for explaining a screen for transmitting a makeup evaluation request signal according to a first embodiment of the present invention.
  • the display unit 14 of the mobile terminal 10 may display a makeup evaluation request screen as shown in FIG. 10.
  • the makeup evaluation request screen may include a makeup subject item 1001, a face image selection icon 1002, a face image window 1003, and an evaluation icon 1004.
  • the makeup subject item 1001 is an item for selecting a makeup subject. Makeup assessments may vary depending on makeup subjects. Accordingly, the controller 17 of the mobile terminal 10 may set a makeup subject through the makeup subject item 1001.
  • the face image selection icon 1002 is an item for selecting a face image to request makeup evaluation.
  • when a command for selecting the face image selection icon 1002 is received, the controller 17 may control the display unit 14 to display a photographing screen using the camera 13.
  • the camera 13 may photograph the makeup face.
  • the controller 17 may display the photographed face image in the face image window 1003.
  • the controller 17 of the mobile terminal 10 may display at least one still image stored in the memory 15 when a command for selecting the face image selection icon 1002 is received.
  • the controller 17 may identify an image including a face from the still image stored in the memory 15 and display a still image including at least one face.
  • the controller 17 may receive a command for selecting one face image from at least one still image displayed on the display 14.
  • the controller 17 may display the selected face image in the face image window 1003.
  • the face image window 1003 is a window showing a face image to request makeup evaluation in advance. Any face image captured by the camera 13 or stored in the memory 15 may be displayed in the face image window 1003. The user may check whether the face to request makeup evaluation is correct through the face image window 1003.
  • the evaluation icon 1004 is an icon for executing a makeup evaluation request.
  • the controller 17 may transmit a makeup evaluation request signal to the application server 200. That is, the controller 17 may control the wireless communication unit 11 to transmit the makeup evaluation request signal including the face image to the application server 200.
  • the application server 200 may transmit a makeup evaluation request signal to the makeup server 100. Accordingly, the wireless communication unit 140 of the makeup server 100 may receive a makeup request signal including a face image from the application server 200.
  • the mobile terminal 10 may transmit a makeup evaluation request signal directly to the makeup server 100.
  • the makeup evaluation request signal may further include a makeup subject, user information, and the like in addition to the face image.
  • the makeup analyzer 122 of the makeup server 100 may analyze makeup of the received face image (S115).
  • the face image acquirer 121 may receive a face image from the wireless communication unit 140.
  • the makeup analyzer 122 may analyze the makeup of the face image received by the face image acquirer 121.
  • FIG. 11 is an exemplary diagram for describing a method of analyzing a face image by a makeup server according to the first exemplary embodiment of the present invention.
  • the makeup analyzer 122 may detect each area of the face in order to analyze makeup. Specifically, according to an embodiment of the present invention, the makeup analyzer 122 may preprocess the received face image. The makeup analyzer 122 may divide the preprocessed face image into a plurality of regions, and detect the eyes, the nose, the mouth, and the like by comparing the divided regions with previously stored face region images.
  • the makeup analyzer 122 may recognize each area of the face by using a pre-trained model.
  • the pre-trained model takes advantage of some of the existing Convolutional Neural Network (CNN) models.
  • a pre-trained model can be used to train on face photographs so as to recognize the eyes, nose, and mouth. Using a pre-trained model alleviates the overfitting that can occur when only a small amount of data is available for analysis.
  • the makeup analyzer 122 may recognize the eye regions 1151 and 1152, the nose region 1153, and the mouth regions 1154 and 1155 of the face image 1100.
  • the makeup analyzer 122 may analyze the makeup evaluation area based on the recognized eye area, nose area, and mouth area. That is, the makeup analyzer 122 may analyze the base, eyebrow, eye, lip, blusher, and shading makeup of the face image 1100.
  • FIG. 12 is an exemplary diagram for describing a method of analyzing a makeup of a face image by a makeup analyzer according to a first embodiment of the present disclosure.
  • the makeup analyzer 122 may compare the face image received from the mobile terminal 10 with a plurality of images included in the makeup evaluation database 114.
  • the makeup analyzer 122 may extract at least one or more image data similar to the received face image from the makeup evaluation database 114.
  • the makeup analyzer 122 may extract image data similar to the face image received for each makeup area.
  • the makeup analyzer 122 extracts at least one image data including a base similar to the base of the face image, and extracts at least one image data including an eyebrow similar to the eyebrow of the face image.
  • the makeup analyzer 122 may generate a makeup analysis graph 1200 as shown in FIG. 12 by obtaining a score corresponding to the extracted image data.
  • the makeup analysis graph 1200 represents a score distribution of the extracted image data.
  • the makeup analyzer 122 may map the obtained score to the score area of the base area 1201 in relation to at least one or more image data extracted as having a similar base. Similarly, the makeup analyzer 122 may map a score obtained in relation to at least one or more image data extracted with similar eyebrows to a score area of the eyebrow area 1202. The makeup analyzer 122 may generate a makeup analysis graph 1200 by mapping scores to all of the eye area 1203, the lip area 1204, the blusher, and the shading area 1205.
  • the makeup analyzer 122 may generate the makeup analysis graph 1200 in this way and analyze the makeup of the face image. However, this is exemplary and the makeup analyzer 122 may analyze the makeup through another method.
  • the makeup score output unit 123 of the makeup server 100 may calculate the makeup score of the face image based on the makeup analysis result (S117).
  • the makeup score output unit 123 may calculate a makeup score of the face image based on the makeup analysis graph 1200.
  • the makeup score output unit 123 may obtain and calculate a score having the largest score mapped to each area in the makeup analysis graph 1200.
  • the makeup score output unit 123 may obtain 5 points for the base area 1201, 9 points for the eyebrow area 1202, 3 points for the eye area 1203, 4.5 points for the lip area 1204, and 8 points for the blusher and shading area 1205.
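The analysis graph and the score selection described above can be sketched together: per area, the score to which the most similar images were mapped is taken. The distributions below are hypothetical but chosen to reproduce the example scores:

```python
from collections import Counter

# Per-area distributions of scores of the similar images extracted from the
# makeup evaluation DB (counts are hypothetical).
analysis_graph = {
    "base":            Counter({5: 7, 4: 3}),
    "eyebrow":         Counter({9: 6, 8: 2}),
    "eye":             Counter({3: 5, 4: 4}),
    "lip":             Counter({4.5: 8, 5: 1}),
    "blusher_shading": Counter({8: 9, 7: 2}),
}

def area_scores(graph):
    """Pick, per area, the score to which the most images were mapped."""
    return {area: dist.most_common(1)[0][0] for area, dist in graph.items()}

print(area_scores(analysis_graph))
# → {'base': 5, 'eyebrow': 9, 'eye': 3, 'lip': 4.5, 'blusher_shading': 8}
```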
  • the makeup score learner 115 of the makeup server 100 may learn a face image and a makeup score corresponding thereto (S119).
  • the makeup score learner 115 may machine learn a face image and a makeup score corresponding thereto.
  • the makeup score learner 115 may learn a makeup score calculation method using deep learning technology.
  • Deep learning technology is a branch of machine learning technology that uses artificial neural networks built by stacking and connecting layers of artificial neurons between inputs and outputs.
  • the makeup score learner 115 may machine learn the makeup score calculation method using previously stored makeup score data and the calculated makeup score.
  • the makeup score learning unit 115 may learn a face image and the makeup score corresponding thereto using a convolutional neural network (CNN).
  • a convolutional neural network consists of one or several convolutional layers with conventional artificial neural network layers on top, and additionally makes use of shared weights and pooling layers. This structure allows the convolutional neural network to fully exploit input data having a two-dimensional structure.
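The two building blocks named above, convolution and pooling, can be illustrated in plain Python with no framework; the 4x4 input and 2x2 kernel are toy values:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in CNN practice)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + r][j + c] * kernel[r][c]
                 for r in range(kh) for c in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool2d(image, size=2):
    """Non-overlapping max pooling for spatial down-sampling."""
    return [[max(image[i + r][j + c] for r in range(size) for c in range(size))
             for j in range(0, len(image[0]) - size + 1, size)]
            for i in range(0, len(image) - size + 1, size)]

image = [[1, 2, 0, 1],
         [3, 1, 1, 0],
         [0, 2, 4, 1],
         [1, 0, 1, 2]]
edge_kernel = [[1, 0], [0, -1]]            # toy 2x2 kernel with shared weights

feature_map = conv2d(image, edge_kernel)   # 3x3 feature map
pooled = max_pool2d(feature_map, 2)        # pooled summary of the feature map
print(feature_map)  # → [[0, 1, 0], [1, -3, 0], [0, 1, 2]]
print(pooled)       # → [[1]]
```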
  • the makeup score learning unit 115 may add features of newly recognized face images to the existing makeup evaluation database 114 using the convolutional neural network, and machine learn the makeup score calculation method.
  • the makeup score learner 115 may add a score of a newly calculated face image to the makeup score data generated based on an input for makeup evaluation of an existing makeup expert, and may machine learn the makeup score calculation method.
  • as the makeup score learning unit 115 learns makeup scores, it becomes able to evaluate makeup similarly to an actual makeup expert's evaluation. This has the effect of providing a more reliable makeup evaluation service to the user.
  • the makeup score learning unit 115 may control to store the learned makeup score in a makeup evaluation database.
  • FIG. 13 is an exemplary diagram for describing a recreated makeup evaluation database according to the first embodiment of the present invention.
  • a makeup evaluation database regenerated using the makeup evaluation database 114 according to the above-described embodiment will be described.
  • the makeup evaluation database 114 may further store the newly calculated face image score data 1302 in data 1301 based on an existing makeup expert's evaluation.
  • the controller 130 may calculate a more objective makeup score by adding the newly calculated face image score data to the makeup evaluation database 114.
  • the wireless communication unit 140 of the makeup server 100 may transmit the calculated makeup score to the application server 200 (S120), and the application server 200 may transmit the received makeup score to the mobile terminal 10 ( S121).
  • the wireless communication unit 11 of the mobile terminal 10 may receive a makeup score from the application server 200. According to an embodiment, the mobile terminal 10 may directly receive the makeup score from the makeup server 100.
  • the display unit 14 of the mobile terminal 10 may display the received makeup score.
  • FIGS. 14A to 14B and 15 to 18 are diagrams for describing makeup scores according to various embodiments of the present disclosure.
  • FIGS. 14A to 14B are exemplary diagrams for describing a makeup score screen according to an exemplary embodiment.
  • the display unit 14 may display a makeup evaluation result screen as illustrated in FIGS. 14A to 14B.
  • the makeup score screen illustrated in FIG. 14A represents a makeup score evaluated based on a makeup theme selected by the user
  • the makeup score screen illustrated in FIG. 14B represents a makeup score according to a makeup theme evaluated as the highest score.
  • the makeup evaluation result screen may include a face image window 1401, a makeup subject window 1402, a comprehensive score window 1403, an area score window 1404, and a makeup reevaluation icon 1405.
  • the face image window 1401 includes a face image analyzed by the makeup server 100. Through this, the user may reconfirm whether the face image intended to be evaluated is properly evaluated.
  • the makeup subject window 1402 represents a makeup subject on which makeup is evaluated. Makeup assessment may vary depending on makeup subject, even for the same face image. Thus, it indicates whether the user correctly selected the makeup subject.
  • the comprehensive score window 1403 represents a score in which the makeup evaluation results of the face image are combined.
  • the comprehensive score window 1403 may represent an average value of scores for each facial region. Through this, the user can check his makeup result as one index.
  • the score window 1404 for each region indicates a result of evaluating makeup by dividing a face image by regions. Through this, the user can easily know which area the makeup should be complemented with.
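As described, the comprehensive score may be the average of the per-area scores, and the area score window points the user to the area needing the most work; a minimal sketch with illustrative scores:

```python
# Per-area scores as shown in the area score window (illustrative values).
area_scores = {"base": 5, "eyebrow": 9, "eye": 3, "lip": 4.5, "blusher_shading": 8}

comprehensive = sum(area_scores.values()) / len(area_scores)
weakest = min(area_scores, key=area_scores.get)  # area to complement first

print(round(comprehensive, 1))  # → 5.9
print(weakest)                  # → eye
```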
  • the makeup reevaluation icon 1405 is an icon for receiving makeup evaluation using a new face image.
  • the controller 17 may display a makeup evaluation request screen as shown in FIG. 10 in response to a command for selecting the makeup re-evaluation icon 1405.
  • the mobile terminal 10 may display the makeup evaluation result screen as described above and provide an evaluation service similar to the evaluation by the makeup expert.
  • the mobile terminal 10 may display a makeup evaluation result screen evaluated based on a selected makeup subject.
  • the makeup server 100 may calculate a makeup score of the face image according to the selected subject.
  • the mobile terminal 10 may display the makeup subject selected by the user in the makeup subject window 1402, and display the score according to the selected makeup subject in the comprehensive score window 1403 and the area score window 1404.
  • the mobile terminal 10 may display a makeup evaluation result screen on a makeup subject that is rated as the highest score.
  • the makeup server 100 may calculate a makeup score of at least one or more face images for each makeup subject.
  • the makeup server 100 may obtain a makeup theme representing the highest score among scores calculated for each makeup theme.
  • the makeup server 100 may transmit all makeup scores for each makeup theme to the mobile terminal 10 or transmit only a makeup theme corresponding to the highest score and the highest score to the mobile terminal 10.
  • the mobile terminal 10 may display the makeup subject evaluated as the highest score in the makeup subject window 1402, and display the corresponding score in the comprehensive score window 1403 and the area score window 1404.
  • the mobile terminal 10 may simultaneously display the score according to the makeup theme selected by the user and the score according to the makeup theme evaluated as the highest score.
  • FIG. 15 is a view for explaining the influence of the makeup subject on the makeup evaluation according to the first embodiment of the present invention.
  • the display unit 14 of the mobile terminal 10 may display a makeup evaluation result screen.
  • compared with FIGS. 14A to 14B, the makeup evaluation result screen of FIG. 15 shows the case where the same face image is targeted but a different makeup subject is selected. That is, compared to FIGS. 14A to 14B, the face image window 1501 includes the same face image, but the makeup subject window 1502 shows a different subject.
  • accordingly, the makeup evaluation results are different. That is, it can be seen that the comprehensive score window 1503 and the area score window 1504 show different scores compared to those of FIGS. 14A to 14B.
  • FIG. 16 is a view for explaining a score window for each region according to the first embodiment of the present invention.
  • An area score window 1604 shows a score for each area based on the makeup subject. Accordingly, the area-specific score window 1604 may represent different scores according to the makeup area. In particular, with respect to any one facial area, there may be an area that is zeroed if it is determined that the makeup is completely different from the makeup subject.
  • FIG. 17 illustrates balance evaluation of makeup according to the first embodiment of the present invention.
  • the comprehensive score 1703 represents 0 points, but each area of the score window 1704 for each area is not 0. This may indicate that makeup balance of the entire face is not correct. That is, the makeup evaluation system may not only evaluate makeup for each facial region but also evaluate overall makeup balance of the face.
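A sketch of such a balance check: the comprehensive score is forced to 0 when the per-area scores diverge too strongly, even though each area is non-zero. The spread criterion used here is purely an assumption for illustration:

```python
# Balance evaluation sketch: individual areas may score well, yet the
# comprehensive score becomes 0 if the overall balance is broken.
def comprehensive_score(area_scores, max_spread=5.0):
    scores = list(area_scores.values())
    balanced = max(scores) - min(scores) <= max_spread  # assumed criterion
    return sum(scores) / len(scores) if balanced else 0.0

# Each area is non-zero, but the balance is broken → comprehensive 0:
print(comprehensive_score({"base": 9, "eyebrow": 1, "eye": 8, "lip": 2}))  # → 0.0
print(comprehensive_score({"base": 5, "eyebrow": 4, "eye": 6, "lip": 5}))  # → 5.0
```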
  • FIG. 18 is a view for explaining a no-makeup evaluation result according to the first embodiment of the present invention.
  • both the comprehensive score window 1803 and the score window 1804 for each region represent the lowest points. This is a case where the face of the face image window 1801 is determined to be no makeup. In this way, by calculating the lowest point with a score corresponding to the face of no makeup, the reliability of the makeup evaluation system can be improved.
  • FIG. 19 is a block diagram illustrating a makeup server according to a second exemplary embodiment of the present invention.
  • the makeup server 100 may include a makeup DB manager 110, a makeup evaluator 120, a controller 130, and a wireless communication unit 140.
  • the makeup DB manager 110 may store various data related to the makeup evaluation.
  • the makeup DB manager 110 may store at least one evaluation algorithm applied to the face area to evaluate makeup.
  • the face area is a face area included in the image and may mean a face area detected by the area detector 124 to be described later.
  • the face area may include an area including the entire face and each area of the face constituting the face.
  • the face area may include at least one or more of an entire face area, an eyebrow area, an eye area, a nose area, a cheek area, a philtrum area, a chin area, and a lip area.
  • the evaluation algorithm may be an algorithm that uses RGB values of at least one pixel constituting the face area included in the image. That is, the image may be an image represented by an RGB value, and the evaluation algorithm may be an algorithm using RGB values of pixels constituting the face area.
  • the evaluation algorithm may be an algorithm that converts RGB values of at least one or more pixels constituting the face region included in the image into Lab color values and uses the converted Lab color values.
• the evaluation algorithm may be an algorithm for converting an image represented by RGB values into the Lab color space and evaluating makeup through Lab color values. Since Lab color values do not vary depending on the output medium, applying an evaluation algorithm that uses Lab color values enables consistent evaluation regardless of the output medium, thereby ensuring the reliability of the makeup evaluation.
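• The embodiments do not mandate a particular conversion formula; as an illustrative sketch, the standard sRGB-to-CIELAB conversion (assuming a D65 white point) that such an evaluation algorithm could rely on may look as follows:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB values to CIELAB (D65 white point)."""
    # Linearize the gamma-encoded sRGB channels.
    def lin(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear sRGB -> CIE XYZ (D65).
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # XYZ -> Lab.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Pure white maps to L ≈ 100 with a and b near 0, and pure black maps to L ≈ 0, as expected for a correct conversion.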
  • the makeup DB management unit 110 may store a score table (see FIG. 34) used for makeup evaluation.
  • the score table may include a plurality of first sample colors and a plurality of second sample colors.
  • the first sample color may be a sample color representing a skin color
  • the second sample color may be a sample color representing a lip color, a blusher color, or an eye shadow color.
• Each of the plurality of first sample colors may be paired with each of the plurality of second sample colors, and score data may be mapped to each pair of a first sample color and a second sample color. That is, the score table may be composed of score data mapped to pairs of any one of the plurality of first sample colors and any one of the plurality of second sample colors.
  • the score table can be used when evaluating the color harmony of the makeup.
  • the makeup analyzer 122 may detect the first color and the second color in the face region of the user, and evaluate the color harmony of the makeup based on the first color and the second color.
• the makeup analyzer 122 may search for the same color as the first color among the plurality of first sample colors, search for the same color as the second color among the plurality of second sample colors, and evaluate the color harmony by obtaining the score mapped to the searched pair of colors.
  • the score table may be a table generated based on an input for makeup evaluation of a makeup expert. That is, the score table may be a table in which makeup experts input scores for color combinations in advance.
• Since the makeup analysis unit 122 to be described later evaluates makeup using a table generated based on the makeup-evaluation input of a makeup expert, there is an advantage that a makeup evaluation result grounded in professional expertise may be provided to the user. Accordingly, the reliability of the makeup evaluation system can be increased.
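• As a hedged illustration of how such an expert score table might be represented and queried, the sample-color names (s1, l1, etc.) and the scores below are hypothetical:

```python
# Hypothetical score table: a dict mapping (first sample color,
# second sample color) pairs to expert-assigned harmony scores.
SKIN_LIP_SCORES = {
    ("s1", "l1"): 80, ("s1", "l2"): 55,
    ("s5", "l7"): 90, ("s5", "l2"): 40,
}

def harmony_score(skin_sample, lip_sample, table=SKIN_LIP_SCORES):
    """Look up the expert score mapped to a (skin, lip) sample-color pair."""
    return table.get((skin_sample, lip_sample), 0)  # 0 if pair is unrated
```

A lookup such as `harmony_score("s5", "l7")` then yields the pre-entered expert score for that color combination.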
  • the makeup evaluator 120 may include an area detector 124 and a makeup analyzer 122.
  • the area detector 124 may acquire a face image included in a picture or a video.
  • the area detector 124 may receive a picture or a video through the wireless communication unit 140, and detect a face image, which is a subject of makeup evaluation, from the picture or the video.
  • the area detector 124 may detect each area of the face in the face image. For example, the area detector 124 may detect at least one or more of an eyebrow area, an eye area, a nose area, a ball area, a mouth area, and a jaw area.
  • the area detector 124 may detect a face and each part of the face through a face recognition algorithm.
• the area detection unit 124 can recognize a face and each part of the face more accurately by applying deep learning technology during face recognition.
  • the makeup analyzer 122 analyzes the makeup of the face area and each face area acquired by the area detector 124.
  • the makeup analyzer 122 may analyze the makeup of the face area and each face area based on a score table and an evaluation algorithm stored in the makeup DB manager 110. A detailed method will be described later.
  • the makeup analyzer 122 calculates a makeup score of the face image based on the makeup analysis result. For example, the makeup analyzer 122 may calculate a makeup comprehensive score and a score for each facial region.
  • the controller 130 controls the overall operation of the makeup server 100.
  • the controller 130 may control operations of the makeup DB manager 110, the makeup evaluator 120, and the wireless communicator 140.
  • the wireless communication unit 140 may transmit and receive data with the outside.
  • the wireless communication unit 140 may receive image data from the mobile terminal 10 or the application server 200.
  • the wireless communication unit 140 may transfer the received image data to the makeup DB manager 110 or the makeup evaluator 120.
  • the embodiments described below may be implemented in a recording medium that may be read by a computer or a similar device using, for example, software, hardware, or a combination thereof.
  • FIG. 20 is a ladder diagram illustrating a method of operating a makeup evaluation system according to a second embodiment of the present invention.
• In FIG. 20, the mobile terminal 10 transmits and receives signals to and from the makeup server 100, and it is assumed that the application server 200 described in FIG. 1 is included in the makeup server 100.
  • the controller 17 of the mobile terminal 10 may acquire an image (S11).
  • the controller 17 may acquire an image through the wireless communication unit 11 or the camera 13.
  • the wireless communication unit 11 may receive an image from the outside.
  • the camera 13 may acquire an image by taking a picture or a video.
  • the user may transmit or input the makeup face image to the mobile terminal 10 to evaluate the makeup.
• the user may transmit an externally stored image to the mobile terminal 10 or photograph his or her face with the camera 13 of the mobile terminal 10.
• the controller 17 of the mobile terminal 10 may receive a makeup evaluation request command (S13).
  • the controller 17 may receive a makeup evaluation request command through the input unit 12.
  • the input unit 12 may receive a makeup evaluation request command after receiving a command for selecting at least one image.
  • the user may select an image to receive makeup evaluation and input a makeup evaluation request to the input unit 12.
  • the controller 17 may further receive a makeup subject selection command through the input unit 12 when receiving the makeup evaluation request command.
  • Makeup subjects may include natural, beautiful or smokey, and the like.
  • the makeup evaluation may differ depending on the makeup subject. Therefore, for more accurate makeup evaluation, the controller 17 may receive a command for selecting a makeup subject when receiving a makeup evaluation request command.
  • the controller 17 of the mobile terminal 10 may transmit a makeup evaluation request signal to the makeup server 100 (S15).
  • the controller 17 may control to transmit the makeup evaluation request signal to the makeup server 100 through the wireless communication unit 11.
  • the makeup evaluation request signal may include image data. That is, the controller 17 may transmit the makeup evaluation request signal including the image data corresponding to the image acquired in step S11 to the makeup server 100.
  • the makeup evaluation request signal may be a signal for requesting makeup evaluation corresponding to a face included in an image according to the image data.
  • the wireless communication unit 140 of the makeup server 100 may receive a makeup evaluation request signal.
  • the controller 130 of the makeup server 100 may analyze image data included in the makeup evaluation request signal.
  • the image data included in the makeup evaluation request signal may be data modulated for image transmission.
  • the controller 130 may restore the image data included in the makeup evaluation request signal to the image.
  • the controller 130 of the makeup server 100 may detect a predetermined area in the image received through the makeup evaluation request signal (S17).
  • the controller 130 may preset at least one or more evaluation sections that are the subject of makeup evaluation.
  • the controller 130 may detect at least one or more regions corresponding to the evaluation section in the image.
• the controller 130 may set at least one of an eyebrow section, a dark circle section, a color harmony section, a lip section, and a blemish section as the target evaluation section of the makeup evaluation.
  • the evaluation sections listed above are merely examples for convenience of explanation and need not be limited thereto.
• the controller 130 may detect, in the received image, at least one of an eyebrow area for evaluating the eyebrow section, a dark circle area for evaluating the dark circle section, a color harmony area for evaluating the color harmony section, a lip area for evaluating the lip section, and a blemish area for evaluating the blemish section.
  • each area such as an eyebrow area, a dark circle area, etc. is not limited to the corresponding area, and may include at least one or more areas according to the evaluation algorithm for each area.
• the eyebrow area is not limited to the eyebrows themselves and may include an eyebrow area, a nose area, and the like. This is because evaluating the eyebrow section involves not only evaluating the shape, color, etc. of the eyebrows, but also considering the balance between the eyebrow area and the entire face.
  • the detection part corresponding to each area will be described in detail through an evaluation algorithm for each area which will be described later.
  • the controller 130 may apply the evaluation algorithm for each region to the detected at least one region (S19).
  • the controller 130 may apply different evaluation algorithms according to the detected areas. For example, the controller 130 may apply the first evaluation algorithm to the eyebrow area and the second evaluation algorithm to the dark circle area.
• That is, rather than applying the same evaluation algorithm uniformly to the plurality of detected areas, different evaluation algorithms are applied to the respective detected areas, so that the makeup can be evaluated more accurately.
  • FIGS. 21 to 25 are diagrams for describing an evaluation algorithm applied to the evaluation of an eyebrow section according to a second embodiment of the present invention
• FIG. 26 is an exemplary diagram for explaining a method of displaying an evaluation result of the eyebrow section according to the second embodiment of the present invention.
  • the makeup DB manager 110 of the makeup server 100 may store an evaluation algorithm for evaluating an eyebrow section.
  • the controller 130 may detect the eyebrow area through the area detector 124 in step S17, and apply an evaluation algorithm to the eyebrow area detected through the makeup analyzer 122.
  • the evaluation algorithm for evaluating the eyebrow section may include a plurality of algorithms, and each algorithm may evaluate the eyebrow section differently.
• the evaluation algorithm for evaluating the eyebrow section may include an algorithm for evaluating the eyebrow length, an algorithm for evaluating the horizontal degree, an algorithm for evaluating the front eyebrow length, and an algorithm for evaluating the uniformity of the eyebrow color.
• the makeup analyzer 122 may analyze and evaluate all of the eyebrow length, the horizontal degree, the front eyebrow length, and the uniformity of the eyebrow color.
• the area detector 124 may detect the first point 501, which is the outer end of one eye, and the second point 502, which is the outer end of the nose, in the image. At this time, the area detector 124 may detect the right end of the nose when detecting the outer end point of the right eye, and may detect the left end of the nose when detecting the outer end point of the left eye.
  • the makeup analyzer 122 may acquire a straight line 510 connecting the first point 501 and the second point 502.
  • the makeup analyzer 122 may detect the third point 503, which is an outer end of the eyebrow, and calculate a distance d1 between the third point 503 and the straight line 510.
• the makeup analyzer 122 may determine the appropriateness of the eyebrow length based on the calculated distance d1. For example, the makeup analyzer 122 may determine the eyebrow length as 'short' when the calculated distance d1 is less than or equal to the first distance, as 'appropriate' when the calculated distance d1 is longer than the first distance and less than or equal to the second distance, and as 'long' when the calculated distance d1 is longer than the second distance.
• However, such a determination method is merely exemplary and need not be limited thereto.
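• A minimal sketch of the point-to-line distance computation behind this eyebrow-length check; the first and second distance thresholds below are hypothetical:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the straight line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    # |cross(b - a, p - a)| / |b - a| gives the perpendicular distance.
    num = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    return num / math.hypot(bx - ax, by - ay)

def classify_eyebrow_length(d1, first=5.0, second=15.0):
    """Classify d1 against hypothetical first/second distance thresholds."""
    if d1 <= first:
        return "short"
    if d1 <= second:
        return "appropriate"
    return "long"
```

Here `p` would be the third point 503 (outer end of the eyebrow) and `a`, `b` the first point 501 and second point 502 forming the straight line 510.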
  • the area detector 124 may detect a first point 601, which is an inner end of the eyebrow, and a second point 602, which is an outer end of the same eyebrow, based on any one eyebrow in the image.
• the area detector 124 may acquire a straight line 610 connecting the first point 601 and the second point 602 and calculate an angle θ between the straight line and the horizontal line 620.
• the makeup analyzer 122 may determine the appropriateness of the horizontal degree of the eyebrows based on the calculated angle θ.
• For example, when the calculated angle θ is less than or equal to the first angle, the makeup analyzer 122 may determine the horizontal degree of the eyebrows as the 'straight type'; when the calculated angle θ is greater than the first angle and less than or equal to the second angle, it may determine the horizontal degree of the eyebrows as the 'normal type'; and when the calculated angle θ is greater than the second angle, it may determine the horizontal degree of the eyebrows as the 'arch type'.
  • such a determination method is merely exemplary and need not be limited thereto.
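• The angle computation and type classification described above can be sketched as follows; the first and second angle thresholds are hypothetical:

```python
import math

def eyebrow_angle_deg(inner, outer):
    """Angle between the inner-to-outer eyebrow line and the horizontal, in degrees."""
    dx, dy = outer[0] - inner[0], outer[1] - inner[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def classify_eyebrow_type(theta, first=5.0, second=15.0):
    """Map the angle to 'straight' / 'normal' / 'arch' (hypothetical thresholds)."""
    if theta <= first:
        return "straight"
    if theta <= second:
        return "normal"
    return "arch"
```

Here `inner` and `outer` would be the first point 601 and second point 602 of the eyebrow, and the returned angle corresponds to θ between the straight line 610 and the horizontal line 620.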
• the makeup analysis unit 122 may determine the eyebrow type according to the horizontal degree of the eyebrows, such as 'straight', 'normal', or 'arch', and calculate the score of the eyebrow shape according to the determined type.
• Eyebrow shape data according to each eyebrow type may be stored in advance, and the makeup analyzer 122 may compare the eyebrow shape data corresponding to the determined eyebrow type with the eyebrow shape data obtained from the image. As a result of the comparison, the smaller the difference between the stored data and the data acquired from the image, the higher the eyebrow score may be determined.
• the area detector 124 may detect the first point 701, which is the inner end of the eyebrow, and the second point 702, which is the outer end of the nose, based on any one eyebrow in the image. At this time, the area detector 124 may detect the right end of the nose when detecting the inner end of the right eyebrow, and may detect the left end of the nose when detecting the inner end of the left eyebrow.
  • the makeup analyzer 122 may acquire a straight line 710 passing through the second point 702 in the vertical direction, and obtain a distance d2 between the straight line 710 and the first point 701.
• the makeup analyzer 122 may determine the adequacy of the front eyebrow length based on the calculated distance d2. For example, the makeup analyzer 122 may determine the front eyebrow length as 'short' when the calculated distance d2 is less than or equal to the first distance, as 'appropriate' when the calculated distance d2 is longer than the first distance and less than or equal to the second distance, and as 'long' when the calculated distance d2 is longer than the second distance.
• However, such a determination method is merely exemplary and need not be limited thereto.
• Hereinafter, a method in which the controller 130 evaluates the uniformity of the eyebrow color of the eyebrow area will be described.
• the uniformity of the eyebrow color may be an item indicating whether the eyebrow color is evenly made up.
  • the controller 130 may perform an eyebrow determination operation to determine the uniformity of the eyebrow color.
• the area detector 124 may detect the first to fifth points 801 to 805 in the eyebrows. Specifically, the area detector 124 may detect the first point 801, which is the outer end of the eyebrow; the second point 802, which is the inner end of the eyebrow; the third point 803, which is the center between the first point 801 and the second point 802; the fourth point 804, which is the center between the first point 801 and the third point 803; and the fifth point 805, which is the center between the second point 802 and the third point 803.
  • the makeup analyzer 122 may acquire a curve 810 connecting the first to fifth points 801 to 805.
  • the makeup analyzer 122 may acquire a vertical line 820 at each point of the curve along the curve 810, and extract a value of an image from the vertical line 820.
  • the value of the image may include an RGB value
  • the vertical line 820 may be a straight line having a predetermined length.
• the makeup analyzer 122 may acquire the maximum value of the RGB values extracted along the vertical line 820, and determine the points having the maximum value or an image value within a predetermined ratio of the maximum value as the eyebrow. For example, when the values of the extracted image are 40, 41, 120, 122, 126, 43, and 40, the makeup analyzer 122 acquires the maximum value 126 and determines the points of 120, 122, and 126, whose values are within 20% of 126, as the eyebrow.
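• The 'within a predetermined ratio of the maximum' rule can be sketched as follows (using the 20% ratio from the example above):

```python
def eyebrow_points(values, ratio=0.2):
    """Return indices of points whose image value lies within `ratio` of the maximum."""
    peak = max(values)
    return [i for i, v in enumerate(values) if v >= peak * (1 - ratio)]
```

For the sampled values 40, 41, 120, 122, 126, 43, 40, the cutoff is 126 × 0.8 = 100.8, so the points with values 120, 122, and 126 are judged to be eyebrow.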
  • the makeup analyzer 122 may perform the eyebrow determination operation described above on the gray channel, the red channel, and the blue channel of the image, respectively.
• the makeup analyzer 122 may determine the first area 901 as the eyebrow, as illustrated in FIG. 25 (b), by performing the eyebrow determination operation in the gray channel.
  • the makeup analyzer 122 may determine the second area 902 as the eyebrow as illustrated in FIG. 25C by performing the eyebrow determination operation in the red channel.
  • the makeup analyzer 122 may measure the similarity between the first region 901 and the second region 902.
• the makeup analyzer 122 may measure the degree of similarity through the area of the overlapping region between the first area 901 and the second area 902. That is, the makeup analyzer 122 may calculate a higher similarity as the overlapping area of the first region 901 and the second region 902 becomes wider, and a lower similarity as the overlapping area becomes narrower.
• the makeup analyzer 122 may determine the eyebrow uniformity based on the calculated similarity. For example, the makeup analyzer 122 may determine the eyebrow uniformity as 'non-uniform' when the calculated similarity is less than or equal to the first reference value, and as 'uniform' when the calculated similarity exceeds the second reference value. However, such a determination method is merely exemplary and need not be limited thereto.
• the makeup analyzer 122 may determine the third region 903 as the eyebrow, as shown in FIG. 25 (d), by performing the eyebrow determination operation on the blue channel image.
  • the makeup analyzer 122 may determine the eyebrow uniformity as described above by calculating the similarity between the first region 901 and the third region 903.
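• The channel-overlap similarity can be sketched as below; normalizing the overlap by the union of the two regions is an assumption, since the embodiment only specifies that a wider overlapping area yields a higher similarity:

```python
def overlap_similarity(region_a, region_b):
    """Similarity of two eyebrow pixel-coordinate sets: overlap area / union area."""
    union = region_a | region_b
    if not union:
        return 0.0
    return len(region_a & region_b) / len(union)

def classify_uniformity(sim, threshold=0.5):
    """Hypothetical single-threshold uniformity decision."""
    return "uniform" if sim > threshold else "non-uniform"
```

Here `region_a` and `region_b` would be the sets of pixels judged as eyebrow in two different channels (e.g. the first region 901 and the second region 902).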
  • the wireless communication unit 140 may transmit the evaluation result signal to the mobile terminal 10 after evaluating the makeup, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
  • FIG. 26 may be an exemplary diagram illustrating a result of evaluating makeup for an eyebrow section.
  • the display unit 14 may display an evaluation result of the eyebrow length, the horizontal degree, the eyebrow front length, and the eyebrow uniformity.
  • the method of showing the evaluation result shown in FIG. 26 is merely exemplary.
  • FIGS. 27 to 28 are diagrams for explaining an evaluation algorithm applied to the evaluation of the dark circle section according to the second embodiment of the present invention
• FIG. 29 is an exemplary diagram for explaining a method of displaying an evaluation result of the dark circle section according to the second embodiment of the present invention.
  • the area detector 124 may detect a plurality of points in the eye area.
  • the area detector 124 may detect the first to fourth points 1101 to 1104 in the eye area, the first point 1101 is an outer end point of the eye, and the second point 1102. ) May be an inner end point of the eye, the third point 1103 may be an upper end point of the eye, and the fourth point 1104 may be a lower end point of the eye.
• the makeup analyzer 122 may calculate the horizontal distance l1 of the eye by measuring the straight-line distance connecting the first point 1101 and the second point 1102, and may calculate the vertical distance l2 of the eye by measuring the straight-line distance connecting the third point 1103 and the fourth point 1104.
• the makeup analyzer 122 may acquire the reference line 1110 based on the first to fourth points 1101 to 1104, the horizontal distance l1, and the vertical distance l2. For example, the makeup analyzer 122 may acquire a reference line 1110 having a length corresponding to a predetermined ratio of the horizontal distance l1 at a position spaced downward from the third point 1103 by the vertical distance l2.
  • the length of the reference line 1110 may be 80% of the horizontal distance l1, but this is merely exemplary.
• the makeup analyzer 122 may extract the maximum value of the RGB values of the left 1/3 region 1111 of the reference line 1110, extract the maximum value of the RGB values of the right 1/3 region 1112 of the reference line 1110, and extract the minimum value of the RGB values of the center 1/2 region 1113 of the reference line 1110.
• the makeup analyzer 122 may acquire the smaller of the two extracted maximum values and calculate the difference between the acquired value and the previously extracted minimum value.
• the makeup analyzer 122 may evaluate the darkness of the dark circle based on the calculated difference.
• the dark circle target region can be detected through the first to fourth points 1101 to 1104 and the distances of the eye, the surrounding skin color is obtained through the RGB values at both sides of the dark circle target region, and the color of the darkest part under the eye is obtained through the RGB values of the center region, so the darkness of the dark circle can be measured more precisely. In other words, rather than simply measuring the dark circle itself, it can be evaluated whether the makeup covers the dark circle so that it becomes similar to the surrounding skin color.
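• A minimal sketch of this darkness measure, assuming scalar image values sampled along the reference line and the left-1/3 / right-1/3 / center-1/2 split described above:

```python
def darkcircle_darkness(line_values):
    """Estimate dark-circle darkness along the under-eye reference line.

    Skin tone is taken as the smaller of the maxima of the two outer
    thirds; the darkest point is the minimum of the centre half.  The
    exact index arithmetic is a hypothetical reading of the described
    left-1/3 / right-1/3 / centre-1/2 regions.
    """
    n = len(line_values)
    left_max = max(line_values[: n // 3])      # left 1/3 region
    right_max = max(line_values[-(n // 3):])   # right 1/3 region
    center = line_values[n // 4 : n - n // 4]  # centre 1/2 region
    skin = min(left_max, right_max)            # smaller of the two maxima
    return skin - min(center)                  # darkness relative to skin
```

A larger return value means the darkest under-eye point is much darker than the surrounding skin, i.e. the dark circle is poorly covered.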
  • FIGS. 30 to 34 are diagrams for describing an evaluation algorithm applied when evaluating the color matching section according to the second embodiment of the present invention
• FIG. 35 is an exemplary diagram for explaining a method of displaying an evaluation result of the color matching section according to the second embodiment of the present invention.
  • the controller 130 may control to extract the skin color.
  • the area detector 124 may detect the nose area 3001 from the face included in the image.
  • the area detector 124 may detect a nasal region.
  • the makeup analyzer 122 may extract a plurality of RGB color values corresponding to the plurality of points included in the nose area 3001 and calculate an average value of the extracted RGB color values.
  • the makeup analyzer 122 may extract a color corresponding to the calculated RGB average value as the skin color.
  • the makeup analyzer 122 may distribute the extracted skin color in the Lab color space, detect a color closest to the extracted skin color, and determine the detected color as the skin color. As shown in FIG. 30 (b), the makeup DB manager 110 stores a plurality of representative skin colors, and the makeup analyzer 122 may acquire a color closest to the detected skin color from the stored representative colors. And the acquired color can be determined as the skin color. For example, in FIG. 30, the makeup analyzer 122 may determine s5 as the skin color.
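• Picking the closest representative color in the Lab color space can be sketched with a simple Euclidean (CIE76) distance; the representative names and Lab values below are hypothetical:

```python
import math

def nearest_representative(lab, representatives):
    """Return the name of the representative color closest to `lab`.

    `representatives` maps names (e.g. "s1".."s9") to (L, a, b) tuples;
    closeness is Euclidean distance in Lab space (the CIE76 delta-E).
    """
    return min(representatives, key=lambda name: math.dist(lab, representatives[name]))
```

The same helper could serve for skin, lip, and blusher colors alike, e.g. determining s5 as the skin color in the example of FIG. 30.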
  • the controller 130 may control to extract the lip color.
  • the area detector 124 may detect the lip area 3101 on the face included in the image.
  • the area detector 124 may detect the lower lip area.
  • the makeup analyzer 122 may extract a plurality of RGB color values corresponding to the plurality of points included in the lip area 3101 and calculate an average of the extracted RGB color values.
  • the makeup analyzer 122 may extract a color corresponding to the calculated RGB average value as the lip color.
  • the makeup analyzer 122 may detect the color closest to the extracted lip color by distributing the extracted lip color in the Lab color space, and determine the detected color as the lip color. As shown in FIG. 31B, the makeup DB manager 110 stores a plurality of representative lip colors, and the makeup analyzer 122 selects a color closest to the detected lip color among the plurality of representative lip colors. It may be obtained, and the obtained color may be determined as the lip color. For example, in FIG. 31, the makeup analyzer 122 may determine l7 as a lip color.
• the controller 130 may control to extract the blusher color.
• the area detector 124 may detect the cheek area 3201 from the face included in the image.
• the cheek area 3201 may include both a left cheek area and a right cheek area.
• the makeup analyzer 122 may perform an obstacle removal operation when extracting the blusher color.
• the obstacle removal operation may be an operation for minimizing cases where the cheek region is covered by hair or the like and the blusher color is incorrectly determined.
  • the makeup analyzer 122 may remove an area whose image value is smaller than a predetermined reference value after converting the image to a gray image when performing the obstacle removing operation.
• the predetermined reference value may be 0.35, but is not limited thereto.
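• The obstacle removal and averaging steps can be sketched as follows; the 0.35 threshold follows the example above, and the pixel-dictionary representation is an assumption:

```python
def remove_obstacles(gray_pixels, threshold=0.35):
    """Drop pixels darker than the reference value (hair, shadows, etc.).

    `gray_pixels` maps (x, y) coordinates to gray values in [0, 1].
    """
    return {p: v for p, v in gray_pixels.items() if v >= threshold}

def mean_color(rgb_pixels, kept):
    """Average the RGB values of the pixels that survived removal."""
    vals = [rgb_pixels[p] for p in kept]
    n = len(vals)
    return tuple(sum(c[i] for c in vals) / n for i in range(3))
```

The resulting RGB average would then be mapped to the nearest representative blusher color as described below.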
• the makeup analyzer 122 may extract a plurality of RGB color values corresponding to the remaining areas of the cheek area 3201 except the removed area, and calculate an average value of the extracted RGB color values.
• the makeup analyzer 122 may extract a color corresponding to the calculated RGB average value as the blusher color.
• the makeup analyzer 122 may detect the color closest to the extracted blusher color by distributing the extracted blusher color in the Lab color space, and determine the detected color as the blusher color. As shown in FIG. 32 (b), the makeup DB manager 110 stores a plurality of representative blusher colors, and the makeup analyzer 122 may acquire the color closest to the detected blusher color among the plurality of representative blusher colors and determine the acquired color as the blusher color. For example, in FIG. 32, the makeup analyzer 122 may determine b8 as the blusher color.
• the makeup analysis unit 122 may determine the color having a large a value in the Lab color space within the cheek region 3201 as the representative blusher color.
• the controller 130 may control to extract the eye shadow color.
  • the area detector 124 may detect an area 3301 above the eye from the face included in the image.
  • the upper eye area 3301 may include both an upper left eye area and an upper right eye area.
  • the makeup analyzer 122 may perform the obstacle removing operation as described above.
• the makeup analyzer 122 may extract a plurality of RGB color values corresponding to the remaining areas of the upper eye area 3301 except for the region removed through the obstacle removal operation, and calculate an average of the extracted RGB color values.
  • the makeup analyzer 122 may extract a color corresponding to the calculated RGB average value as an eye shadow color.
  • the makeup analyzer 122 may extract the left eye shadow color and the right eye shadow color, respectively.
  • the makeup analyzer 122 may determine the representative shadow color based on the extracted eye shadow color.
  • the makeup analyzer 122 may determine the representative shadow color in different ways according to the makeup theme.
• the makeup analyzer 122 may determine, as the representative shadow color, the color having the larger a value in the Lab color space among the extracted left eye shadow color and right eye shadow color.
• when the makeup subject is smokey, the makeup analyzer 122 may determine, as the representative shadow color, the color having the smaller L value in the Lab color space among the extracted left eye shadow color and right eye shadow color. This is because the recommended eye shadow color differs according to the makeup theme; by determining the representative shadow color in different ways according to the makeup theme, it can be evaluated whether the makeup is well suited to the theme.
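• The theme-dependent choice of representative shadow color can be sketched as below; treating every non-smokey theme with the larger-a rule is an assumption:

```python
def representative_shadow(left_lab, right_lab, theme):
    """Pick the representative eye-shadow color per makeup theme.

    Assumption: for 'smokey' the color with the smaller L value (darker)
    is representative; for other themes the color with the larger a value.
    Colors are (L, a, b) tuples.
    """
    if theme == "smokey":
        return min(left_lab, right_lab, key=lambda c: c[0])  # smaller L
    return max(left_lab, right_lab, key=lambda c: c[1])      # larger a
```

For a dark left shadow and a redder right shadow, a smokey theme selects the left color while a natural theme selects the right one.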
• the makeup DB manager 110 may store a score table in which score data is mapped to each pair of sample colors composed of any one of a plurality of first sample colors and any one of a plurality of second sample colors.
  • the makeup DB manager 110 may map a plurality of skin colors and a plurality of lip colors and store a skin-lip score table indicating a score corresponding thereto.
• the makeup analyzer 122 may search for the determined skin color and the determined lip color in the skin-lip score table, obtain the score mapped to the searched skin color and lip color, and determine the obtained score as the skin & lip harmony score.
• the makeup DB manager 110 may store a skin-blusher score table that maps a plurality of skin colors and a plurality of blusher colors and indicates the corresponding scores.
• the makeup analyzer 122 may search for the determined skin color and the determined blusher color in the skin-blusher score table, obtain the score mapped to the searched skin color and blusher color, and determine the obtained score as the skin & blusher harmony score.
  • the makeup analyzer 122 may calculate a difference between the representative shadow color and the skin color determined by the method described with reference to FIG. 33.
• the makeup analyzer 122 may calculate the difference between the a values and the difference between the L values of the representative shadow color and the skin color, and determine the score based on the calculated differences between the a values and the L values.
  • the makeup analyzer 122 may determine the score differently according to the makeup subject.
• the makeup analyzer 122 may analyze the color harmony with the skin differently for the eye shadow than for the lip color and the blusher. This is because the blusher and lip colors tend to be in a similar color series even when the makeup subjects differ, whereas the eye shadow may be completely different according to the makeup subject. Accordingly, when judging the color harmony section of the makeup, the color harmony can be evaluated more accurately.
• the makeup analysis unit 122 extracts at least one of the skin color, the lip color, the blusher color, and the shadow color, and when a color is not extracted, the corresponding score may be determined as zero.
  • the wireless communication unit 140 may transmit the evaluation result signal to the mobile terminal 10 after evaluating the color matching section, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
• FIG. 35 may be an exemplary diagram illustrating a makeup evaluation result for the color matching section.
  • the display unit 14 may display evaluation results for skin & lip harmony, skin & blusher harmony and skin & eye shadow harmony.
  • the method of showing the evaluation result shown in FIG. 30 is merely exemplary.
  • FIG. 36 is a diagram for describing an evaluation algorithm applied to the evaluation of the lips section according to the second embodiment of the present invention.
  • FIG. 37 is an exemplary diagram for describing a method of displaying the evaluation result of the lips section according to the second embodiment of the present invention.
  • the area detector 124 may detect the lip area 2001 from the face included in the image.
  • the detected lip region may be as shown in FIG. 36 (a).
  • the makeup evaluator 120 may evaluate lip uniformity and lip dryness in relation to the makeup of the lip region 2001.
  • the lip uniformity may indicate the uniformity of the lip color, and may be an item indicating whether makeup is evenly applied over the lips.
  • Lip dryness may be an item indicating whether makeup is well performed while the lips are moist.
  • Next, how the makeup evaluator 120 evaluates the uniformity of the lips will be described.
  • the makeup analyzer 122 may convert the detected lip region 2001 into the Lab color space, and acquire an image of the reflection area of the lip region by setting a threshold value on the L channel.
  • the makeup analyzer 122 may detect a region composed of pixels in which an L value is included in a preset range in an image in the L space, and determine the detected region as a reflection region.
  • FIG. 36 (b) may be an exemplary diagram illustrating the reflection area determined by the makeup analyzer 122. It shows that the reflective area 2002 is largest in step 1 and becomes smaller toward step 5.
  • the makeup analyzer 122 may calculate the size of the lip area in the image shown in FIG. 36 (a) and the size of the detected reflection area in the image shown in FIG. 36 (b).
  • the makeup analyzer 122 may calculate a ratio of the size of the reflective area to the size of the lip area.
  • the makeup analyzer 122 may evaluate the uniformity of the lips based on the calculated size ratio. That is, the makeup analyzer 122 may evaluate the lip uniformity as high for lips such as those shown in step 1, and as progressively lower toward step 5.
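The uniformity measure above reduces to the fraction of lip pixels whose L value falls inside a "specular reflection" band. A minimal numpy sketch follows; the band limits (85 to 100) and the toy mask layout are illustrative assumptions, not the patent's actual threshold.

```python
import numpy as np

# Fraction of lip pixels whose L value lies in the reflection band.
# A larger ratio corresponds to higher lip uniformity, per the text.
def lip_uniformity(l_channel, lip_mask, band=(85.0, 100.0)):
    lip_pixels = l_channel[lip_mask]
    reflective = (lip_pixels >= band[0]) & (lip_pixels <= band[1])
    return reflective.sum() / lip_pixels.size

# Toy lip image: mostly matte (L = 50) with a small glossy patch (L = 90).
l_channel = np.full((10, 10), 50.0)
l_channel[4:6, 4:8] = 90.0
lip_mask = np.zeros((10, 10), dtype=bool)
lip_mask[2:9, 1:9] = True  # assumed lip-region mask
print(round(lip_uniformity(l_channel, lip_mask), 3))  # 0.143
```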
  • the makeup evaluator 120 may evaluate a lip dryness.
  • the makeup analyzer 122 may detect the lip region and acquire a lip region image 2001 as illustrated in FIG. 36 (a).
  • the makeup analyzer 122 may acquire a filtered image by applying a high-pass filter to the acquired lip region image.
  • the makeup analyzer 122 may acquire a mask image representing the vertical wrinkles of the lip region by setting a threshold value on the R channel of the filtered image. That is, the makeup analyzer 122 may detect a region composed of pixels whose R values fall within a preset range in the lip region, and determine the detected region as the wrinkle region 2002.
  • the wrinkled area 2002 may be obtained in stages, similar to the example shown in FIG. 36 (b).
  • the makeup analyzer 122 may calculate the size of the lip area in the image shown in FIG. 36 (a) and the size of the wrinkle area 2002 in the image shown in FIG. 36 (b).
  • the makeup analyzer 122 may calculate a ratio of the size of the wrinkle area to the size of the lip area.
  • the makeup analyzer 122 may evaluate the dryness of the lips based on the calculated size ratio. That is, for lips such as those shown in step 1, the makeup analyzer 122 calculates a large ratio of the wrinkle area to the lip area and evaluates the lip dryness as high, while toward step 5 the ratio decreases and the dryness is evaluated as lower.
  • In the above, the reflective region 2002 and the wrinkle region 2002 have been described as if they were the same region. However, this is merely for convenience of description; in a single image the reflective region and the wrinkle region may be detected differently, so that the lip uniformity and the lip dryness may be evaluated differently.
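The dryness measure above can be sketched as follows: high-pass filter the lip region's R channel, threshold the response, and compute the wrinkle-to-lip size ratio. The 3x3 box-blur high-pass and the threshold value are assumptions, not the patent's exact filter.

```python
import numpy as np

# Wrinkle-to-lip size ratio from a high-pass-filtered R channel.
def wrinkle_ratio(r_channel, lip_mask, threshold=12.0):
    padded = np.pad(r_channel, 1, mode="edge")
    h, w = r_channel.shape
    # 3x3 box blur; subtracting it leaves the high-frequency detail.
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    high_pass = np.abs(r_channel - blurred)
    wrinkles = (high_pass > threshold) & lip_mask
    return wrinkles.sum() / lip_mask.sum()

mask = np.ones((8, 8), dtype=bool)
smooth = np.full((8, 8), 120.0)   # evenly made-up, moist-looking lips
striped = np.zeros((8, 8))
striped[:, ::2] = 255.0           # strong vertical wrinkle pattern
print(wrinkle_ratio(smooth, mask), wrinkle_ratio(striped, mask))  # 0.0 1.0
```

A large ratio would map to a high dryness evaluation, following the step-1-to-step-5 progression in the text.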
  • After the lip section is evaluated, the wireless communication unit 140 may transmit the evaluation result signal to the mobile terminal 10, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
  • FIG. 37 may be an exemplary diagram illustrating a result of makeup evaluation on a lip section.
  • the display unit 14 may display an evaluation result of the lip uniformity.
  • the display unit 14 may display a score corresponding to lip uniformity and a score corresponding to lip dryness, respectively.
  • the method of showing the evaluation result shown in FIG. 37 is merely exemplary.
  • FIG. 38 is a diagram for describing the evaluation algorithm applied when evaluating the blemish section according to the second embodiment of the present invention.
  • FIG. 39 is an exemplary diagram for describing a method of displaying the evaluation result of the blemish section according to the second embodiment of the present invention.
  • the image detector 121 may acquire an image and detect a face region included in the acquired image. For example, the image detector 121 may detect a face region as shown in FIG. 38 (a).
  • the image detector 121 may detect the cheek area 2201 in the detected face area, and the cheek area 2201 may include a left cheek area and a right cheek area.
  • the image detector 121 may detect the cheek area 2201 in the face area as shown in FIG. 38 (b).
  • the image detector 121 may detect the jaw region 2202 in the detected face region.
  • the makeup analyzer 122 may convert all pixels included in the jaw region 2202 into the Lab space and calculate the average L, a, and b values in the converted space, thereby obtaining the average Lab value of the skin.
  • the makeup analyzer 122 may calculate an RGB average value of the skin by converting an average Lab value into an RGB value.
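The two steps above — averaging L, a, b over the jaw region, then converting to RGB — can be sketched as follows. The averaging is shown directly; the Lab-to-RGB conversion would in practice use a color library (e.g. OpenCV's `cvtColor`) and is only noted in a comment. The mask layout is an assumption.

```python
import numpy as np

# Mean (L, a, b) over the jaw region, per the step above.
def average_skin_lab(lab_image, jaw_mask):
    # lab_image: H x W x 3 array of (L, a, b); jaw_mask: H x W boolean mask.
    return lab_image[jaw_mask].mean(axis=0)

lab_image = np.zeros((4, 4, 3))
lab_image[..., 0], lab_image[..., 1], lab_image[..., 2] = 62.0, 12.0, 18.0
jaw_mask = np.zeros((4, 4), dtype=bool)
jaw_mask[2:, :] = True  # assume the lower rows are the jaw region
mean_lab = average_skin_lab(lab_image, jaw_mask)
print(mean_lab)  # [62. 12. 18.]
# The RGB average would then be obtained by converting mean_lab to RGB,
# e.g. with a color-conversion library; that step is omitted here.
```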
  • the makeup analyzer 122 may set the blemish target area.
  • the makeup analyzer 122 may convert the image including the face region into the Lab color space, and acquire a gap image corresponding to the difference between the converted Lab values and the previously calculated average Lab value of the skin.
  • the makeup analyzer 122 may acquire a left-right gap image corresponding to the difference between each pixel of the gap image and the pixel positioned next to it (to the left or right), and an up-down gap image corresponding to the difference between each pixel and the pixel positioned above or below it.
  • the makeup analyzer 122 may acquire a color difference image corresponding to the sum of the squared pixel values of the left-right gap image and the squared pixel values of the up-down gap image.
  • the makeup analyzer 122 may acquire a target area including points at which pixel values are larger than a preset threshold value in the color difference image.
  • the makeup analyzer 122 may set the clustered region as the blemish target region 2203 by performing a morphological operation on the target region, as shown in FIG. 38 (d).
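The gap-image pipeline above can be sketched in a few lines of numpy. Here the gap image is taken as the per-pixel Lab distance from the mean skin color, the left-right and up-down gap images are neighbor differences of that gap image, and the color difference image is the sum of their squares; the threshold value and the use of a Euclidean Lab distance are assumptions, and the morphological clustering step is omitted.

```python
import numpy as np

# Threshold the color difference image built from neighbor differences of
# the gap image, yielding blemish-candidate points.
def blemish_target_mask(lab_image, skin_lab_mean, threshold=120.0):
    gap = np.linalg.norm(lab_image - np.asarray(skin_lab_mean), axis=-1)
    lr = np.zeros_like(gap)
    lr[:, 1:] = gap[:, 1:] - gap[:, :-1]   # left-right gap image
    ud = np.zeros_like(gap)
    ud[1:, :] = gap[1:, :] - gap[:-1, :]   # up-down gap image
    color_diff = lr ** 2 + ud ** 2
    return color_diff > threshold

skin = (65.0, 14.0, 18.0)
face = np.tile(np.asarray(skin), (6, 6, 1))
face[2, 2] += np.array([40.0, 0.0, 0.0])   # one strongly deviating point
print(blemish_target_mask(face, skin).any())  # True
```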
  • the makeup analyzer 122 may acquire heterogeneous points 2213, 2223, and 2233 whose pixel values differ from those of the blemish target area 2203 by more than a preset threshold. In particular, as shown in FIG. 38 (e), the makeup analyzer 122 may recognize the heterogeneous points 2213 and 2223 acquired inside the cheek area 2201 as blemishes.
  • the makeup analyzer 122 may detect the size of the cheek area 2201 and the sizes of the heterogeneous points 2213 and 2223 recognized as blemishes, and calculate the ratio of the sizes of the heterogeneous points 2213 and 2223 to the size of the cheek area 2201.
  • the makeup analyzer 122 may evaluate skin uniformity based on the calculated size ratio. For example, the makeup analyzer 122 may assign 5 points if the size ratio falls between a first reference value (e.g., 0%) and a second reference value (e.g., 5%), 3 points if it falls between the second reference value (e.g., 5%) and a third reference value (e.g., 10%), and 1 point if it falls between the third reference value (e.g., 10%) and a fourth reference value (e.g., 20%).
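The banded scoring above can be sketched directly. The band edges follow the example reference values in the text (0%, 5%, 10%, 20%); the 0-point fallback for ratios above 20% is an assumption, since the text does not state it.

```python
# Map the blemish-to-cheek size ratio to a skin uniformity score using the
# example reference values from the text.
def skin_uniformity_score(size_ratio):
    bands = [(0.05, 5), (0.10, 3), (0.20, 1)]
    for upper, score in bands:
        if size_ratio <= upper:
            return score
    return 0  # assumed floor for ratios above the last reference value

print(skin_uniformity_score(0.03),
      skin_uniformity_score(0.07),
      skin_uniformity_score(0.15))  # 5 3 1
```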
  • Skin uniformity indicates whether the skin tone of the face is evenly made up, and may be an index of whether defects such as blemishes, spots, or wrinkles are well covered by the makeup.
  • After the blemish section is evaluated, the wireless communication unit 140 may transmit an evaluation result signal to the mobile terminal 10, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
  • the display unit 14 may display an evaluation result for the blemishes.
  • the method of showing the evaluation result shown in FIG. 39 is merely exemplary.
  • the makeup server 100 may transmit an evaluation result signal of evaluating makeup to the mobile terminal 10 (S21).
  • the scores of the eyebrow section, dark circle section, color harmony section, lip section, and blemish section may be combined to derive the total makeup score.
  • the weight of each detailed evaluation section in the total score may be set differently for each makeup subject. For example, if the makeup subject is natural, the total score may be calculated by weighting the dark circle score at 60% and each of the remaining eyebrow, color harmony, lip, and blemish scores at 10%. In the case of a smoky subject, the total score may be calculated by weighting color harmony at 35%, lip at 40%, eyebrow at 15%, dark circle at 5%, and blemish at 5%.
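The theme-weighted total described above can be sketched as a weighted sum, using the two example weight sets from the text (natural and smoky); the section scores used in the example are illustrative.

```python
# Example weight sets from the text: each maps a section to its share of
# the total score, and each set sums to 1.0.
THEME_WEIGHTS = {
    "natural": {"eyebrow": 0.10, "dark_circle": 0.60, "color_harmony": 0.10,
                "lip": 0.10, "blemish": 0.10},
    "smoky": {"eyebrow": 0.15, "dark_circle": 0.05, "color_harmony": 0.35,
              "lip": 0.40, "blemish": 0.05},
}

def total_makeup_score(section_scores, theme):
    weights = THEME_WEIGHTS[theme]
    return sum(section_scores[name] * w for name, w in weights.items())

scores = {"eyebrow": 4, "dark_circle": 5, "color_harmony": 3,
          "lip": 4, "blemish": 5}
print(round(total_makeup_score(scores, "natural"), 2))  # 4.6
```

The same section scores yield a different total under the smoky weights, since the dark circle score dominates only for the natural subject.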
  • the display unit 14 of the mobile terminal 10 may display the makeup evaluation result based on the received evaluation result signal (S23).
  • the display unit 14 may display the makeup result as shown in FIG. 40.
  • the display unit 14 may display a score and a total score for each makeup evaluation section.
  • the display 14 may display the makeup evaluation result as illustrated in FIGS. 26, 29, 35, 37, and 39.
  • when a detailed evaluation section is selected, the display unit 14 may display detailed evaluation results for the selected section, as illustrated in FIG. 26, 29, 35, 37, or 39.
  • the mobile terminal 10 may directly perform makeup evaluation by acquiring an image, and in this case, may receive data related to makeup evaluation from the makeup server 100.
  • The makeup evaluation system has been described above in terms of the first embodiment (FIGS. 3 to 18) and the second embodiment (FIGS. 19 to 40), but this division is merely for convenience of description and the invention is not limited thereto. That is, the makeup evaluation system and operation method according to the first embodiment described with reference to FIGS. 3 to 18 may be combined with the makeup evaluation system and operation method according to the second embodiment described with reference to FIGS. 19 to 40.
  • the makeup analyzer 122 may output makeup scores by applying the makeup score data as shown in FIG. 6 and the algorithm described with reference to FIGS. 21 to 25.
  • the display unit 14 may display a combination of the makeup result screen according to the first embodiment and the makeup result screen according to the second embodiment.
  • the present invention described above can be embodied as computer readable codes on a medium in which a program is recorded.
  • the computer-readable medium includes all kinds of recording devices in which data that can be read by a computer system is stored. Examples of computer-readable media include hard disk drives (HDDs), solid state disks (SSDs), silicon disk drives (SDDs), ROMs, RAMs, CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
  • the computer may include a controller 180 of the mobile terminal. Accordingly, the above detailed description should not be interpreted as limiting in all aspects and should be considered as illustrative. The scope of the invention should be determined by reasonable interpretation of the appended claims, and all changes within the equivalent scope of the invention are included in the scope of the invention.


Abstract

The present invention relates to a makeup evaluation system that evaluates the makeup on a face included in an image, and an operation method thereof. The system may comprise: a mobile terminal which captures a facial image and transmits the captured facial image to a makeup server; and a makeup server which stores makeup score data and, upon receiving the facial image from the mobile terminal, detects at least one facial area in the facial image, calculates a makeup score for each detected facial area on the basis of the makeup score data, and transmits the calculated makeup score to the mobile terminal.

Description

Makeup evaluation system and operation method thereof
The present invention relates to a makeup evaluation system and an operation method thereof.
With the development of the beauty industry, users' interest in cosmetics and make-up is increasing, and their needs regarding cosmetics and makeup are becoming more diverse.
Meanwhile, because skin color, face shape, and facial features vary from person to person, the makeup that suits an individual may differ. A user may therefore have difficulty choosing makeup that suits them, and after applying makeup may wonder whether it was done well and which parts should be improved.
In line with this trend, applications that apply virtual makeup to a user's face have recently been developed. Such applications may arouse curiosity and interest, but the user must still judge for themselves which makeup suits them; that is, it is difficult to provide a service tailored to each individual. For example, currently available beauty services either do not involve makeup experts or, even when they do, are based on limited data. It is therefore difficult to provide a personalized service for each user.
In recent years, the application fields of machine learning (ML), and in particular deep learning (DL), have been expanding.
Machine learning is a technique that extracts features from large amounts of data so that, when new data is input, a computer can classify it by itself according to those features.
Deep learning is a subfield of machine learning. Based on artificial neural networks (ANNs), deep learning refers to techniques by which a computer discovers patterns in big data and classifies data much as a human distinguishes objects.
By applying deep learning to a makeup service, customized makeup is expected to be provided to users on the basis of more objective data.
The present invention aims to provide a makeup evaluation system that evaluates a user's makeup, and an operation method thereof.
The present invention aims to provide a makeup evaluation system that analyzes the makeup in a photograph and expresses the quality of the makeup as a numerical score, and an operation method thereof.
More specifically, the present invention aims to provide a makeup evaluation system that evaluates makeup using reliable score data based on the evaluations of makeup experts, and an operation method thereof.
The present invention aims to provide a makeup evaluation system that builds, through machine learning, a database for automatically evaluating makeup, and an operation method thereof.
The present invention aims to provide a makeup evaluation system that evaluates makeup for each part of the user's face, and an operation method thereof.
A makeup evaluation system according to an embodiment of the present invention may include a mobile terminal that captures a face image and transmits the captured face image to a makeup server, and a makeup server that stores makeup score data, detects at least one face region in the face image received from the mobile terminal, calculates a makeup score for each detected face region based on the makeup score data, and transmits the calculated makeup score to the mobile terminal. When the makeup server receives a makeup subject from the mobile terminal, it may calculate the makeup score according to the makeup subject, and the makeup score may be calculated differently depending on the shape of the detected face region and the makeup subject.
According to an embodiment of the present invention, a more reliable makeup evaluation service can be provided to the user; specifically, a makeup evaluation service comparable to the evaluation of an actual makeup expert can be provided.
According to an embodiment of the present invention, makeup can be evaluated by detecting each region of the face and applying an algorithm to it. More specifically, since face size and shape differ from user to user, the same makeup may suit one person but not another. The present invention therefore extracts each facial region and applies an algorithm to the extracted region, so that a makeup evaluation that considers the user's facial characteristics, and hence a precise evaluation, can be performed.
According to an embodiment of the present invention, each facial region is extracted and a different algorithm is applied to each extracted region, so that it is possible both to evaluate how well each region is made up and to evaluate an overall makeup score combining all facial regions.
According to an embodiment of the present invention, a face region can be recognized more accurately by using the RGB values of the region.
According to an embodiment of the present invention, makeup is evaluated using Lab values, which remain the same regardless of display characteristics, so that an objective makeup evaluation is possible regardless of the evaluation medium, such as the model of the mobile terminal.
According to an embodiment of the present invention, not only the color of the makeup but also finer aspects such as color harmony and color uniformity can be evaluated.
FIG. 1 is a block diagram showing the configuration of a makeup evaluation system according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating a mobile terminal according to an embodiment of the present invention.
FIG. 3 is a block diagram illustrating a makeup server according to the first embodiment of the present invention.
FIG. 4 is a ladder diagram illustrating an operation method of the makeup evaluation system according to the first embodiment of the present invention.
FIG. 5 is an exemplary diagram for describing how the makeup server according to the first embodiment of the present invention receives image data.
FIG. 6 is an exemplary diagram for describing makeup score data according to the first embodiment of the present invention.
FIGS. 7 and 8 are exemplary diagrams for describing a method of tuning makeup score data according to the first embodiment of the present invention.
FIG. 9 is a diagram for describing a makeup evaluation database generated according to the first embodiment of the present invention.
FIG. 10 is an exemplary diagram for describing a screen for transmitting a makeup evaluation request signal according to the first embodiment of the present invention.
FIG. 11 is an exemplary diagram for describing how the makeup server according to the first embodiment of the present invention analyzes a face image.
FIG. 12 is an exemplary diagram for describing how the makeup analyzer according to the first embodiment of the present invention analyzes the makeup of a face image.
FIG. 13 is an exemplary diagram for describing a regenerated makeup evaluation database according to the first embodiment of the present invention.
FIGS. 14A and 14B are exemplary diagrams for describing a makeup score screen according to the first embodiment of the present invention.
FIG. 15 is a diagram for describing the effect of the makeup subject on makeup evaluation according to the first embodiment of the present invention.
FIG. 16 is a diagram for describing a per-region score window according to the first embodiment of the present invention.
FIG. 17 is a diagram for describing the balance evaluation of makeup according to the first embodiment of the present invention.
FIG. 18 is a diagram for describing a no-makeup evaluation result according to the first embodiment of the present invention.
FIG. 19 is a block diagram illustrating a makeup server according to the second embodiment of the present invention.
FIG. 20 is a ladder diagram illustrating an operation method of the makeup evaluation system according to the second embodiment of the present invention.
FIGS. 21 to 25 are diagrams for describing the evaluation algorithm applied when evaluating the eyebrow section according to the second embodiment of the present invention.
FIG. 26 is an exemplary diagram for describing a method of displaying the evaluation result of the eyebrow section according to the second embodiment of the present invention.
FIGS. 27 and 28 are diagrams for describing the evaluation algorithm applied when evaluating the dark circle section according to the second embodiment of the present invention.
FIG. 29 is an exemplary diagram for describing a method of displaying the evaluation result of the dark circle section according to the second embodiment of the present invention.
FIGS. 30 to 34 are diagrams for describing the evaluation algorithm applied when evaluating the color harmony section according to the second embodiment of the present invention.
FIG. 35 is an exemplary diagram for describing a method of displaying the evaluation result of the color harmony section according to the second embodiment of the present invention.
FIG. 36 is a diagram for describing the evaluation algorithm applied when evaluating the lip section according to the second embodiment of the present invention.
FIG. 37 is an exemplary diagram for describing a method of displaying the evaluation result of the lip section according to the second embodiment of the present invention.
FIG. 38 is a diagram for describing the evaluation algorithm applied when evaluating the blemish section according to the second embodiment of the present invention.
FIG. 39 is an exemplary diagram for describing a method of displaying the evaluation result of the blemish section according to the second embodiment of the present invention.
FIG. 40 is an exemplary diagram for describing a method of displaying a makeup evaluation result according to the second embodiment of the present invention.
Hereinafter, the embodiments disclosed herein will be described in detail with reference to the accompanying drawings. The same or similar components are given the same reference numerals regardless of drawing number, and redundant descriptions are omitted. The suffixes "module" and "unit" for components used in the following description are given or interchanged only for ease of drafting the specification, and do not by themselves have distinct meanings or roles. In describing the embodiments disclosed herein, detailed descriptions of related known technologies are omitted when they could obscure the gist of the embodiments. The accompanying drawings are intended only to facilitate understanding of the embodiments disclosed herein; the technical idea disclosed herein is not limited by them, and should be understood to include all changes, equivalents, and substitutes falling within the spirit and scope of the present invention.
Terms including ordinal numbers such as "first" and "second" may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another.
When a component is said to be "connected" or "coupled" to another component, it may be directly connected or coupled to the other component, or intervening components may be present. In contrast, when a component is said to be "directly connected" or "directly coupled" to another component, it should be understood that no intervening components are present.
Singular expressions include plural expressions unless the context clearly indicates otherwise.
In this application, terms such as "comprise" or "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Next, a make-up evaluation system and an operation method thereof according to embodiments of the present invention are described with reference to FIGS. 1 to 40.
First, FIG. 1 is a block diagram showing the configuration of a makeup evaluation system according to an embodiment of the present invention.
Referring to FIG. 1, the makeup evaluation system according to an embodiment of the present invention may include a mobile terminal 10, an application server 200, and a makeup server 100.
The mobile terminal 10 may request a makeup evaluation. Specifically, the mobile terminal 10 may request evaluation of at least one makeup image and display the makeup evaluation result.
The application server 200 is a component used to run the application for makeup evaluation, and may store the information required to run that application.
The application server 200 may exchange signals and data with at least one of the mobile terminal 10 and the makeup server 100 as the makeup evaluation application runs.
The makeup server 100 may store the data required for makeup evaluation. For example, the makeup server 100 may store data for identifying each part of a face, an evaluation algorithm for scoring makeup, and the like.
The makeup server 100 may evaluate makeup based on the stored data, or transmit the information required for makeup evaluation to the mobile terminal 10 or the application server 200. The makeup server 100 may transmit an evaluation result signal containing the makeup evaluation result information to the mobile terminal 10.
The mobile terminal 10, the application server 200, and the makeup server 100 may exchange signals with one another.
The mobile terminal 10 may transmit a makeup evaluation request signal to the application server 200, and upon receiving the request signal, the application server 200 may forward the makeup image corresponding to the received request to the makeup server 100.
According to one embodiment, upon receiving the makeup evaluation request signal, the makeup server 100 may evaluate the makeup in the received image based on the stored data and transmit the evaluation result to the application server 200. The application server 200 may then transmit the evaluation result to the mobile terminal 10.
However, depending on the embodiment, the application server 200 and the makeup server 100 need not be separate, and may exchange signals with the mobile terminal 10 as a single server. For example, the application server 200 may be included in the makeup server 100. In this case, the mobile terminal 10 transmits the makeup evaluation request signal to the makeup server 100, and the makeup server 100 evaluates the makeup and transmits the evaluation result data to the mobile terminal 10.
According to another embodiment, upon receiving the makeup evaluation request signal, the makeup server 100 may transmit data related to the received request to the mobile terminal 10, and the mobile terminal 10 may evaluate the makeup based on the received data.
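The request/response exchange described above can be sketched in a few lines. The sketch below assumes the single-server variant in which the application server is merged into the makeup server; every class, method, and field name (MakeupServer, evaluate, total_score, and so on) is illustrative and does not appear in the specification.

```python
from dataclasses import dataclass, field


@dataclass
class EvaluationResult:
    total_score: float
    region_scores: dict = field(default_factory=dict)


class MakeupServer:
    """Single-server variant: evaluates the image and returns the result."""

    def evaluate(self, face_image: bytes) -> EvaluationResult:
        # Placeholder for the stored-data-based analysis (steps S101-S107).
        return EvaluationResult(total_score=4.2,
                                region_scores={"eye": 4.0, "lip": 4.5})


class MobileTerminal:
    def __init__(self, server: MakeupServer):
        self.server = server

    def request_evaluation(self, face_image: bytes) -> EvaluationResult:
        # The terminal sends an evaluation request and receives the result
        # signal, which it would then display to the user.
        return self.server.evaluate(face_image)


terminal = MobileTerminal(MakeupServer())
result = terminal.request_evaluation(b"<jpeg bytes>")
print(result.total_score)  # -> 4.2
```

In the three-party variant, MobileTerminal would instead address an application server that relays the image to MakeupServer and relays the result back.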
The mobile terminal 10 described herein may include a mobile phone, a smartphone, a computer, a notebook computer, a tablet PC, a wearable device, a digital TV, digital signage, a display device installed in a store selling beauty-related products such as cosmetics, a smart mirror installed in a home or a store, and the like.
도 2는 본 발명의 실시 예에 따른 이동 단말기를 설명하기 위한 블록도이다.2 is a block diagram illustrating a mobile terminal according to an exemplary embodiment of the present invention.
이동 단말기(10)는 무선 통신부(11), 입력부(12), 카메라(13), 디스플레이부(14), 메모리(15), 전원 공급부(16) 및 제어부(17)를 포함할 수 있다. 이와 같이 도 2에 도시된 구성요소들은 본 발명에 따른 이동 단말기의 이해를 돕기 위해 예시로 든 구성요소로서, 이동 단말기는 위에서 열거된 구성요소들 보다 더 많거나, 또는 더 적은 구성요소들을 가질 수 있다.The mobile terminal 10 may include a wireless communication unit 11, an input unit 12, a camera 13, a display unit 14, a memory 15, a power supply unit 16, and a control unit 17. As such, the components shown in FIG. 2 are exemplified to help understanding of the mobile terminal according to the present invention, and the mobile terminal may have more or fewer components than those listed above. have.
이하, 이동 단말기(10)의 각 구성요소들을 보다 구체적으로 설명한다.Hereinafter, each component of the mobile terminal 10 will be described in more detail.
The wireless communication unit 11 may include one or more modules that enable wireless communication between the mobile terminal 10 and another mobile terminal 10, or between the mobile terminal 10 and an external server. Specifically, the wireless communication unit 11 may include at least one of a broadcast receiving module, a mobile communication module, a wireless Internet module, a short-range communication module, and a location information module.
The wireless communication unit 11 may exchange signals with another mobile terminal 10 or an external server. For example, the wireless communication unit 11 may exchange signals with at least one of the application server 200 and the makeup server 100.
The input unit 12 may receive data or commands from a user. The input unit may receive an input signal through a mechanical key, a touch key, voice recognition, or the like. Data or commands received through the input unit 12 may be processed as control commands and passed to other components.
The camera 13 may receive a video signal input. The video signal includes still images such as photographs, as well as moving pictures. Accordingly, the camera 13 may receive a video signal input by capturing a photograph or a video. For example, the camera 13 may capture an image of the user's face.
The display unit 14 displays (outputs) information processed by the mobile terminal 10. For example, the display unit 14 may present content to the user on its screen, such as content received through the wireless communication unit 11 or entered through the input unit 12. The display unit 14 may also display screen information of an application program running on the mobile terminal 10.
Alternatively, the display unit 14 may display an image being captured, or already captured, by the camera 13. For example, the display unit 14 may display a face image captured by the camera 13.
The display unit 14 may also display the result of evaluating the makeup based on the captured face image.
Meanwhile, the display unit 14 may form a layered structure with, or be formed integrally with, a touch sensor to implement a touch screen. Such a touch screen may function as the input unit 12 while also providing an output interface between the mobile terminal 10 and the user.
The memory 15 stores data supporting various functions of the mobile terminal 10. The memory 15 may store a number of application programs (applications) to be run on the mobile terminal 10, as well as data and instructions for the operation of the mobile terminal 10. At least some of these applications may be downloaded from an external server via wireless communication. Alternatively, at least some of these applications may be present on the mobile terminal 10 from the time of shipment to support its basic functions (for example, receiving and placing calls, and receiving and sending messages).
Meanwhile, at least one of these application programs may be an application for makeup evaluation.
The power supply unit 16 receives external or internal power and supplies it to each component included in the mobile terminal 10. The power supply unit 16 includes a battery, which may be a built-in battery or a replaceable battery.
The controller 17 controls the overall operation of the mobile terminal 10. Specifically, the controller 17 may control the operation of each component constituting the mobile terminal 10, as well as operations related to application programs. The controller 17 may provide or process information or functions appropriate to the user by processing signals, data, and information input or output through the above components, or by running an application program stored in the memory 15. The controller 17 may control at least some of the above components, or operate two or more of them in combination.
At least some of the components described with reference to FIG. 2 may operate in cooperation with one another to implement the operation, control, or control method of the mobile terminal according to the various embodiments described below. The operation, control, or control method of the mobile terminal may be implemented on the mobile terminal by running at least one application program stored in the memory 15.
Next, FIG. 3 is a block diagram illustrating a makeup server according to a first embodiment of the present invention.
The makeup server 100 may include a makeup DB management unit 110, a makeup evaluation unit 120, a controller 130, and a wireless communication unit 140.
First, the makeup DB management unit 110 is described.
The makeup DB management unit 110 may include a makeup theme acquisition unit 111, a makeup image acquisition unit 112, a score data generation unit 113, a makeup evaluation DB 114, and a makeup score learning unit 115.
The makeup theme acquisition unit 111 determines a makeup theme through data analysis or user input. In makeup, overall balance is important, and which makeup is in fashion changes with the times. Accordingly, the makeup theme acquisition unit 111 may acquire a makeup theme by analyzing data available online. Alternatively, the makeup theme acquisition unit 111 may acquire a makeup theme by receiving a makeup theme input from a user.
The makeup image acquisition unit 112 receives a large number of makeup image data serving as the basis for makeup evaluation. Specifically, the makeup image acquisition unit 112 may receive a plurality of makeup image data classified by makeup theme. The makeup image acquisition unit 112 may also acquire no-makeup images together with the per-theme makeup images.
Makeup images received through the makeup image acquisition unit 112 may be stored in the makeup evaluation DB 114.
The score data generation unit 113 generates makeup score data comprising makeup images and their corresponding makeup scores. The makeup images of the score data may be received through the makeup image acquisition unit 112. The makeup scores may be formed based on evaluations by makeup professionals. Specifically, the makeup score data may be generated based on input entered by makeup professionals for makeup evaluation; for example, a makeup score may be entered by a makeup professional for each face image. The makeup scores may also include scores that the makeup server 100 calculates itself through machine learning.
The score data generation unit 113 may also tune the makeup score data to lower the error rate of the makeup evaluation system. In addition, the score data generation unit 113 may correct the reliability of the makeup score data to ensure the objectivity of the makeup evaluation system.
The makeup evaluation DB 114 stores the makeup score data generated by the score data generation unit 113. The makeup score data may be tuned or reliability-corrected.
The makeup evaluation DB 114 may also store, alongside the makeup score data generated by the score data generation unit 113, score data calculated for new images. Using the makeup evaluation DB 114 storing the makeup score data in this way, the makeup score learning unit 115 can machine-learn the makeup score calculation method.
The makeup score learning unit 115 machine-learns a makeup score calculation method based on the makeup evaluation DB 114. Specifically, the makeup score learning unit 115 may machine-learn how to calculate makeup scores in a manner similar to the way an actual makeup professional evaluates.
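As a toy illustration of the kind of supervised learning the makeup score learning unit performs, the sketch below fits a model mapping a single image feature to expert scores stored in the evaluation DB. The feature, the training values, and the closed-form linear model are all assumptions made here for illustration; an actual embodiment could use far richer features and models (for example, a neural network).

```python
def fit_linear(xs, ys):
    """Least-squares fit y ≈ a*x + b for a single feature (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx


# Training pairs from the evaluation DB: (image feature, expert score).
features = [0.1, 0.4, 0.5, 0.9]  # e.g. a colour-saturation measure (assumed)
scores = [1.0, 2.5, 3.0, 5.0]    # per-image expert scores for one region
a, b = fit_linear(features, scores)


def predict(x):
    """Score an unseen image from its feature, as the learner would."""
    return a * x + b


print(round(predict(0.7), 2))  # -> 4.0
```

The point of the sketch is only the shape of the problem: expert-labelled (image, score) pairs go in, and a function that scores new images in a similar way comes out.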
Next, the makeup evaluation unit 120 is described in detail.
The makeup evaluation unit 120 may include a face image acquisition unit 121, a makeup analysis unit 122, and a makeup score output unit 123.
The face image acquisition unit 121 receives the face image to be evaluated. Specifically, the face image acquisition unit 121 may receive the face image subject to makeup evaluation through the wireless communication unit 140.
The makeup analysis unit 122 analyzes the makeup in the face image received by the face image acquisition unit 121. The makeup analysis unit 122 may analyze the makeup of each facial region included in the face image. For example, the makeup analysis unit 122 may analyze the makeup by comparing the face image with the image data included in the makeup score data; that is, the makeup analysis unit 122 may analyze the makeup through statistical values of the makeup score data. The specific method is described later.
The makeup score output unit 123 calculates the makeup score of the face image based on the makeup analysis result. The makeup score output unit 123 may calculate an overall makeup score as well as a score for each facial region.
The controller 130 controls the overall operation of the makeup server 100. Specifically, the controller 130 may control the operations of the makeup DB management unit 110, the makeup evaluation unit 120, and the wireless communication unit 140.
The wireless communication unit 140 may exchange data with the outside. For example, the wireless communication unit 140 may receive image data from the mobile terminal 10 or the application server 200, and may pass the received image data to the makeup DB management unit 110 or the makeup evaluation unit 120.
Meanwhile, the embodiments described below may be implemented in a recording medium readable by a computer or a similar device using, for example, software, hardware, or a combination thereof.
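For instance, the overall score could be derived from the per-region scores as a weighted combination. The weights and region names below are invented for illustration; the specification does not fix a particular combination rule.

```python
# Hypothetical region weights (sum to 1); not prescribed by the specification.
REGION_WEIGHTS = {"base": 0.2, "eyebrow": 0.15, "eye": 0.3,
                  "lip": 0.2, "blush_shading": 0.15}


def overall_score(region_scores):
    """Weighted average of the per-region makeup scores."""
    return sum(REGION_WEIGHTS[region] * score
               for region, score in region_scores.items())


region_scores = {"base": 5.0, "eyebrow": 3.5, "eye": 4.0,
                 "lip": 4.5, "blush_shading": 4.0}
print(round(overall_score(region_scores), 3))  # -> 4.225
```

Both the overall figure and the region breakdown would then be packaged into the evaluation result signal sent back to the terminal.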
Next, an operation method of the makeup evaluation system according to the first embodiment of the present invention is described with reference to FIG. 4. FIG. 4 is a ladder diagram illustrating the operation method of the makeup evaluation system according to the first embodiment of the present invention.
First, the makeup theme acquisition unit 111 of the makeup server 100 may determine a makeup theme (S101).
According to one embodiment of the present invention, the makeup theme acquisition unit 111 may determine the makeup theme through wireless communication.
Specifically, the makeup theme acquisition unit 111 may acquire beauty-related data online. The beauty-related data may include makeup-related search terms, uploaded makeup-related content, sales volumes of makeup products, and the like. The makeup theme acquisition unit 111 may analyze the acquired beauty-related data by makeup theme and determine the makeup themes based on the amount of data. For example, the makeup theme acquisition unit 111 may take the three makeup styles with the largest amounts of data and determine those three styles as the makeup themes.
Through this method, the makeup theme acquisition unit 111 can easily acquire currently trending makeup themes.
According to another embodiment of the present invention, the makeup theme acquisition unit 111 may determine the makeup theme upon receiving an input signal.
Specifically, a user may enter an arbitrary makeup theme into the makeup server 100. The makeup theme acquisition unit 111 may acquire the makeup theme corresponding to the entered data and determine it as the makeup theme.
Through this method, makeup themes trending offline can be reflected in the makeup server 100.
The makeup theme acquisition unit 111 may determine at least one makeup theme through the first or second embodiment. For example, the makeup theme acquisition unit 111 may determine makeup A, makeup B, and makeup C. Makeup A may be a natural makeup, makeup B a lovely makeup, and makeup C a smoky makeup, but these are merely examples.
The embodiments described above for determining the makeup theme are exemplary, and the makeup theme acquisition unit 111 may determine the makeup theme through other methods.
Meanwhile, when acquiring a makeup theme, the makeup theme acquisition unit 111 may also acquire makeup criteria. A makeup criterion may mean at least one feature that distinguishes makeup by theme. The makeup criteria may later be used as guidelines in the image data reception step and the makeup analysis step.
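The volume-based selection described above amounts to counting how much beauty-related data falls under each candidate theme and keeping the most frequent ones. The theme names and counts below are invented for illustration.

```python
from collections import Counter

# Hypothetical per-theme volumes of online beauty data
# (search terms, uploaded content, product sales, ...).
theme_counts = Counter({"natural": 120, "lovely": 95, "smoky": 80,
                        "retro": 20, "gothic": 5})

# Keep the three themes with the largest amounts of data (step S101).
top_themes = [theme for theme, _ in theme_counts.most_common(3)]
print(top_themes)  # -> ['natural', 'lovely', 'smoky']
```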
Next, the makeup image acquisition unit 112 may receive image data corresponding to the determined makeup themes (S103).
The makeup image acquisition unit 112 may receive image data corresponding to the makeup themes from the outside, for example, from an external storage medium, a mobile terminal, or an external server.
The makeup image acquisition unit 112 may receive a plurality of makeup image data for each makeup theme. For example, the makeup image acquisition unit 112 may receive image data corresponding to makeup A, image data corresponding to makeup B, and image data corresponding to makeup C.
Meanwhile, the makeup image acquisition unit 112 may receive image data classified by group. That is, the makeup image acquisition unit 112 may receive per-theme image data from each of a plurality of groups.
Next, FIG. 5 is an exemplary diagram for explaining how the makeup server according to an embodiment of the present invention receives image data.
The image classification table 500 shown in FIG. 5 represents the distribution of image data received through the makeup image acquisition unit 112. A1 to A5, B1 to B5, C1 to C5, and D1 to D5 denote sets of received image data.
In particular, A1, A2, B1, B2, C1, C2, D1, and D2 are the sets of image data corresponding to the first group, received to constitute the lower tiers (low, mid-low) of the makeup evaluation DB.
A3, A4, B3, B4, C3, C4, D3, and D4 are the sets of image data corresponding to the second group, received to constitute the middle tiers (mid, mid-high) of the makeup evaluation DB.
A5, B5, C5, and D5 are the sets of image data corresponding to the third group, received to constitute the upper tier (high) of the makeup evaluation DB.
In this way, the makeup image acquisition unit 112 may acquire a makeup-A image, a makeup-B image, a makeup-C image, and a no-makeup image for each person in the first to third groups, which cover different score ranges. The makeup-A, makeup-B, and makeup-C images are the image data on which per-theme evaluation is based, while the no-makeup images may be the image data used for lowest-score handling in the makeup analysis step.
The first group may represent ordinary people, the second group people in the beauty industry, and the third group makeup professionals, but this is merely an example.
Classifying the received image data by makeup theme and by group in this way makes the makeup analysis more accurate. It also provides a guideline for accurately analyzing and evaluating makeup in consideration of control variables that may affect the analysis, such as photographing angle and lighting.
Meanwhile, the makeup image acquisition unit 112 may receive makeup images other than makeup A, makeup B, and makeup C; that is, it may acquire makeup images whose theme differs from the determined themes. These are image data used to assign a score of zero to makeup that departs from the theme or whose balance is broken, regardless of how complete the makeup is.
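One way to organise the received images, mirroring the classification of FIG. 5, is to key them by (theme, group), with extra buckets for the no-makeup and off-theme images. All identifiers and the helper function below are assumed for illustration.

```python
from collections import defaultdict

# (theme, group) -> list of image identifiers
image_db = defaultdict(list)


def add_image(theme, group, image_id):
    """File a received image under its theme and evaluator group."""
    image_db[(theme, group)].append(image_id)


add_image("A", 1, "img_001")          # group 1: ordinary people (lower tiers)
add_image("A", 2, "img_050")          # group 2: beauty industry (middle tiers)
add_image("A", 3, "img_101")          # group 3: professionals (upper tier)
add_image("no_makeup", 1, "img_002")  # baseline for lowest-score handling
add_image("off_theme", 1, "img_003")  # off-theme images, later scored as zero

print(sorted(theme for theme, _ in image_db))
# -> ['A', 'A', 'A', 'no_makeup', 'off_theme']
```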
FIG. 4 is described again.
The score data generation unit 113 may generate makeup score data based on the received image data (S105).
The score data generation unit 113 may generate score data comprising the received image data and a score corresponding to each image. That is, the score data generation unit 113 may convert the image data into score data.
Next, FIG. 6 is an exemplary diagram for explaining makeup score data according to the first embodiment of the present invention.
The makeup score data 600 shown in FIG. 6 includes image data and a score corresponding to each image. The score corresponding to an image may be broken down by region. For example, as shown in FIG. 6, each image may have a score for each of a base area, an eyebrow area, an eye area, a lip area, and a blush & shading area. These areas, however, are merely examples.
The makeup score data 600 shown in FIG. 6 depicts only two images corresponding to makeup A and their scores, but the score data generation unit 113 generates score data covering all image data received in step S103.
점수 데이터 생성부(113)는 각 이미지 데이터에 대응하는 영역별 점수 입력을 수신함에 따라 점수 데이터를 생성할 수 있다. 예를 들어, 적어도 하나 이상의 메이크업 전문가는 이미지 데이터의 영역 별 점수를 메이크업 서버(100)에 입력할 수 있다. 점수 데이터 생성부(113)는 메이크업 서버(100)에 입력되는 점수 데이터에 기초하여 메이크업 점수 데이터를 생성할 수 있다. 도 6에 도시된 메이크업 점수 데이터는 5, 3.5, 4 등의 수치로 표시되었으나 이는 예시적인 것에 불과하며, 메이크업 점수 데이터는 상, 중상, 중, 중하, 하, 등으로 구분될 수도 있다.The score data generator 113 may generate score data according to the score input for each region corresponding to each image data. For example, the at least one makeup expert may input a score for each area of the image data to the makeup server 100. The score data generator 113 may generate makeup score data based on the score data input to the makeup server 100. The make-up score data shown in FIG. 6 is represented by 5, 3.5, 4, etc., but this is merely exemplary, and the make-up score data may be divided into upper, middle, middle, middle, lower, and the like.
메이크업 점수 데이터(600)는 메이크업 주제 별 메이크업의 특징을 나타낸다. 구체적으로, 메이크업 점수 데이터(600)는 동일한 메이크업 이미지일지라도 메이크업의 주제에 따라 상이한 점수를 나타낸다. 또한, 동일한 메이크업이 입혀진 경우라도 이미지에 포함된 얼굴형, 눈매 등에 따라 점수가 달라진다. 따라서, 메이크업 점수 데이터(600)는 복수 개의 메이크업 이미지 데이터를 포함하고, 메이크업 주제에 따라 구별되는 영역별 점수를 포함한다.The makeup score data 600 represents makeup features by makeup subject. Specifically, the makeup score data 600 represents different scores depending on the subject of makeup, even if the same makeup image. In addition, even when the same makeup is applied, the score varies depending on the face shape, eyes, and the like included in the image. Therefore, the makeup score data 600 includes a plurality of makeup image data, and includes a score for each region distinguished according to the makeup subject.
메이크업 점수 데이터(600)를 이용하면 메이크업 주제와 관련이 없는 메이크업 이미지 데이터의 경우 이에 대응하는 점수는 0점으로 산출될 수 있다.If the makeup score data 600 is used, the score corresponding to the makeup image data not related to the makeup subject may be calculated as 0.
이와 같이, 메이크업 이미지를 수신하여 점수 데이터화하면 전문가의 평가를 바탕으로 하는 점수 데이터를 생성할 수 있다. 이에 따라, 메이크업 평가에 대한 신뢰성이 높아진다. 또한, 메이크업 관련 빅데이터가 구축될 수 있다.As such, when the makeup image is received and score data is generated, the score data may be generated based on the evaluation of the expert. This increases the reliability of the makeup evaluation. In addition, makeup-related big data can be built.
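The record structure implied above — one image, one makeup theme, and a score per face region, with images unrelated to the requested theme scored 0 — can be sketched as follows. This is a minimal illustration only; all names and values are hypothetical and not part of the disclosed system.

```python
# Hypothetical sketch of the per-image, per-region makeup score data.
REGIONS = ("base", "eyebrow", "eye", "lip", "blush_shading")

def make_record(image_id, theme, region_scores):
    """One entry of the score data: an image plus an expert score per region."""
    assert set(region_scores) == set(REGIONS)
    return {"image_id": image_id, "theme": theme, "scores": dict(region_scores)}

def theme_score(record, requested_theme):
    """Scores are theme-dependent; an image unrelated to the requested
    theme is scored 0 for every region."""
    if record["theme"] != requested_theme:
        return {r: 0 for r in REGIONS}
    return record["scores"]

rec = make_record("A-01", "natural",
                  {"base": 5, "eyebrow": 3.5, "eye": 4,
                   "lip": 4, "blush_shading": 3})
print(theme_score(rec, "natural")["base"])   # 5
print(theme_score(rec, "smoky")["base"])     # 0
```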
Returning to FIG. 4.
The score data generator 113 may tune the makeup score data (S107).
The score data generator 113 may tune the makeup score data to improve the reliability of the generated makeup score data. Specifically, the score data generator 113 may tune the score data so that the same makeup score is calculated even when a plurality of face images are photographed at different shooting angles or under different lighting. The makeup score data may be tuned as follows.
In relation to the first image data received in step S103, the makeup image acquirer 112 may additionally receive second image data corresponding to the first image data. The second image data is image data distinct from the first image data, and may include a newly produced makeup image and a newly photographed no-makeup image. That is, the second image data may refer to image data showing the same makeup applied by the same person as in the first image data, but which could be recognized as different makeup because of the shooting angle, lighting, or the like.
Accordingly, the score data generator 113 may tune the makeup score data such that the score calculated from the first image data and the score calculated from the second image data are the same.
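One plausible way to realize this tuning — not fixed by the disclosure — is to group image variants of the same makeup and replace each member's score with a group consensus, so that every variant yields an identical score. A minimal sketch, assuming the consensus is simply the mean:

```python
from statistics import mean

def tune_group(scores_by_image, group):
    """Force all images in `group` (same makeup, different angle/lighting)
    to yield one shared score: here, the mean of their current scores."""
    shared = mean(scores_by_image[img] for img in group)
    for img in group:
        scores_by_image[img] = shared
    return scores_by_image

scores = {"front.jpg": 4.0, "side.jpg": 3.0}   # same makeup, different angles
tune_group(scores, ["front.jpg", "side.jpg"])
print(scores["front.jpg"], scores["side.jpg"])  # 3.5 3.5
```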
Next, an example of the operation of tuning the makeup score data will be described with reference to FIGS. 7 and 8.
FIGS. 7 and 8 are exemplary diagrams for describing a method of tuning makeup score data according to the first embodiment of the present invention.
First, referring to FIG. 7, the first image data 71 and the second image data 72 are images of identical makeup. However, the first image data 71 and the second image data 72 were photographed at different shooting angles. The score data generator 113 may tune the makeup score data such that the score calculated from the first image data 71 and the score calculated from the second image data 72 are the same.
For example, the score data generator 113 may form the first image data 71 and the second image data 72 into one group and tune the data so that the same score is calculated for the group. Alternatively, the score data generator 113 may adjust, through image data tuning, the score calculated from the first image data 71 and the score calculated from the second image data 72 to be the same.
Next, referring to FIG. 8, the first image data 81 is makeup image data, and the second image data 82 is no-makeup image data of the same person. In this case, the score data generator 113 may tune the makeup score data such that the score calculated for the second image data 82 is the lowest score.
Specifically, the score data generator 113 may recognize the makeup in the first image data 81 and in the second image data 82, respectively. When makeup is recognized, as in the first image data 81, the score data generator 113 may calculate the score of the recognized makeup based on the makeup score data 600. On the other hand, when no makeup is recognized, as in the second image data 82, the score data generator 113 may tune the makeup score data so that the lowest score is calculated.
In this way, the score data generator 113 tunes the makeup score data so that the shooting angle of the image data, the lighting at the time of shooting, the identification of no-makeup faces, and the like are reflected, thereby improving the quality of the makeup evaluation system.
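The no-makeup case of FIG. 8 can be sketched as a scoring step that falls through to the minimum score whenever no makeup is recognized. This is a hypothetical illustration; the recognition step itself is stubbed out as a boolean, and the minimum score of 0.0 is an assumed placeholder.

```python
MIN_SCORE = 0.0  # assumed floor; the disclosure only says "lowest score"

def score_image(makeup_recognized: bool, lookup_score: float) -> float:
    """If makeup is recognized, return the score looked up from the makeup
    score data; otherwise (no-makeup image) return the lowest score."""
    return lookup_score if makeup_recognized else MIN_SCORE

print(score_image(True, 4.5))   # 4.5  (FIG. 8, image data 81)
print(score_image(False, 4.5))  # 0.0  (FIG. 8, image data 82)
```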
Returning to FIG. 4.
The score data generator 113 may correct the reliability of the makeup score data (S109).
The controller 130 may receive new image data not included in the makeup score data. The score data generator 113 may determine whether a score for the received new image data is calculated without error based on the makeup score data. A specific method of correcting the reliability of the makeup score data is described below.
According to an embodiment of the present invention, the score data generator 113 may calculate a makeup score corresponding to the new image, and retrieve images similar to the new image from the makeup score data.
The score data generator 113 may determine whether the score corresponding to the new image falls within a preset error rate of the scores of the similar images included in the makeup score data. Based on the determination result, the score data generator 113 may correct the scores of the related image data included in the makeup score data.
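The "preset error rate" comparison might look like the following sketch. The 10% threshold is an arbitrary placeholder, not a value taken from the disclosure.

```python
def within_error_rate(new_score: float, similar_score: float,
                      max_error_rate: float = 0.1) -> bool:
    """Check whether the score for a new image stays within a preset
    relative error of the score of a similar image already in the data."""
    if similar_score == 0:
        return new_score == 0
    return abs(new_score - similar_score) / similar_score <= max_error_rate

print(within_error_rate(4.2, 4.0))  # True  (5% deviation)
print(within_error_rate(5.0, 4.0))  # False (25% deviation)
```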
Alternatively, according to another embodiment of the present invention, the score data generator 113 may obtain a first score, which is a makeup score corresponding to the new image calculated based on the makeup score data. In addition, the score data generator 113 may obtain a second score, which is a makeup score corresponding to the new image based on a makeup expert's evaluation input.
The score data generator 113 may compare the first score and the second score obtained for the same image. As a result of the comparison, the score data generator 113 may determine whether the first score and the second score differ by more than a preset range.
When the first score and the second score differ by more than the preset range, the score data generator 113 may receive feedback corresponding to the comparison result from the makeup expert.
The feedback may include the reason for the gap between the first score and the second score. For example, the feedback may include information for correcting the first score or the second score, image recognition information, or the makeup expert's opinion.
The score data generator 113 may correct the scores of the image data included in the makeup score data based on the received feedback.
The method for correcting the reliability of the makeup score may further include methods other than the first and second embodiments exemplified above.
By correcting the makeup score in this way, the present invention can lower the error rate of the makeup score data and improve its reliability.
The controller 130 may store the makeup score data (S111).
The controller 130 may store the generated makeup score data. It may also store makeup score data that has been tuned or reliability-corrected.
First, a makeup evaluation database according to an embodiment of the present invention will be described.
As described with reference to FIG. 6, the makeup evaluation database according to an embodiment of the present invention may store makeup score data for each face image. Accordingly, the makeup evaluation database may be organized so that each makeup face image is aligned with its corresponding region-by-region scores.
Next, a makeup evaluation database according to another embodiment of the present invention will be described with reference to FIG. 9.
As shown in FIG. 9, the makeup evaluation database according to another embodiment of the present invention may store partial images sorted by face region and score. Accordingly, the makeup evaluation database may be organized by dividing the face into regions, subdividing the scores for each face region, and arranging the partial images under the scores of the subdivided face regions.
The makeup evaluation databases described above are exemplary, and the database may be generated in other forms.
Returning to FIG. 4.
As the makeup evaluation database is generated, the makeup server 100 can evaluate makeup images.
First, the mobile terminal 10 transmits a face image to the application server 200 (S113), and the application server 200 transmits the face image received from the mobile terminal 10 to the makeup server 100 (S113).
The wireless communication unit 140 of the makeup server 100 may receive the face image from the application server 200.
The mobile terminal 10 may transmit a makeup evaluation request signal to the application server 200 through the makeup evaluation application. The mobile terminal 10 may display a screen for transmitting the makeup evaluation request signal.
FIG. 10 is an exemplary diagram for describing a screen for transmitting a makeup evaluation request signal according to the first embodiment of the present invention.
The display unit 14 of the mobile terminal 10 may display a makeup evaluation request screen as shown in FIG. 10.
Referring to FIG. 10, the makeup evaluation request screen may include a makeup theme item 1001, a face image selection icon 1002, a face image window 1003, and an evaluation icon 1004.
The makeup theme item 1001 is an item for selecting a makeup theme. Makeup evaluation may vary depending on the makeup theme. Accordingly, the controller 17 of the mobile terminal 10 may set the makeup theme through the makeup theme item 1001.
The face image selection icon 1002 is an item for selecting the face image for which makeup evaluation is to be requested. According to an embodiment of the present invention, upon receiving a command selecting the face image selection icon 1002, the controller 17 of the mobile terminal 10 may control the display unit 14 to display a shooting screen using the camera 13. The camera 13 may photograph the made-up face. The controller 17 may display the photographed face image in the face image window 1003.
According to another embodiment of the present invention, upon receiving a command selecting the face image selection icon 1002, the controller 17 of the mobile terminal 10 may display at least one still image stored in the memory 15. Alternatively, the controller 17 may identify images containing a face among the still images stored in the memory 15, and display the still images containing at least one face. The controller 17 may receive a command selecting one face image from the at least one still image displayed on the display unit 14. The controller 17 may display the selected face image in the face image window 1003.
The face image window 1003 is a window that previews the face image for which makeup evaluation is to be requested. A face image photographed through the camera 13 or stored in the memory 15 may be displayed in the face image window 1003. Through the face image window 1003, the user can confirm that the correct face is about to be submitted for makeup evaluation.
The evaluation icon 1004 is an icon for executing the makeup evaluation request. Upon receiving a command selecting the evaluation icon 1004, the controller 17 may transmit a makeup evaluation request signal to the application server 200. That is, the controller 17 may control the wireless communication unit 11 to transmit a makeup evaluation request signal including the face image to the application server 200.
The application server 200 may transmit the makeup evaluation request signal to the makeup server 100. Accordingly, the wireless communication unit 140 of the makeup server 100 may receive the makeup request signal including the face image from the application server 200.
Depending on the embodiment, the mobile terminal 10 may also transmit the makeup evaluation request signal directly to the makeup server 100.
In addition to the face image, the makeup evaluation request signal may further include the makeup theme, user information, and the like.
Returning to FIG. 4.
The makeup analyzer 122 of the makeup server 100 may analyze the makeup of the received face image (S115).
The face image acquirer 121 may receive the face image from the wireless communication unit 140. The makeup analyzer 122 may analyze the makeup of the face image received by the face image acquirer 121.
Next, FIG. 11 is an exemplary diagram for describing a method by which the makeup server analyzes a face image according to the first embodiment of the present invention.
The makeup analyzer 122 may detect each region of the face in order to analyze the makeup. Specifically, according to an embodiment of the present invention, the makeup analyzer 122 may preprocess the received face image. The makeup analyzer 122 may divide the preprocessed face image into a plurality of regions, and detect the eyes, nose, mouth, and the like by comparing each divided region with previously stored face region images.
According to another embodiment of the present invention, the makeup analyzer 122 may recognize each region of the face using a pre-trained model. The pre-trained model reuses part of an existing convolutional neural network (CNN) model. The pre-trained model can be applied to face image training so that eyes, noses, mouths, and the like are recognized. Using a pre-trained model mitigates the overfitting problem that can arise when the amount of data available for analysis is small.
As shown in FIG. 11, the makeup analyzer 122 may recognize the eye regions 1151 and 1152, the nose region 1153, and the mouth regions 1154 and 1155 of the face image 1100. The makeup analyzer 122 may analyze the makeup evaluation regions based on the recognized eye, nose, and mouth regions. That is, the makeup analyzer 122 may analyze the base, eyebrow, eye, lip, blusher, and shading makeup of the face image 1100.
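The first, template-comparison variant described above can be sketched as a nearest-template classifier over face regions. This is a toy illustration only: short feature vectors stand in for the stored face region images and their pixel data, and all values are hypothetical.

```python
def classify_region(region_feature, templates):
    """Label a face region by the stored region template it is closest to
    (squared Euclidean distance over a simple feature vector)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: dist(region_feature, templates[label]))

# Hypothetical pre-stored templates for eye, nose, and mouth regions.
templates = {"eye": [0.9, 0.1], "nose": [0.5, 0.5], "mouth": [0.1, 0.9]}
print(classify_region([0.8, 0.2], templates))  # eye
```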
Next, FIG. 12 is an exemplary diagram for describing a method by which the makeup analyzer analyzes the makeup of a face image according to the first embodiment of the present invention.
The makeup analyzer 122 may compare the face image received from the mobile terminal 10 with the plurality of images included in the makeup evaluation database 114. The makeup analyzer 122 may extract at least one piece of image data similar to the received face image from the makeup evaluation database 114.
In particular, the makeup analyzer 122 may extract image data similar to the received face image for each makeup region. Specifically, the makeup analyzer 122 extracts at least one piece of image data containing a base similar to the base of the face image, and extracts at least one piece of image data containing an eyebrow similar to the eyebrow of the face image.
The makeup analyzer 122 may obtain the scores corresponding to the extracted image data and generate a makeup analysis graph 1200 as shown in FIG. 12. The makeup analysis graph 1200 represents the score distribution of the extracted image data.
Specifically, the makeup analyzer 122 may map the scores obtained for the at least one piece of image data extracted as having a similar base onto the score axis of the base region 1201. Likewise, the makeup analyzer 122 may map the scores obtained for the at least one piece of image data extracted as having a similar eyebrow onto the score axis of the eyebrow region 1202. The makeup analyzer 122 may generate the makeup analysis graph 1200 by mapping scores onto the eye region 1203, the lip region 1204, and the blusher and shading region 1205 as well.
The makeup analyzer 122 may analyze the makeup of the face image by generating the makeup analysis graph 1200 in this way. However, this is exemplary, and the makeup analyzer 122 may analyze the makeup through other methods.
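The analysis graph described above amounts to a per-region histogram of the scores of the retrieved similar images. A minimal sketch, assuming the similarity retrieval has already produced the similar-image scores (all values hypothetical):

```python
from collections import Counter

def build_analysis_graph(similar_scores_by_region):
    """For each face region, count how many retrieved similar images carry
    each score — i.e., the score distribution plotted in FIG. 12."""
    return {region: Counter(scores)
            for region, scores in similar_scores_by_region.items()}

graph = build_analysis_graph({
    "base": [5, 5, 4, 5],    # scores of images with a similar base
    "eyebrow": [9, 8, 9],    # scores of images with a similar eyebrow
})
print(graph["base"][5])      # 3
print(graph["eyebrow"][9])   # 2
```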
Returning to FIG. 4.
The makeup score output unit 123 of the makeup server 100 may calculate the makeup score of the face image based on the makeup analysis result (S117).
The makeup score output unit 123 may calculate the makeup score of the face image based on the makeup analysis graph 1200.
Specifically, the makeup score output unit 123 may determine, for each region of the makeup analysis graph 1200, the score onto which the most image data was mapped, and output it as the score for that region.
For example, referring to FIG. 12, the makeup score output unit 123 may calculate the score of the base region 1201 as 5 points, the score of the eyebrow region 1202 as 9 points, the score of the eye region 1203 as 3 points, the score of the lip region 1204 as 4.5 points, and the score of the blusher and shading region 1205 as 8 points.
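Picking, for each region, the score onto which the most images were mapped is simply the mode of that region's distribution. Continuing the hypothetical histogram sketch:

```python
from collections import Counter

def region_scores(graph):
    """Output, per region, the score that the largest number of similar
    images were mapped onto (the mode of the score distribution)."""
    return {region: counts.most_common(1)[0][0]
            for region, counts in graph.items()}

graph = {"base": Counter({5: 3, 4: 1}), "eyebrow": Counter({9: 2, 8: 1})}
print(region_scores(graph))  # {'base': 5, 'eyebrow': 9}
```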
The makeup score learner 115 of the makeup server 100 may learn the face image and the makeup score corresponding to it (S119).
The makeup score learner 115 may machine-learn face images and the makeup scores corresponding to them. In particular, the makeup score learner 115 may learn the makeup score calculation method using deep learning.
Deep learning is a branch of machine learning that uses artificial neural networks in which layers of artificial neurons between the input and the output are stacked and connected.
That is, the makeup score learner 115 may machine-learn the makeup score calculation method using the previously stored makeup score data and the calculated makeup scores.
In particular, the makeup score learner 115 may learn face images and their corresponding makeup scores using a convolutional neural network (CNN). A convolutional neural network consists of one or more convolutional layers with conventional artificial neural network layers on top of them, and additionally makes use of weights and pooling layers. Thanks to this structure, a convolutional neural network can fully exploit input data with a two-dimensional structure.
The makeup score learner 115 may machine-learn the makeup score calculation method by adding the features of newly recognized face images to the existing makeup evaluation database 114 using the convolutional neural network.
Accordingly, the makeup score learner 115 may machine-learn the makeup score calculation method by adding the newly calculated face image scores to the makeup score data generated from the evaluation inputs of makeup experts.
In this way, once the makeup score learner 115 has learned the makeup scores, a new makeup image can be evaluated similarly to an evaluation by an actual makeup expert. This makes it possible to provide users with a more reliable makeup evaluation service.
The makeup score learner 115 may control the learned makeup scores to be stored in the makeup evaluation database.
Next, FIG. 13 is an exemplary diagram for describing the regenerated makeup evaluation database according to the first embodiment of the present invention.
The makeup evaluation database regenerated from the makeup evaluation database 114 according to the embodiment described above will now be described.
As shown in FIG. 13, the makeup evaluation database 114 may additionally store newly calculated face image score data 1302 alongside the data 1301 based on the evaluations of makeup experts.
By adding the newly calculated face image score data to the makeup evaluation database 114, the controller 130 can calculate a more objective makeup score.
Returning to FIG. 4.
The wireless communication unit 140 of the makeup server 100 may transmit the calculated makeup score to the application server 200 (S120), and the application server 200 may transmit the received makeup score to the mobile terminal 10 (S121).
The wireless communication unit 11 of the mobile terminal 10 may receive the makeup score from the application server 200. Depending on the embodiment, the mobile terminal 10 may also receive the makeup score directly from the makeup server 100.
The display unit 14 of the mobile terminal 10 may display the received makeup score.
다음으로 도 14a 내지 도 14b, 도 15 내지 도 18은 본 발명의 다양한 실시 예에 따른 메이크업 점수를 설명하기 위한 도면이다.Next, FIGS. 14A to 14B and 15 to 18 are diagrams for describing makeup scores according to various embodiments of the present disclosure.
먼저, 도 14a 내지 도 14b는 본 발명의 일 실시 예에 따른 메이크업 점수 화면을 설명하기 위한 예시 도면이다.First, FIGS. 14A to 14B are exemplary diagrams for describing a makeup score screen according to an exemplary embodiment.
디스플레이부(14)는 도 14a 내지 도 14b에 도시된 바와 같이 메이크업 평가 결과 화면을 표시할 수 있다. 구체적으로, 도 14a에 도시된 메이크업 점수 화면은 사용자에 의해 선택된 메이크업 주제로 평가된 메이크업 점수를 나타내고, 도 14b에 도시된 메이크업 점수 화면은 최고 점수로 평가된 메이크업 주제에 따른 메이크업 점수를 나타낸다.The display unit 14 may display a makeup evaluation result screen as illustrated in FIGS. 14A to 14B. In detail, the makeup score screen illustrated in FIG. 14A represents a makeup score evaluated based on a makeup theme selected by the user, and the makeup score screen illustrated in FIG. 14B represents a makeup score according to a makeup theme evaluated as the highest score.
먼저, 메이크업 평가 결과 화면을 구성하는 각각의 구성 요소를 설명한다.First, each component constituting the makeup evaluation result screen will be described.
메이크업 평가 결과 화면은 얼굴 이미지 윈도우(1401), 메이크업 주제 윈도우(1402), 종합 점수 윈도우(1403), 영역별 점수 윈도우(1404) 및 메이크업 재평가 아이콘(1405)을 포함할 수 있다.The makeup evaluation result screen may include a face image window 1401, a makeup subject window 1402, a comprehensive score window 1403, an area score window 1404, and a makeup reevaluation icon 1405.
The face image window 1401 contains the face image analyzed by the makeup server 100. This allows the user to confirm that the face image he or she intended to have evaluated was actually the one evaluated.
The makeup theme window 1402 indicates the makeup theme on which the evaluation was based. Even for the same face image, the makeup evaluation may differ depending on the makeup theme; the window therefore shows whether the user selected the intended makeup theme.
The comprehensive score window 1403 shows a score that aggregates the makeup evaluation results for the face image. For example, the comprehensive score window 1403 may show the average of the per-region scores. This allows the user to check his or her makeup result as a single index.
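As a minimal sketch of the aggregation just described, the comprehensive score can be taken as the average of the per-region scores. The region names and score values below are invented for illustration; the patent does not fix the regions or the scale.

```python
# Hypothetical per-region scores; the patent leaves the regions and scale open.
region_scores = {"eyebrow": 80, "eye": 70, "lip": 90, "cheek": 60}

# Comprehensive score shown in window 1403 as the average of the
# per-region scores shown in window 1404.
comprehensive_score = sum(region_scores.values()) / len(region_scores)
print(comprehensive_score)  # 75.0
```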
The per-region score window 1404 shows the result of evaluating the makeup of the face image region by region. This allows the user to easily see which region's makeup needs improvement.
The makeup re-evaluation icon 1405 is an icon for requesting a makeup evaluation of a new face image. Upon receiving a command selecting the makeup re-evaluation icon 1405, the controller 17 may display a makeup evaluation request screen as illustrated in FIG. 10.
By displaying the makeup evaluation result screen in this way, the mobile terminal 10 can provide an evaluation service similar to an evaluation by a makeup expert.
According to an embodiment of the present invention, as illustrated in FIG. 14A, the mobile terminal 10 may display a makeup evaluation result screen evaluated against the makeup theme selected by the user.
Specifically, upon receiving the user's makeup theme selection, the makeup server 100 may calculate the makeup score of the face image according to the selected theme. In this case, the mobile terminal 10 displays the makeup theme selected by the user in the makeup theme window 1402, and displays the scores for the selected makeup theme in the comprehensive score window 1403 and the per-region score window 1404.
According to another embodiment of the present invention, as illustrated in FIG. 14B, the mobile terminal 10 may display a makeup evaluation result screen for the makeup theme that received the highest score.
Specifically, the makeup server 100 may calculate a makeup score for at least one face image for each makeup theme. The makeup server 100 may identify the makeup theme whose calculated score is the highest. The makeup server 100 may transmit the makeup scores for all makeup themes to the mobile terminal 10, or may transmit only the highest score and its corresponding makeup theme.
In this case, the mobile terminal 10 displays the highest-scoring makeup theme in the makeup theme window 1402, and displays the scores for that theme in the comprehensive score window 1403 and the per-region score window 1404.
According to yet another embodiment of the present invention, the mobile terminal 10 may simultaneously display the score for the makeup theme selected by the user and the score for the highest-scoring makeup theme.
Next, FIG. 15 is a diagram illustrating the influence of the makeup theme on the makeup evaluation according to the first embodiment of the present invention.
As described with reference to FIGS. 14A and 14B, the display unit 14 of the mobile terminal 10 may display a makeup evaluation result screen.
Compared with FIGS. 14A and 14B, the makeup evaluation result screen of FIG. 15 targets the same face image but a different makeup theme. That is, the face image window 1501 contains the same face image as in FIGS. 14A and 14B, but the makeup theme 1502 differs.
Accordingly, the makeup evaluation results differ: the comprehensive score window 1503 and the per-region score window 1404 show different scores from those of FIGS. 14A and 14B.
Through this, the user is expected to learn not only makeup skills but also the makeup appropriate to a given makeup theme.
Next, FIG. 16 is a diagram illustrating the per-region score window according to the first embodiment of the present invention.
The per-region score window 1604 shows the score of each region based on the makeup theme. Accordingly, the per-region score window 1604 may show different scores for different makeup regions. In particular, a facial region may be scored zero if its makeup is judged to be entirely inconsistent with the makeup theme.
This has the effect of guiding the user's makeup for each facial region.
Next, FIG. 17 is a diagram illustrating the balance evaluation of makeup according to the first embodiment of the present invention.
Referring to FIG. 17, the comprehensive score 1703 is zero, while none of the regions in the per-region score window 1704 is zero. This may indicate that the makeup balance of the face as a whole is off. That is, the makeup evaluation system can evaluate not only the makeup of each facial region but also the overall makeup balance of the face.
Next, FIG. 18 is a diagram illustrating a no-makeup evaluation result according to the first embodiment of the present invention.
Referring to FIG. 18, the comprehensive score window 1803 and the per-region score window 1804 both show the lowest score. This is the case where the face in the face image window 1801 is judged to be wearing no makeup. By assigning the lowest score to a face without makeup, the reliability of the makeup evaluation system can be improved.
Next, FIG. 19 is a block diagram illustrating a makeup server according to a second embodiment of the present invention.
The makeup server 100 may include a makeup DB management unit 110, a makeup evaluation unit 120, a controller 130, and a wireless communication unit 140.
First, the makeup DB management unit 110 is described.
The makeup DB management unit 110 may store various data related to makeup evaluation.
The makeup DB management unit 110 may store at least one evaluation algorithm applied to a facial region to evaluate makeup. Here, the facial region is a region of the face contained in the image, and may mean a facial region detected by the region detection unit 124 described later. The facial region may include both a region covering the entire face and regions for the individual parts of the face. For example, the facial region may include at least one of the entire face region, an eyebrow region, an eye region, a nose region, a cheek region, a forehead region, a chin region, and a lip region.
The evaluation algorithm may be an algorithm that uses the RGB values of at least one pixel constituting the facial region in the image. That is, the image may be an image represented by RGB values, and the evaluation algorithm may use the RGB values of the pixels constituting the facial region.
The evaluation algorithm may also be an algorithm that converts the RGB values of at least one pixel constituting the facial region into Lab color values and uses the converted Lab color values. That is, the evaluation algorithm may convert an image represented by RGB values into the Lab color space and evaluate the makeup using the Lab color values. Since Lab color values do not vary with the output medium, an evaluation algorithm based on Lab color values yields consistent evaluations regardless of the output medium, which helps secure the reliability of the makeup evaluation.
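The RGB-to-Lab conversion mentioned above can be sketched as follows. This is the standard sRGB (D65) to CIELAB conversion, not code from the patent; the matrix and white-point constants are the usual published values.

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB channel values to CIELAB (D65 white point)."""
    def linearize(c):
        # undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # linear sRGB -> CIE XYZ (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):
        # CIELAB companding function
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    # normalize by the D65 reference white
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

lab = srgb_to_lab(230, 180, 160)  # e.g. a skin-like tone
```

Because the L, a, and b components separate lightness from chromaticity, per-pixel comparisons in this space are insensitive to the output medium, as the paragraph above notes.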
The makeup DB management unit 110 may also store a score table (see FIG. 34) used for makeup evaluation. Specifically, the score table may include a plurality of first sample colors and a plurality of second sample colors. Here, a first sample color is a sample color representing a skin color, and a second sample color is a sample color representing a lip color, a blusher color, or an eye shadow color.
Each of the plurality of first sample colors may be mapped to each of the plurality of second sample colors, and score data may be mapped to each pair of a first sample color and a second sample color. That is, the score table may consist of score data, each entry mapped to one of the plurality of first sample colors and one of the plurality of second sample colors.
Such a score table may be used to evaluate the color harmony of makeup. For example, the makeup analysis unit 122 may detect a first color and a second color in the user's facial region, and evaluate the color harmony of the makeup based on the first color and the second color. Specifically, the makeup analysis unit 122 may search the plurality of first sample colors for the color matching the first color, search the plurality of second sample colors for the color matching the second color, and obtain the score mapped to the retrieved pair of colors to evaluate the color harmony.
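The table lookup described above can be sketched as follows. All sample colors and scores here are invented for illustration, and matching is done by nearest sample in RGB, a slight generalization of the exact-match search the text describes.

```python
# Hypothetical skin (first) and lip (second) sample colors as (R, G, B).
skin_samples = {(235, 210, 190): "fair", (200, 160, 130): "tan"}
lip_samples = {(240, 120, 110): "coral", (160, 30, 50): "deep-red"}

# Score data mapped to each (first sample, second sample) pair,
# as entered in advance (e.g. by makeup experts per FIG. 34).
score_table = {("fair", "coral"): 90, ("fair", "deep-red"): 70,
               ("tan", "coral"): 75, ("tan", "deep-red"): 85}

def nearest(color, samples):
    # pick the sample with the smallest squared Euclidean distance in RGB
    return samples[min(samples, key=lambda s: sum((a - b) ** 2 for a, b in zip(color, s)))]

def harmony_score(skin_rgb, lip_rgb):
    """Look up the color-harmony score for a detected skin/lip color pair."""
    return score_table[(nearest(skin_rgb, skin_samples), nearest(lip_rgb, lip_samples))]

print(harmony_score((233, 208, 188), (158, 32, 52)))  # 70
```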
According to an embodiment, the score table may be a table generated based on input from makeup experts for makeup evaluation. That is, the score table may be a table in which makeup experts have entered scores for color combinations in advance.
When the makeup analysis unit 122, described later, evaluates makeup using a table generated from makeup experts' evaluation input, a makeup evaluation result grounded in professional expertise can be provided to the user. Accordingly, the reliability of the makeup evaluation system can be increased.
Next, the makeup evaluation unit 120 is described in detail.
The makeup evaluation unit 120 may include a region detection unit 124 and a makeup analysis unit 122.
The region detection unit 124 may acquire a face image contained in a photograph or video. Specifically, the region detection unit 124 may receive a photograph or video through the wireless communication unit 140 and detect, from the photograph or video, the face image that is the subject of the makeup evaluation.
The region detection unit 124 may also detect the individual regions of the face in the face image. For example, the region detection unit 124 may detect at least one of an eyebrow region, an eye region, a nose region, a cheek region, a mouth region, and a chin region.
For example, the region detection unit 124 may detect the face and the individual parts of the face through a face recognition algorithm.
In addition, by applying deep learning techniques to face recognition, the region detection unit 124 can recognize the face and its parts more accurately.
The makeup analysis unit 122 analyzes the makeup of the facial region and the individual facial regions acquired by the region detection unit 124. For example, the makeup analysis unit 122 may analyze the makeup of the facial region and the individual facial regions based on the score table and the evaluation algorithms stored in the makeup DB management unit 110. The specific method is described later.
According to an embodiment, the makeup analysis unit 122 calculates a makeup score for the face image based on the makeup analysis result. For example, the makeup analysis unit 122 may calculate a comprehensive makeup score and a score for each facial region.
The controller 130 controls the overall operation of the makeup server 100. Specifically, the controller 130 may control the operations of the makeup DB management unit 110, the makeup evaluation unit 120, and the wireless communication unit 140.
The wireless communication unit 140 may transmit and receive data to and from the outside. For example, the wireless communication unit 140 may receive image data from the mobile terminal 10 or the application server 200. The wireless communication unit 140 may transfer the received image data to the makeup DB management unit 110 or the makeup evaluation unit 120.
Meanwhile, the embodiments described below may be implemented in a recording medium readable by a computer or a similar device using, for example, software, hardware, or a combination thereof.
Next, FIG. 20 is a ladder diagram illustrating a method of operating a makeup evaluation system according to the second embodiment of the present invention.
In FIG. 20, for convenience of description, it is assumed that the mobile terminal 10 transmits and receives signals to and from the makeup server 100, and that the application server 200 described in FIG. 1 is included in the makeup server 100.
The controller 17 of the mobile terminal 10 may acquire an image (S11).
The controller 17 may acquire the image through the wireless communication unit 11 or the camera 13. The wireless communication unit 11 may receive an image from the outside. The camera 13 may acquire an image by taking a photograph or video.
To have makeup evaluated, the user may transmit or input a made-up face image to the mobile terminal 10. For example, the user may transmit an externally stored image to the mobile terminal 10 or photograph the made-up face with the camera 13 of the mobile terminal 10.
The controller 17 of the mobile terminal 10 may receive a makeup evaluation request command (S13).
The controller 17 may receive the makeup evaluation request command through the input unit 12. The input unit 12 may receive the makeup evaluation request command after receiving a command selecting at least one image.
The user may select the image for which makeup evaluation is desired and then input a makeup evaluation request to the input unit 12.
Upon receiving the makeup evaluation request command, the controller 17 may further receive a makeup theme selection command through the input unit 12. The makeup theme may include natural, lovely, smoky, and the like. Since the makeup evaluation may differ depending on the makeup theme, the controller 17 may receive a command selecting a makeup theme together with the makeup evaluation request command for a more accurate makeup evaluation.
The controller 17 of the mobile terminal 10 may transmit a makeup evaluation request signal to the makeup server 100 (S15).
The controller 17 may control the wireless communication unit 11 to transmit the makeup evaluation request signal to the makeup server 100.
The makeup evaluation request signal may include image data. That is, the controller 17 may transmit, to the makeup server 100, a makeup evaluation request signal including the image data corresponding to the image acquired in step S11.
The makeup evaluation request signal may be a signal requesting a makeup evaluation of the face contained in the image corresponding to the image data.
The wireless communication unit 140 of the makeup server 100 may receive the makeup evaluation request signal.
The controller 130 of the makeup server 100 may analyze the image data included in the makeup evaluation request signal. Specifically, the image data included in the makeup evaluation request signal may be data modulated for image transmission. The controller 130 may restore the image data included in the makeup evaluation request signal to an image.
The controller 130 of the makeup server 100 may detect predetermined regions in the image received through the makeup evaluation request signal (S17).
The controller 130 may have preset at least one evaluation category subject to makeup evaluation. The controller 130 may detect, in the image, at least one region corresponding to each evaluation category.
For example, the controller 130 may set at least one of an eyebrow category, a dark circle category, a hue harmony category, a lip category, and a blemish category as the evaluation categories subject to makeup evaluation. However, the categories listed above are merely examples given for convenience of description, and the invention need not be limited thereto.
The controller 130 may detect, in the received image, at least one of an eyebrow region for evaluating the eyebrow category, a dark circle region for evaluating the dark circle category, a hue harmony region for evaluating the hue harmony category, a lip region for evaluating the lip category, and a blemish region for evaluating the blemish category.
Here, each region, such as the eyebrow region or the dark circle region, is not limited to the corresponding facial part, and may include at least one additional part depending on the per-region evaluation algorithm. For example, the eyebrow region is not limited to the eyebrow part and may also include the nose part and the like. This is because evaluating the eyebrow region means evaluating not only the shape and color of the eyebrows, but also their harmony with the face as a whole. The detection parts corresponding to each region are described in detail with the per-region evaluation algorithms below.
The controller 130 may evaluate the detected at least one region by applying a per-region evaluation algorithm to each (S19).
The controller 130 may apply different evaluation algorithms depending on the detected region. For example, the controller 130 may apply a first evaluation algorithm to the eyebrow region and a second evaluation algorithm to the dark circle region.
In this way, according to an embodiment of the present invention, rather than applying the same evaluation algorithm uniformly to all detected regions, a different evaluation algorithm is applied to each detected region, allowing the makeup to be evaluated more precisely.
Next, the at least one evaluation algorithm that the controller 130 applies to each of the regions detected in step S17 is described.
First, FIGS. 21 to 25 are diagrams illustrating the evaluation algorithms applied when evaluating the eyebrow category according to the second embodiment of the present invention, and FIG. 26 is an exemplary diagram illustrating a method of displaying the evaluation result of the eyebrow category according to the second embodiment of the present invention.
According to an embodiment of the present invention, the makeup DB management unit 110 of the makeup server 100 may store evaluation algorithms for evaluating the eyebrow category. In step S17, the controller 130 may detect the eyebrow region through the region detection unit 124, and apply the evaluation algorithms to the detected eyebrow region through the makeup analysis unit 122.
The evaluation algorithm for the eyebrow category may include a plurality of algorithms, each evaluating a different aspect of the eyebrows. For example, the evaluation algorithms for the eyebrow category may include an algorithm for evaluating the eyebrow length, an algorithm for evaluating the horizontality, an algorithm for evaluating the front length of the eyebrow, and an algorithm for evaluating the uniformity of the eyebrow color.
When applying the evaluation algorithms to the eyebrow region, the makeup analysis unit 122 may analyze and evaluate all of the eyebrow length, the horizontality, the front length of the eyebrow, and the uniformity of the eyebrow color.
Referring to FIG. 21, the method by which the controller 130 evaluates the eyebrow length in the eyebrow region is described.
The region detection unit 124 may detect, in the image, a first point 501 at the outer corner of one eye and a second point 502 at the outer edge of the nose. Here, when the region detection unit 124 detects the outer corner of the right eye, it detects the right edge of the nose, and when it detects the outer corner of the left eye, it detects the left edge of the nose.
The makeup analysis unit 122 may obtain a straight line 510 connecting the first point 501 and the second point 502.
The makeup analysis unit 122 may detect a third point 503 at the outer end of the eyebrow, and calculate the distance d1 between the third point 503 and the straight line 510.
The makeup analysis unit 122 may judge the appropriateness of the eyebrow length from the calculated distance d1. For example, the makeup analysis unit 122 may judge the eyebrow length to be 'short' if the calculated distance d1 is no greater than a first distance, 'adequate' if d1 is greater than the first distance and no greater than a second distance, and 'long' if d1 is greater than the second distance. However, this judging method is merely an example, and the invention need not be limited thereto.
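The geometry above can be sketched as follows: the distance from the eyebrow end to the eye-to-nose line, then a threshold classification. The pixel coordinates and the first/second distance thresholds are invented for illustration; the patent deliberately leaves their values open.

```python
import math

def point_line_distance(p, a, b):
    """Distance from point p to the straight line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    # magnitude of the cross product of (b - a) and (p - a), over |b - a|
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return abs(cross) / math.hypot(bx - ax, by - ay)

def classify_eyebrow_length(d1, first=10.0, second=40.0):
    # threshold values are assumptions, not from the patent
    if d1 <= first:
        return "short"
    if d1 <= second:
        return "adequate"
    return "long"

# first point 501: outer eye corner; second point 502: nose edge;
# third point 503: outer eyebrow end (pixel coordinates are invented)
eye_corner, nose_edge, brow_end = (100, 100), (120, 160), (130, 80)
d1 = point_line_distance(brow_end, eye_corner, nose_edge)
print(classify_eyebrow_length(d1))  # adequate
```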
Referring to FIG. 22, the method by which the controller 130 evaluates the horizontality of the eyebrow region is described.
The region detection unit 124 may detect, for one eyebrow in the image, a first point 601 at the inner end of the eyebrow and a second point 602 at the outer end of the same eyebrow.
The region detection unit 124 may obtain a straight line 610 connecting the first point 601 and the second point 602, and calculate the angle θ between the straight line and the horizontal line 620.
The makeup analysis unit 122 may judge the appropriateness of the horizontality of the eyebrows from the calculated angle θ.
For example, the makeup analysis unit 122 may judge the horizontality of the eyebrows to be 'straight' if the calculated angle θ is no greater than a first angle, 'standard' if θ is greater than the first angle and no greater than a second angle, and 'arched' if θ is greater than the second angle. However, this judging method is merely an example, and the invention need not be limited thereto.
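The angle classification above can be sketched as follows. The landmark coordinates and the first/second angle thresholds are invented for illustration; the patent leaves their values open.

```python
import math

def brow_angle(inner, outer):
    """Angle in degrees between the inner-to-outer brow line and the horizontal."""
    dx, dy = outer[0] - inner[0], outer[1] - inner[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def classify_horizontality(theta, first_angle=5.0, second_angle=15.0):
    # threshold values are assumptions, not from the patent
    if theta <= first_angle:
        return "straight"
    if theta <= second_angle:
        return "standard"
    return "arched"

# first point 601: inner brow end; second point 602: outer brow end
# (image coordinates with y growing downward; values are invented)
print(classify_horizontality(brow_angle((100, 120), (160, 110))))  # standard
```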
또한, 메이크업 분석부(122)는 '직선형', '일반형' 또는 '아치형'과 같은 눈썹의 수평 정도에 따른 눈썹 유형을 판단하고, 판단한 각각의 유형에 따라 눈썹 모양의 점수를 산출할 수 있다. 예를 들어, 눈썹 유형에 따른 눈썹 모양을 나타내는 데이터를 미리 저장하고 있을 수 있고, 메이크업 분석부(122)는 판단한 눈썹 유형에 따른 눈썹 모양 데이터를 이미지에서 획득된 눈썹 모양의 데이터와 비교하고, 비교 결과 저장된 데이터와 이미지에서 획득된 데이터의 차가 작을수록 눈썹 점수를 높게 산출하는 방식으로 점수를 결정할 수도 있다.In addition, the makeup analysis unit 122 may determine the eyebrow type according to the horizontal degree of the eyebrows such as 'straight', 'normal' or 'arch', and calculate the score of the eyebrow shape according to each type determined. For example, the eyebrow shape data according to the eyebrow type may be stored in advance, and the makeup analyzer 122 compares the eyebrow shape data according to the determined eyebrow type with the data of the eyebrow shape obtained from the image, and compares them. As a result, the smaller the difference between the stored data and the acquired data in the image, the higher the eyebrow score, the score may be determined.
Referring to FIG. 23, a method by which the controller 130 evaluates the front length of the eyebrow in the eyebrow region will be described.
The area detector 124 may detect, with respect to either eyebrow in the image, a first point 701 that is the inner end of the eyebrow and a second point 702 that is the outer end of the nose. Here, when the area detector 124 detects the inner end point of the right eyebrow, it may detect the right end of the nose, and when it detects the inner end point of the left eyebrow, it may detect the left end of the nose.
The makeup analyzer 122 may acquire a straight line 710 passing vertically through the second point 702, and may acquire the distance d2 between the straight line 710 and the first point 701.
The makeup analyzer 122 may determine the appropriateness of the front eyebrow length from the calculated distance d2. For example, when the calculated distance d2 is equal to or less than a first distance, the makeup analyzer 122 may determine the front eyebrow length as 'short'; when the calculated distance d2 is greater than the first distance and equal to or less than a second distance, as 'appropriate'; and when the calculated distance d2 is greater than the second distance, as 'long'. However, this determination method is merely an example, and the present invention need not be limited thereto.
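The same thresholding pattern applies to the distance d2. Because the reference line 710 is vertical, d2 reduces to a difference of x coordinates; the distance limits below are hypothetical stand-ins for the patent's unspecified first and second distances:

```python
def front_length_rating(brow_inner_x, nose_end_x,
                        first_distance=5.0, second_distance=15.0):
    """Rate the front eyebrow length from the horizontal distance d2
    between the eyebrow's inner end point and the vertical line through
    the nose's outer end (hypothetical pixel limits)."""
    d2 = abs(brow_inner_x - nose_end_x)
    if d2 <= first_distance:
        return 'short'
    if d2 <= second_distance:
        return 'appropriate'
    return 'long'

print(front_length_rating(108.0, 100.0))  # d2 = 8 -> 'appropriate'
```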
According to the method described with reference to FIGS. 21 to 23, makeup evaluation that reflects each person's different eye size, nose size, and the like is possible. That is, rather than uniformly judging that, say, a 5 cm eyebrow is appropriate, the makeup can be evaluated in consideration of facial features that differ from user to user, such as the length of the eyes, the length of the nose, and the position of the eyebrows.
Referring to FIGS. 24 and 25, a method by which the controller 130 evaluates the uniformity of the eyebrow color in the eyebrow region will be described. Eyebrow color uniformity may be an item indicating whether the color of the eyebrows has been made up evenly.
The controller 130 may perform an eyebrow determination operation to determine the uniformity of the eyebrow color.
The area detector 124 may detect first to fifth points 801 to 805 on the eyebrow. Specifically, the area detector 124 may detect a first point 801 that is the outer end of the eyebrow and a second point 802 that is the inner end of the eyebrow, detect a third point 803 midway between the first point 801 and the second point 802, and detect a fourth point 804 midway between the first point 801 and the third point 803 and a fifth point 805 midway between the second point 802 and the third point 803.
As shown in FIG. 24(a), the makeup analyzer 122 may acquire a curve 810 connecting the first to fifth points 801 to 805.
As shown in FIG. 24(b), the makeup analyzer 122 may acquire a perpendicular line 820 at each point along the curve 810 and extract image values along the perpendicular line 820. Here, the image values may include RGB values, and the perpendicular line 820 may be a straight line having a predetermined length.
The makeup analyzer 122 may acquire the maximum of the image values extracted along the perpendicular line 820, and may determine as eyebrow the points having the maximum value or an image value within a predetermined ratio of the maximum value. For example, when the extracted image values are 40, 41, 120, 122, 126, 43, and 40, the makeup analyzer 122 may acquire the maximum value 126 and determine as eyebrow the points whose values are 120, 122, and 126, i.e., within 20% of 126.
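The numerical example above can be reproduced directly; the 20% window is the ratio given in the text:

```python
def eyebrow_pixels(values, ratio=0.2):
    """Indices along the perpendicular line whose image value is the
    maximum or lies within `ratio` of the maximum."""
    peak = max(values)
    return [i for i, v in enumerate(values) if v >= peak * (1.0 - ratio)]

samples = [40, 41, 120, 122, 126, 43, 40]
print(eyebrow_pixels(samples))  # values 120, 122, 126 -> indices [2, 3, 4]
```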
The makeup analyzer 122 may perform the eyebrow determination operation described above on each of the gray channel, red channel, and blue channel of the image.
When FIG. 25(a) is the original image, the makeup analyzer 122 may perform the eyebrow determination operation on the gray channel and determine a first region 901 as eyebrow, as shown in FIG. 25(b). In addition, the makeup analyzer 122 may perform the eyebrow determination operation on the red channel and determine a second region 902 as eyebrow, as shown in FIG. 25(c).
The makeup analyzer 122 may measure the similarity between the first region 901 and the second region 902. The makeup analyzer 122 may measure the similarity through the area where the first region 901 and the second region 902 overlap. That is, the makeup analyzer 122 may calculate a higher similarity as the overlapping area is larger, and a lower similarity as the overlapping area is smaller.
The makeup analyzer 122 may determine the eyebrow uniformity from the calculated similarity. For example, the makeup analyzer 122 may determine the eyebrow uniformity as 'non-uniform' when the calculated similarity is equal to or less than a first reference value, and as 'uniform' when the calculated similarity exceeds a second reference value. However, this determination method is merely an example, and the present invention need not be limited thereto.
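One way to realize the overlap-based similarity and the two reference values is sketched below. The masks are flat binary arrays, the similarity measure (overlap area over the smaller mask's area) is one of several functions consistent with the description, and the reference values are hypothetical:

```python
def overlap_similarity(mask_a, mask_b):
    """Similarity of two binary eyebrow masks as overlap area divided by
    the smaller mask's area; any function increasing in the overlap
    area would satisfy the patent's description."""
    overlap = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    smaller = min(sum(mask_a), sum(mask_b))
    return overlap / smaller if smaller else 0.0

def uniformity(similarity, first_ref=0.5, second_ref=0.8):
    # Hypothetical reference values; similarities between the two
    # references are left open by the text, reported here as 'intermediate'.
    if similarity <= first_ref:
        return 'non-uniform'
    if similarity > second_ref:
        return 'uniform'
    return 'intermediate'

gray_mask = [1, 1, 1, 1, 0, 0]  # region 901, from the gray channel
red_mask = [1, 1, 1, 0, 1, 0]   # region 902, from the red channel
sim = overlap_similarity(gray_mask, red_mask)
print(sim, uniformity(sim))  # 3 overlapping of 4 -> 0.75 -> 'intermediate'
```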
Meanwhile, when the area of the second region 902 determined as eyebrow in the red channel is equal to or less than a reference area, the makeup analyzer 122 may perform the eyebrow determination operation on the blue channel image and determine a third region 903 as eyebrow, as shown in FIG. 25(d). The makeup analyzer 122 may measure the similarity between the first region 901 and the third region 903 and determine the eyebrow uniformity as described above.
After the makeup is evaluated, the wireless communication unit 140 may transmit an evaluation result signal to the mobile terminal 10, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
FIG. 26 may be an exemplary diagram illustrating the makeup evaluation result for the eyebrow category. The display unit 14 may display the evaluation results for eyebrow length, horizontal slope, front eyebrow length, and eyebrow uniformity. However, the method of presenting the evaluation result shown in FIG. 26 is merely an example.
Next, FIGS. 27 and 28 are diagrams for describing the evaluation algorithm applied when evaluating the dark circle category according to the second embodiment of the present invention, and FIG. 29 is an exemplary diagram for describing a method of displaying the evaluation result of the dark circle category according to an embodiment of the present invention.
The area detector 124 may detect a plurality of points in the eye region. For example, the area detector 124 may detect first to fourth points 1101 to 1104 in the eye region, where the first point 1101 may be the outer end point of the eye, the second point 1102 the inner end point of the eye, the third point 1103 the upper end point of the eye, and the fourth point 1104 the lower end point of the eye.
The makeup analyzer 122 may calculate the horizontal distance l1 of the eye by measuring the straight-line distance connecting the first and second points 1101 and 1102, and may calculate the vertical distance l2 of the eye by measuring the straight-line distance connecting the third and fourth points 1103 and 1104.
The makeup analyzer 122 may acquire a reference line 1110 based on the first to fourth points 1101 to 1104, the horizontal distance l1, and the vertical distance l2. For example, the makeup analyzer 122 may acquire a reference line 1110 having a length corresponding to a predetermined ratio of the horizontal distance l1, at a position spaced downward from the third point 1103 by the vertical distance l2. The length of the reference line 1110 may be 80% of the horizontal distance l1, but this is merely an example.
Referring to FIG. 28, the makeup analyzer 122 may extract the maximum of the RGB values in the left third region 1111 of the reference line 1110, extract the maximum of the RGB values in the right third region 1112 of the reference line 1110, and extract the minimum of the RGB values in the center half region 1113 of the reference line 1110.
The makeup analyzer 122 may take the smaller of the two extracted maxima and calculate the difference between that value and the previously extracted minimum. The makeup analyzer 122 may evaluate the darkness of the dark circle based on the calculated difference.
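The reference-line computation can be sketched on a 1-D array of brightness values sampled along the line. The split into outer thirds and a center half follows the text; the sample values are hypothetical:

```python
def darkcircle_contrast(line_values):
    """Darkness measure along the line under the eye: the smaller of the
    two outer-third maxima (surrounding skin) minus the center-half
    minimum (darkest under-eye pixel)."""
    n = len(line_values)
    third = n // 3
    left_max = max(line_values[:third])      # left third, region 1111
    right_max = max(line_values[-third:])    # right third, region 1112
    quarter = n // 4
    center_min = min(line_values[quarter:n - quarter])  # center half, region 1113
    return min(left_max, right_max) - center_min

samples = [200, 205, 210, 150, 120, 130, 208, 204, 200]
print(darkcircle_contrast(samples))  # min(210, 208) - 120 = 88
```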
According to the present invention, the dark circle target region can be detected through the first to fourth points 1101 to 1104 and the eye distances; within that region, the surrounding skin color is obtained through the RGB values on both sides and the color of the darkest part under the eye is obtained through the RGB values in the center, so the darkness of the dark circle can be measured more precisely. That is, rather than simply measuring the dark circle, it is possible to evaluate whether the makeup covers the dark circle well enough to resemble the surrounding skin color.
Next, FIGS. 30 to 34 are diagrams for describing the evaluation algorithm applied when evaluating the color harmony category according to the second embodiment of the present invention, and FIG. 35 is an exemplary diagram for describing a method of displaying the evaluation result of the color harmony category according to the second embodiment of the present invention.
First, the controller 130 may control the extraction of the skin color. Specifically, referring to FIG. 30(a), the area detector 124 may detect a nose region 3001 in the face included in the image. In particular, the area detector 124 may detect the bridge of the nose. The makeup analyzer 122 may extract a plurality of RGB color values corresponding to a plurality of points included in the nose region 3001 and calculate the average of the extracted RGB color values. The makeup analyzer 122 may extract the color corresponding to the calculated RGB average as the skin color.
The makeup analyzer 122 may plot the extracted skin color in the Lab color space, detect the color closest to the extracted skin color, and determine the detected color as the skin color. As shown in FIG. 30(b), the makeup DB manager 110 stores a plurality of representative skin colors, and the makeup analyzer 122 may acquire, from the stored representative colors, the color closest to the detected skin color and determine the acquired color as the skin color. For example, in FIG. 30, the makeup analyzer 122 may determine s5 as the skin color.
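Choosing the nearest stored representative color amounts to a nearest-neighbour search. In this sketch the representative colors are assumed to be stored as Lab triples, and the labels and values are hypothetical:

```python
def nearest_color(extracted, representatives):
    """Return the label of the stored representative color whose Lab
    triple is closest (Euclidean distance) to the extracted average."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(representatives,
               key=lambda label: dist(representatives[label], extracted))

# Hypothetical representative skin colors as (L, a, b) triples.
skin_colors = {'s4': (60.0, 12.0, 18.0),
               's5': (65.0, 10.0, 16.0),
               's6': (70.0, 8.0, 14.0)}
print(nearest_color((64.0, 10.5, 16.2), skin_colors))  # -> 's5'
```

The same lookup serves the lip, blusher, and eye shadow tables by swapping in their representative color sets.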
Next, the controller 130 may control the extraction of the lip color. Specifically, referring to FIG. 31(a), the area detector 124 may detect a lip region 3101 in the face included in the image. In particular, the area detector 124 may detect the lower lip region. The makeup analyzer 122 may extract a plurality of RGB color values corresponding to a plurality of points included in the lip region 3101 and calculate the average of the extracted RGB color values. The makeup analyzer 122 may extract the color corresponding to the calculated RGB average as the lip color.
The makeup analyzer 122 may plot the extracted lip color in the Lab color space, detect the color closest to the extracted lip color, and determine the detected color as the lip color. As shown in FIG. 31(b), the makeup DB manager 110 stores a plurality of representative lip colors, and the makeup analyzer 122 may acquire the color closest to the detected lip color among the plurality of representative lip colors and determine the acquired color as the lip color. For example, in FIG. 31, the makeup analyzer 122 may determine l7 as the lip color.
Next, the controller 130 may control the extraction of the blusher color. Specifically, referring to FIG. 32(a), the area detector 124 may detect a cheek region 3201 in the face included in the image. The cheek region 3201 may include both the left cheek region and the right cheek region.
When extracting the blusher color, the makeup analyzer 122 may perform an obstruction removal operation. Here, the obstruction removal operation may be an operation for minimizing cases in which the cheek region is covered by hair or the like and the blusher color is consequently determined incorrectly. When performing the obstruction removal operation, the makeup analyzer 122 may convert the image to a gray image and then remove regions whose image value is smaller than a predetermined reference value; for example, the predetermined reference value may be 0.35, but this is merely an example, and the present invention is not limited thereto. The makeup analyzer 122 may extract a plurality of RGB color values corresponding to the cheek region 3201 excluding the removed regions, and calculate the average of the extracted RGB color values. The makeup analyzer 122 may extract the color corresponding to the calculated RGB average as the blusher color.
The makeup analyzer 122 may plot the extracted blusher color in the Lab color space, detect the color closest to the extracted blusher color, and determine the detected color as the blusher color. As shown in FIG. 32(b), the makeup DB manager 110 stores a plurality of representative blusher colors, and the makeup analyzer 122 may acquire the color closest to the detected blusher color among the plurality of representative blusher colors and determine the acquired color as the blusher color. For example, in FIG. 32, the makeup analyzer 122 may determine b8 as the blusher color.
Meanwhile, the makeup analyzer 122 may instead determine, as the representative blusher color, the color in the cheek region 3201 having the largest a value in the Lab color space.
Next, the controller 130 may control the extraction of the eye shadow color.
Specifically, referring to FIG. 33, the area detector 124 may detect an above-eye region 3301 in the face included in the image. The above-eye region 3301 may include both the region above the left eye and the region above the right eye.
When extracting the eye shadow color, the makeup analyzer 122 may perform the obstruction removal operation described above. The makeup analyzer 122 may extract a plurality of RGB color values corresponding to the above-eye region 3301 excluding the regions removed through the obstruction removal operation, and calculate the average of the extracted RGB color values. The makeup analyzer 122 may extract the color corresponding to the calculated RGB average as the eye shadow color. In particular, the makeup analyzer 122 may extract the left eye shadow color and the right eye shadow color separately. The makeup analyzer 122 may determine a representative shadow color based on the extracted eye shadow colors.
According to an embodiment, the makeup analyzer 122 may determine the representative shadow color in different ways depending on the makeup theme. When the makeup theme is natural or lovely, the makeup analyzer 122 may determine, as the representative shadow color, whichever of the extracted left and right eye shadow colors has the larger a value in the Lab color space. When the makeup theme is smoky, the makeup analyzer 122 may determine, as the representative shadow color, whichever has the smaller L value in the Lab color space. This is because the recommended eye shadow color differs according to the makeup theme; by determining the representative shadow color in a theme-dependent way, it is possible to evaluate not only whether the makeup was done well, but also whether it suits the theme.
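The theme-dependent choice of the representative shadow color reduces to comparing a single Lab component of the two extracted colors; the (L, a, b) triples below are hypothetical:

```python
def representative_shadow(left, right, theme):
    """Pick the representative shadow color from the two extracted
    (L, a, b) triples: natural/lovely keeps the color with the larger
    a value, smoky keeps the color with the smaller L value."""
    if theme in ('natural', 'lovely'):
        return max(left, right, key=lambda c: c[1])  # a component
    if theme == 'smoky':
        return min(left, right, key=lambda c: c[0])  # L component
    raise ValueError('unsupported makeup theme: %s' % theme)

left_shadow = (55.0, 30.0, 10.0)
right_shadow = (40.0, 22.0, 8.0)
print(representative_shadow(left_shadow, right_shadow, 'lovely'))  # larger a -> left
print(representative_shadow(left_shadow, right_shadow, 'smoky'))   # smaller L -> right
```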
Next, a method of evaluating color harmony using the determined skin color, lip color, blusher color, and shadow color will be described.
The makeup DB manager 110 may store a score table that includes a plurality of first sample colors and a plurality of second sample colors, and in which score data is mapped to each pair of sample colors consisting of one of the plurality of first sample colors and one of the plurality of second sample colors.
For example, referring to FIG. 34(a), the makeup DB manager 110 may store a skin-lip score table that maps a plurality of skin colors to a plurality of lip colors and indicates the corresponding scores. The makeup analyzer 122 may look up the determined skin color and the determined lip color in the skin-lip score table, acquire the score mapped to the retrieved skin and lip colors, and determine the acquired score as the skin & lips harmony score.
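The table lookup amounts to indexing a mapping keyed by the color pair. The entries and scores below are hypothetical placeholders for the stored skin-lip score table, and the zero fallback for unmapped pairs is an assumption of this sketch:

```python
# Hypothetical skin-lip score table: (skin color, lip color) -> score.
SKIN_LIP_SCORES = {
    ('s5', 'l7'): 9,
    ('s5', 'l3'): 6,
    ('s4', 'l7'): 7,
}

def skin_lip_harmony(skin, lip):
    """Score mapped to the determined color pair; pairs absent from
    this sketch's table fall back to 0 (an assumption)."""
    return SKIN_LIP_SCORES.get((skin, lip), 0)

print(skin_lip_harmony('s5', 'l7'))  # -> 9
print(skin_lip_harmony('s6', 'l1'))  # unmapped pair -> 0
```

The skin-blusher table described next works the same way with its own pair keys.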
Similarly, referring to FIG. 34(b), the makeup DB manager 110 may store a skin-blusher score table that maps a plurality of skin colors to a plurality of blusher colors and indicates the corresponding scores. The makeup analyzer 122 may look up the determined skin color and the determined blusher color in the skin-blusher score table, acquire the score mapped to the retrieved skin and blusher colors, and determine the acquired score as the skin & blusher harmony score.
Next, a method of determining the skin & eye shadow harmony score will be described. The makeup analyzer 122 may calculate the difference between the skin color and the representative shadow color determined by the method described with reference to FIG. 33. Specifically, the makeup analyzer 122 may calculate the difference in the a values and the difference in the L values of the representative shadow color and the skin color, and determine the score based on the calculated a-value and L-value differences. Likewise, the makeup analyzer 122 may determine the score differently depending on the makeup theme.
In this way, for the eye shadow the makeup analyzer 122 may analyze the color harmony with the skin differently from the lip color and blusher cases. This is because blusher and lip colors tend to belong to similar color families even for different makeup themes, whereas eye shadow color families can differ completely according to the makeup theme. Accordingly, the color harmony category of the makeup can be evaluated more precisely.
Meanwhile, when extracting at least one of the skin color, lip color, blusher color, and shadow color, the makeup analyzer 122 may assign a score of 0 to any color that cannot be extracted.
After evaluating the color harmony category, the wireless communication unit 140 may transmit an evaluation result signal to the mobile terminal 10, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
FIG. 35 may be an exemplary diagram illustrating the makeup evaluation result for the color harmony category. The display unit 14 may display the evaluation results for skin & lips harmony, skin & blusher harmony, and skin & eye shadow harmony. However, the method of presenting the evaluation result shown in FIG. 35 is merely an example.
Next, FIG. 36 is a diagram for describing the evaluation algorithm applied when evaluating the lips category according to the second embodiment of the present invention, and FIG. 37 is an exemplary diagram for describing a method of displaying the evaluation result of the lips category according to the second embodiment of the present invention.
The area detector 124 may detect a lip region 2001 in the face included in the image. The detected lip region may be as shown in FIG. 36(a).
With regard to the makeup of the lip region 2001, the makeup evaluator 120 may evaluate lip uniformity and lip dryness. Here, lip uniformity indicates the uniformity of the lip color and may be an item indicating whether the lip makeup has been applied evenly. Lip dryness may be an item indicating whether the makeup has been applied well while keeping the lips moist.
First, a method by which the makeup evaluator 120 evaluates lip uniformity will be described. The makeup analyzer 122 may convert the detected lip region 2001 into the Lab color space and, by thresholding the image in the L space, acquire an image of the reflective region within the lip region. For example, the makeup analyzer 122 may detect, in the image in the L space, a region composed of pixels whose L values fall within a preset range, and determine the detected region as the reflective region.
FIG. 36(b) may be an exemplary diagram illustrating the reflective region determined by the makeup analyzer 122. The reflective region 2002 is shown as largest in stage 1 and as decreasing in size toward stage 5.
The makeup analyzer 122 may calculate the size of the lip region in the image as shown in FIG. 36(a) and calculate the size of the detected reflective region in the image as shown in FIG. 36(b).
The makeup analyzer 122 may calculate the ratio of the size of the reflective region to the size of the lip region. The makeup analyzer 122 may evaluate the lip uniformity based on the calculated size ratio. That is, the makeup analyzer 122 may evaluate the lip uniformity as high for the lips shown in stage 1 and as increasingly low toward stage 5.
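Both lip measures reduce to an area ratio between a thresholded sub-region and the whole lip region; a sketch on flat binary masks, with hypothetical mask contents:

```python
def area_ratio(sub_mask, lip_mask):
    """Ratio of a detected sub-region (reflective or wrinkle pixels)
    to the whole lip region, both given as binary masks."""
    lip_area = sum(lip_mask)
    return sum(sub_mask) / lip_area if lip_area else 0.0

lip_mask = [1] * 100                     # every pixel belongs to the lips
reflective = [1] * 40 + [0] * 60         # 40 of 100 lip pixels are reflective
print(area_ratio(reflective, lip_mask))  # -> 0.4
```

For uniformity the reflective-region mask is used; for dryness the wrinkle-region mask takes its place.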
Similarly, the makeup evaluator 120 may evaluate lip dryness.
The makeup analyzer 122 may detect the lip region and acquire a lip region image 2001 as illustrated in FIG. 36(a).
The makeup analyzer 122 may obtain a filtered image by applying a high-pass filter to the acquired lip region image, and may then obtain a mask image representing the vertical wrinkles of the lip region by thresholding the R channel of the filtered image. That is, the makeup analyzer 122 may detect the region of the lip area composed of pixels whose R values fall within a preset range, and determine the detected region to be the wrinkle region 2002. The wrinkle region 2002 may be obtained in stages, similar to the example shown in FIG. 36(b).
The makeup analyzer 122 may calculate the size of the lip region in the image shown in FIG. 36(a) and the size of the wrinkle region 2002 in the image shown in FIG. 36(b).
The makeup analyzer 122 may calculate the ratio of the size of the wrinkle region to the size of the lip region and evaluate lip dryness based on the calculated ratio. That is, for the lips shown in step 1 the ratio is calculated to be large and lip dryness is rated high, while toward step 5 the ratio becomes smaller and lip dryness is rated lower.
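The dryness pipeline (high-pass filter, R-channel threshold, wrinkle-to-lip area ratio) can be sketched as follows. This is a simplified stand-in: the 3x3 pixel-minus-local-mean filter and the threshold value are assumptions, as the text does not specify the filter kernel or the preset range.

```python
import numpy as np

def wrinkle_ratio(lip_r_channel, lip_mask, hp_thresh=10.0):
    """Estimate lip dryness as the wrinkle-to-lip area ratio.

    lip_r_channel: float array, R channel of the lip region image.
    lip_mask: boolean array marking lip pixels.
    A crude 3x3 high-pass filter (pixel minus local mean) stands in
    for the unspecified filter; hp_thresh is a hypothetical threshold.
    """
    h, w = lip_r_channel.shape
    p = np.pad(lip_r_channel.astype(float), 1, mode="edge")
    # local 3x3 mean computed from the nine shifted views of the padded image
    local_mean = sum(
        p[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    high_pass = lip_r_channel - local_mean
    # pixels with a strong high-frequency response are treated as wrinkle pixels
    wrinkle_mask = (np.abs(high_pass) > hp_thresh) & lip_mask
    lip_area = np.count_nonzero(lip_mask)
    return np.count_nonzero(wrinkle_mask) / lip_area if lip_area else 0.0
```

A larger returned ratio would correspond to a higher dryness rating, matching the step-1-to-step-5 ordering in the text.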
Although FIG. 36 describes the reflection area 2002 and the wrinkle region 2002 as if they were the same, this is merely one example given for convenience of description; in a single image the reflection area 2002 and the wrinkle region 2002 may be detected differently, and accordingly lip uniformity and lip dryness may be evaluated differently.
However, the lip evaluation method described above is merely an example and need not be limited thereto.
After evaluating the lip category, the wireless communication unit 140 may transmit an evaluation result signal to the mobile terminal 10, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
FIG. 37 is an exemplary diagram illustrating a makeup evaluation result for the lip category. The display unit 14 may display the evaluation result for lip uniformity, showing a score for lip uniformity and a score for lip dryness, respectively. However, the method of displaying the evaluation result shown in FIG. 37 is merely exemplary.
Next, FIG. 38 is a diagram for explaining an evaluation algorithm applied when evaluating the blemish category according to the second embodiment of the present invention, and FIG. 39 is an exemplary diagram for explaining a method of displaying the evaluation result of the blemish category according to the second embodiment of the present invention.
The image detector 121 may acquire an image and detect a face region included in the acquired image. For example, the image detector 121 may detect a face region as shown in FIG. 38(a).
The image detector 121 may detect a cheek area 2201 in the detected face region, and the cheek area 2201 may include a left cheek area and a right cheek area. For example, the image detector 121 may detect the cheek area 2201 in the face region as shown in FIG. 38(b).
The image detector 121 may detect a chin area 2202 in the detected face region. The makeup analyzer 122 may convert all pixels included in the chin area 2202 into the Lab color space and calculate the mean of L, a, and b, respectively, to obtain the average Lab value of the skin.
The makeup analyzer 122 may convert the average Lab value into an RGB value to calculate the average RGB value of the skin.
Next, the makeup analyzer 122 may set a blemish target area. Specifically, the makeup analyzer 122 may convert the image including the face region into the Lab color space and obtain a gap image corresponding to the difference between the converted Lab values and the average Lab value of the skin calculated above.
The makeup analyzer 122 may obtain a horizontal gap image corresponding to the difference between each of the plurality of pixels included in the gap image and the pixel next to it (to the left or right), and a vertical gap image corresponding to the difference between each pixel and the pixel above or below it.
The makeup analyzer 122 may obtain a color-difference image corresponding to the sum of the square of the pixel values of the horizontal gap image and the square of the pixel values of the vertical gap image, and may obtain a target area composed of the points in the color-difference image whose pixel values are greater than a preset threshold.
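The gap-image construction in the last two paragraphs can be sketched for a single channel. The patent works on the full Lab image; restricting to one channel and the threshold value below are both simplifying assumptions.

```python
import numpy as np

def blemish_target_mask(channel, skin_mean, thresh=50.0):
    """Build a candidate-blemish mask from a single-channel gap image.

    channel: float array, e.g. the L channel of the face image.
    skin_mean: the average skin value computed from the chin area.
    The gap image is the per-pixel difference from the skin average;
    the horizontal/vertical gap images are neighbor differences of that
    gap image, and their squared sum (the color-difference image) is
    thresholded into the target mask.
    """
    gap = channel.astype(float) - skin_mean
    # neighbor differences, padded with the last row/column so shapes match
    dx = np.diff(gap, axis=1, append=gap[:, -1:])  # left-right difference
    dy = np.diff(gap, axis=0, append=gap[-1:, :])  # up-down difference
    color_diff = dx ** 2 + dy ** 2
    return color_diff > thresh
```

In the full pipeline this mask would then be cleaned up with a morphological operation before clustering, as described next.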
The makeup analyzer 122 may perform a morphological operation on the target area and set the clustered region as the blemish target area 2203, as shown in FIG. 38(d).
The makeup analyzer 122 may obtain outlier points 2213, 2223, and 2233 in the blemish target area 2203 whose pixel values differ by more than a preset threshold. In particular, as shown in FIG. 38(e), the makeup analyzer 122 may recognize the outlier points 2213 and 2223 found inside the cheek area 2201 as blemishes.
The makeup analyzer 122 may detect the size of the cheek area 2201 and the sizes of the outlier points 2213 and 2223 recognized as blemishes, and calculate the ratio of the size of the outlier points 2213 and 2223 to the size of the cheek area 2201. The makeup analyzer 122 may evaluate skin uniformity based on the calculated size ratio. For example, the makeup analyzer 122 may assign 5 points if the size ratio falls between a first reference value (e.g., 0%) and a second reference value (e.g., 5%), 3 points if it falls between the second reference value (e.g., 5%) and a third reference value (e.g., 10%), and 1 point if it falls between the third reference value (e.g., 10%) and a fourth reference value (e.g., 20%).
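The band-based scoring described here reduces to a small lookup. The band edges are the example reference values from the text; returning 0 for ratios at or above the fourth reference value is an assumption, since that case is not stated.

```python
def skin_uniformity_score(size_ratio):
    """Map the blemish-to-cheek size ratio (in percent) to a score.

    Bands follow the example reference values: [0, 5) -> 5 points,
    [5, 10) -> 3 points, [10, 20) -> 1 point. Scoring ratios of 20%
    or more as 0 is an assumption.
    """
    bands = [(5.0, 5), (10.0, 3), (20.0, 1)]
    for upper, score in bands:
        if size_ratio < upper:
            return score
    return 0
```

For example, a cheek area of 10,000 pixels with 750 blemish pixels gives a 7.5% ratio and a score of 3 under these example bands.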
Skin uniformity indicates whether the skin color of the face has been made up evenly, and may serve as an index of how well the makeup covers defects such as blemishes, spots, or wrinkles.
After evaluating the blemish category, the wireless communication unit 140 may transmit an evaluation result signal to the mobile terminal 10, and the display unit 14 of the mobile terminal 10 may display the evaluation result.
FIG. 39 is an exemplary diagram illustrating a makeup evaluation result for the blemish category. The display unit 14 may display the evaluation result for blemishes. However, the method of displaying the evaluation result shown in FIG. 39 is merely exemplary.
Referring again to FIG. 20.
The makeup server 100 may transmit an evaluation result signal of the makeup evaluation to the mobile terminal 10 (S21).
In this way, the scores evaluated for the detailed evaluation categories, such as the eyebrow, dark circle, color harmony, lip, and blemish categories, may be summed to derive a total makeup score. Depending on the embodiment, the ratio at which each detailed evaluation category contributes to the total score may be set differently for each makeup theme. For example, when the makeup theme is natural, the total score may be calculated with the dark circle category weighted at 60% and the remaining eyebrow, color harmony, lip, and blemish categories each weighted at 10%; when the makeup theme is smoky, the total score may be calculated with the color harmony category weighted at 35%, the lip category at 40%, the eyebrow category at 15%, the dark circle category at 5%, and the blemish category at 5%.
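The theme-dependent weighting can be sketched directly from the two example weight sets. The category and theme identifiers below are illustrative names, not terms from the patent.

```python
def total_makeup_score(category_scores, theme):
    """Weighted total over the five detailed evaluation categories.

    category_scores: dict with keys 'eyebrow', 'dark_circle',
    'color_harmony', 'lip', 'blemish', each a score on a common scale.
    The weights follow the two example themes from the text.
    """
    weights = {
        "natural": {"dark_circle": 0.60, "eyebrow": 0.10,
                    "color_harmony": 0.10, "lip": 0.10, "blemish": 0.10},
        "smoky":   {"color_harmony": 0.35, "lip": 0.40, "eyebrow": 0.15,
                    "dark_circle": 0.05, "blemish": 0.05},
    }
    w = weights[theme]
    return sum(w[cat] * category_scores[cat] for cat in w)
```

Note that each theme's weights sum to 1, so the total stays on the same scale as the per-category scores.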
The display unit 14 of the mobile terminal 10 may display the makeup evaluation result based on the received evaluation result signal (S23).
The display unit 14 may display the makeup result as shown in FIG. 40.
That is, the display unit 14 may display a score for each makeup evaluation category and a total score.
Alternatively, the display unit 14 may display the makeup evaluation results as illustrated in FIGS. 26, 29, 35, 37, and 39. In particular, when any one of the plurality of evaluation categories shown in FIG. 40 is selected, the display unit 14 may display the evaluation result shown in FIG. 26, 29, 35, 37, or 39 as the detailed evaluation result for the selected category.
Meanwhile, although the makeup server 100 has been described above as performing the makeup evaluation, this is merely an example for convenience of description. The mobile terminal 10 may acquire the image and perform the makeup evaluation directly; in this case, it may receive data related to makeup evaluation from the makeup server 100.
Meanwhile, although the makeup evaluation system has been described above as divided into the first embodiment according to FIGS. 3 to 18 and the second embodiment according to FIGS. 19 to 40, this division is only for convenience of description, and the system is not limited thereto. That is, the makeup evaluation system and its operating method according to the first embodiment described with reference to FIGS. 3 to 18 may be combined with the makeup evaluation system and its operating method according to the second embodiment described with reference to FIGS. 19 to 40.
For example, the makeup analyzer 122 may output a makeup score by applying both the makeup score data shown in FIG. 6 and the algorithms described with reference to FIGS. 21 to 25.
In addition, the display unit 14 may display a combination of the makeup result screen according to the first embodiment and the makeup result screen according to the second embodiment.
The present invention described above can be embodied as computer-readable code on a medium on which a program is recorded. The computer-readable medium includes all kinds of recording devices in which data readable by a computer system is stored. Examples of computer-readable media include hard disk drives (HDD), solid state disks (SSD), silicon disk drives (SDD), ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. The computer may also include the controller 180 of the mobile terminal. Accordingly, the above detailed description should not be interpreted as limiting in all respects but should be considered illustrative. The scope of the invention should be determined by reasonable interpretation of the appended claims, and all changes within the equivalent scope of the invention are included in the scope of the invention.

Claims (1)

  1. A makeup evaluation system, comprising:
    a mobile terminal configured to photograph a face image and transmit the photographed face image to a makeup server; and
    a makeup server configured to store makeup score data,
    detect at least one face region in the face image upon receiving the face image from the mobile terminal, calculate a makeup score for each detected face region based on the makeup score data, and
    transmit the calculated makeup score to the mobile terminal,
    wherein, upon receiving a makeup theme from the mobile terminal, the makeup server calculates the makeup score according to the makeup theme, and
    wherein the makeup score is calculated differently according to the shape of the detected face region and the makeup theme.
PCT/KR2018/001412 2017-02-01 2018-02-01 Makeup evaluation system and operation method thereof WO2018143707A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2019541436A JP7020626B2 (en) 2017-02-01 2018-02-01 Makeup evaluation system and its operation method
CN201880008880.3A CN110235169B (en) 2017-02-01 2018-02-01 Cosmetic evaluation system and operation method thereof
EP18748560.2A EP3579176A4 (en) 2017-02-01 2018-02-01 Makeup evaluation system and operation method thereof
US16/482,511 US11113511B2 (en) 2017-02-01 2018-02-01 Makeup evaluation system and operating method thereof
US17/389,108 US11748980B2 (en) 2017-02-01 2021-07-29 Makeup evaluation system and operating method thereof

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020170014403A KR101872635B1 (en) 2017-02-01 2017-02-01 Automatic make-up evaluation system and operating method thereof
KR10-2017-0014403 2017-02-01
KR10-2018-0012931 2018-02-01
KR1020180012931A KR102066892B1 (en) 2018-02-01 2018-02-01 Make-up evaluation system and operating method thereof

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/482,511 A-371-Of-International US11113511B2 (en) 2017-02-01 2018-02-01 Makeup evaluation system and operating method thereof
US17/389,108 Continuation US11748980B2 (en) 2017-02-01 2021-07-29 Makeup evaluation system and operating method thereof

Publications (1)

Publication Number Publication Date
WO2018143707A1 true WO2018143707A1 (en) 2018-08-09

Family

ID=63040956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/001412 WO2018143707A1 (en) 2017-02-01 2018-02-01 Makeup evaluation system and operation method thereof

Country Status (3)

Country Link
US (2) US11113511B2 (en)
JP (1) JP7020626B2 (en)
WO (1) WO2018143707A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200134371A1 (en) * 2018-10-25 2020-04-30 L'oreal Systems and methods for providing personalized product recommendations using deep learning

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3407290A4 (en) * 2016-01-22 2018-11-28 Panasonic Intellectual Property Management Co., Ltd. Makeup trend analysis device, makeup trend analysis method, and makeup trend analysis program
GB2569655B (en) * 2017-12-22 2022-05-11 Jemella Ltd Training system and device
US10755447B2 (en) * 2018-04-19 2020-08-25 Adobe Inc. Makeup identification using deep learning
US11222719B2 (en) 2019-12-28 2022-01-11 Kpn Innovations, Llc Methods and systems for informing product decisions
KR20220157502A (en) 2020-03-31 2022-11-29 스냅 인코포레이티드 Augmented Reality Beauty Product Tutorials
CN111553220A (en) * 2020-04-21 2020-08-18 海信集团有限公司 Intelligent device and data processing method
EP4150514A1 (en) * 2020-06-30 2023-03-22 L'Oréal High-resolution controllable face aging with spatially-aware conditional gans
US20220202168A1 (en) * 2020-12-30 2022-06-30 L'oreal Digital makeup palette
CN113194323B (en) * 2021-04-27 2023-11-10 口碑(上海)信息技术有限公司 Information interaction method, multimedia information interaction method and device
CN115170894B (en) * 2022-09-05 2023-07-25 深圳比特微电子科技有限公司 Method and device for detecting smoke and fire
US11837019B1 (en) * 2023-09-26 2023-12-05 Dauntless Labs, Llc Evaluating face recognition algorithms in view of image classification features affected by smart makeup

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060062263A (en) * 2004-12-03 2006-06-12 엘지전자 주식회사 Apparatus and method for providing make-up information using the mobile phone
KR20110127396A (en) * 2010-05-19 2011-11-25 삼성전자주식회사 Method and apparatus for providing a virtual make-up function of a portable terminal
KR20140061604A (en) * 2012-11-13 2014-05-22 김지원 Method for guiding make-up by using mobile terminal and mobiel terminal using the same
JP2014147561A (en) * 2013-02-01 2014-08-21 Panasonic Corp Makeup support device, makeup support system and makeup support method
JP2014149696A (en) * 2013-02-01 2014-08-21 Panasonic Corp Makeup support apparatus, makeup support method and makeup support program

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3490910B2 (en) 1998-09-28 2004-01-26 三洋電機株式会社 Face area detection device
JP2002324126A (en) 2001-04-25 2002-11-08 Sharp Corp Providing system for make-up advise information
JP5261586B2 (en) 2007-08-10 2013-08-14 株式会社 資生堂 Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
JP2009064423A (en) 2007-08-10 2009-03-26 Shiseido Co Ltd Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
JP2009213751A (en) * 2008-03-12 2009-09-24 Sony Ericsson Mobilecommunications Japan Inc Program, method, and device for makeup evaluation
JP5656603B2 (en) * 2010-12-14 2015-01-21 キヤノン株式会社 Information processing apparatus, information processing method, and program thereof
KR101733512B1 (en) 2010-12-22 2017-05-10 에스케이플래닛 주식회사 Virtual experience system based on facial feature and method therefore
US20130058543A1 (en) * 2011-09-06 2013-03-07 The Proctor & Gamble Company Systems, devices, and methods for image analysis
JP5818714B2 (en) 2012-02-20 2015-11-18 花王株式会社 Makeup face image evaluation apparatus and makeup face image evaluation method
WO2014118842A1 (en) 2013-02-01 2014-08-07 パナソニック株式会社 Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program
JP6288404B2 (en) * 2013-02-28 2018-03-07 パナソニックIpマネジメント株式会社 Makeup support device, makeup support method, and makeup support program
CN105263399B (en) * 2013-06-07 2017-06-23 富士胶片株式会社 Transparent sensation evaluating device, transparent appraisal method
US10339685B2 (en) 2014-02-23 2019-07-02 Northeastern University System for beauty, cosmetic, and fashion analysis
WO2015127354A1 (en) 2014-02-24 2015-08-27 Siemens Healthcare Diagnostics Inc. Potentiometric sensor, kit and method of use
JP6128356B2 (en) 2016-01-22 2017-05-17 パナソニックIpマネジメント株式会社 Makeup support device and makeup support method
JP6128357B2 (en) 2016-01-22 2017-05-17 パナソニックIpマネジメント株式会社 Makeup support device and makeup support method
EP3407290A4 (en) * 2016-01-22 2018-11-28 Panasonic Intellectual Property Management Co., Ltd. Makeup trend analysis device, makeup trend analysis method, and makeup trend analysis program
JP6731616B2 (en) * 2016-06-10 2020-07-29 パナソニックIpマネジメント株式会社 Virtual makeup device, virtual makeup method, and virtual makeup program
JP6986676B2 (en) * 2016-12-28 2021-12-22 パナソニックIpマネジメント株式会社 Cosmetic presentation system, cosmetic presentation method, and cosmetic presentation server
KR101872635B1 (en) 2017-02-01 2018-06-29 주식회사 엘지생활건강 Automatic make-up evaluation system and operating method thereof
US10433630B2 (en) * 2018-01-05 2019-10-08 L'oreal Cosmetic applicator system including trackable cosmetic device, and client device to assist users in makeup application
US10691932B2 (en) * 2018-02-06 2020-06-23 Perfect Corp. Systems and methods for generating and analyzing user behavior metrics during makeup consultation sessions

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060062263A (en) * 2004-12-03 2006-06-12 엘지전자 주식회사 Apparatus and method for providing make-up information using the mobile phone
KR20110127396A (en) * 2010-05-19 2011-11-25 삼성전자주식회사 Method and apparatus for providing a virtual make-up function of a portable terminal
KR20140061604A (en) * 2012-11-13 2014-05-22 김지원 Method for guiding make-up by using mobile terminal and mobiel terminal using the same
JP2014147561A (en) * 2013-02-01 2014-08-21 Panasonic Corp Makeup support device, makeup support system and makeup support method
JP2014149696A (en) * 2013-02-01 2014-08-21 Panasonic Corp Makeup support apparatus, makeup support method and makeup support program

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200134371A1 (en) * 2018-10-25 2020-04-30 L'oreal Systems and methods for providing personalized product recommendations using deep learning
WO2020084057A1 (en) * 2018-10-25 2020-04-30 L'oreal Systems and methods for providing personalized product recommendations using deep learning
US11010636B2 (en) 2018-10-25 2021-05-18 L'oreal Systems and methods for providing personalized product recommendations using deep learning
CN112889065A (en) * 2018-10-25 2021-06-01 莱雅公司 System and method for providing personalized product recommendations using deep learning
KR20210073566A (en) * 2018-10-25 2021-06-18 로레알 Systems and Methods for Providing Customized Product Recommendations Using Deep Learning
US11521013B2 (en) 2018-10-25 2022-12-06 L'oreal Systems and methods for providing personalized product recommendations using deep learning
KR102616487B1 (en) * 2018-10-25 2023-12-20 로레알 Systems and methods for providing personalized product recommendations using deep learning

Also Published As

Publication number Publication date
US11113511B2 (en) 2021-09-07
JP7020626B2 (en) 2022-02-16
JP2020505989A (en) 2020-02-27
US11748980B2 (en) 2023-09-05
US20210357627A1 (en) 2021-11-18
US20190362134A1 (en) 2019-11-28

Similar Documents

Publication Publication Date Title
WO2018143707A1 (en) Makeup evaluation system and operation method thereof
WO2021132851A1 (en) Electronic device, scalp care system, and control method for same
WO2018088794A2 (en) Method for correcting image by device and device therefor
WO2018016837A1 (en) Method and apparatus for iris recognition
WO2013009020A2 (en) Method and apparatus for generating viewer face-tracing information, recording medium for same, and three-dimensional display apparatus
WO2020246844A1 (en) Device control method, conflict processing method, corresponding apparatus and electronic device
WO2020213750A1 (en) Artificial intelligence device for recognizing object, and method therefor
WO2019216593A1 (en) Method and apparatus for pose processing
WO2016195275A1 (en) Method and device for providing make-up mirror
WO2015137788A1 (en) Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium
WO2016129934A1 (en) Handwriting recognition method and apparatus
WO2017039348A1 (en) Image capturing apparatus and operating method thereof
WO2017111234A1 (en) Method for electronic device to control object and electronic device
WO2016080708A1 (en) Wearable device and method for outputting virtual image
WO2016048102A1 (en) Image display method performed by device including switchable mirror and the device
WO2015133699A1 (en) Object recognition apparatus, and recording medium in which method and computer program therefor are recorded
EP3740936A1 (en) Method and apparatus for pose processing
WO2021006366A1 (en) Artificial intelligence device for adjusting color of display panel, and method therefor
WO2019085495A1 (en) Micro-expression recognition method, apparatus and system, and computer-readable storage medium
EP3440593A1 (en) Method and apparatus for iris recognition
WO2013022226A4 (en) Method and apparatus for generating personal information of client, recording medium thereof, and pos system
EP3198376A1 (en) Image display method performed by device including switchable mirror and the device
WO2019225961A1 (en) Electronic device for outputting response to speech input by using application and operation method thereof
WO2020117006A1 (en) Ai-based face recognition system
WO2019135621A1 (en) Video playback device and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18748560

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019541436

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018748560

Country of ref document: EP

Effective date: 20190902