WO2023239110A1 - Virtual experience device for recommending a customized style to a user - Google Patents

Virtual experience device for recommending a customized style to a user

Info

Publication number
WO2023239110A1
WO2023239110A1 (PCT/KR2023/007513; KR2023007513W)
Authority
WO
WIPO (PCT)
Prior art keywords
skin
information
user
styling
face object
Prior art date
Application number
PCT/KR2023/007513
Other languages
English (en)
Korean (ko)
Inventor
강문태
Original Assignee
주식회사 네일클럽
강문태
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 네일클럽 and 강문태
Publication of WO2023239110A1

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 20/00 - Machine learning
        • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q 30/00 - Commerce
            • G06Q 30/06 - Buying, selling or leasing transactions
          • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
            • G06Q 50/10 - Services
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 - Image analysis
            • G06T 7/90 - Determination of colour characteristics
        • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 - Arrangements for image or video recognition or understanding
            • G06V 10/40 - Extraction of image or video features
              • G06V 10/56 - Extraction of image or video features relating to colour
          • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

Definitions

  • Korean Patent Publication No. 10-2020-0045759 (Personal Color Matching Styling System) discloses a technology that creates a user avatar by analyzing information entered by the user, diagnoses the user's personal color, analyzes keywords entered by the user to derive a personal image for the avatar, and recommends customized styling based on that color.
  • The present invention is a virtual experience device that recommends a customized style to the user. When the user's skin image information is acquired through a first camera located in one area of the body of the virtual experience device, the device identifies the user's skin tone by analyzing the skin image information through a pre-stored first artificial intelligence algorithm and obtains customized color tone information for the identified skin tone. When model data obtained by photographing the user's head through a second camera is acquired, a face object, which is a 3D stereoscopic model, is created from the model data.
  • The virtual experience device for recommending a customized style to a user according to an embodiment of the present invention, implemented in a computing device including one or more processors and one or more memories storing instructions executable by the processors, includes a user skin analysis unit that, when a first camera located in one area of the main body captures the user's facial skin and obtains skin image information reflecting the user's skin condition, analyzes the skin image information through a pre-stored first artificial intelligence algorithm to identify the user's skin tone and acquires customized color tone information corresponding to the identified skin tone;
  • a face object generator that, when the function of the user skin analysis unit is completed and a second camera located in one area of the main body captures the user's head and obtains model data, which is 3D scan data, generates a first face object that is a 3D stereoscopic model of the user's head based on the obtained model data and reflects the identified skin tone in the first face object;
  • a styling information acquisition unit that generates, through a pre-stored second artificial intelligence algorithm, reference face objects based on a plurality of pieces of pre-stored reference shape information, compares them with the second face object to confirm reference shape information whose similarity rate is greater than or equal to a specified value, and obtains styling information corresponding to the confirmed reference shape information; and a style recommendation unit that, when acquisition of the customized color tone information and the styling information is completed, creates a third face object by reflecting a style based on the customized color tone information and the styling information in the second face object, outputs the generated third face object through a display located in one area of the main body, and thereby recommends a customized style to the user.
  • The user skin analysis unit includes a skin image acquisition unit that controls a lighting module located in one area of the first camera to acquire first image information, which is image information about the outer layer of the facial skin, and second image information, which is image information about the basal layer of the skin;
  • a skin type identification unit that overlaps an image based on the first image information with an image based on the second image information and identifies the user's skin type through the overlapped image; and a customized color tone information determination unit that, while the function of the skin type identification unit is in progress, identifies the skin color corresponding to the average of a first skin color based on the first image information and a second skin color based on the second image information as the user's skin tone, and determines at least one piece of color tone information matching the user's skin tone, among a plurality of pieces of pre-stored color tone information, as customized color tone information for the user.
  • The skin types preferably include normal, dry, oily, sensitive, pigmentation, atopic, dermatitis, and sebum types.
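
For illustration only, the averaging-and-matching step described above can be sketched in a few lines of Python. The tone table, its field names, and the nearest-color matching rule are assumptions introduced for this example; the patent leaves the inner workings of the first artificial intelligence algorithm unspecified.

```python
import numpy as np

# Hypothetical pre-stored color tone table: each entry matches a tone family
# (identified here by a mean BGR color) to customized color tone information.
PRESTORED_TONES = [
    {"series": "warm light", "bgr": (150, 180, 220), "palette": ["coral", "peach"]},
    {"series": "cool light", "bgr": (170, 175, 210), "palette": ["rose", "plum"]},
    {"series": "warm deep",  "bgr": (90, 120, 170),  "palette": ["brick", "bronze"]},
]

def mean_skin_color(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average BGR color over the skin pixels selected by a binary mask."""
    return image[mask > 0].reshape(-1, 3).mean(axis=0)

def identify_skin_tone(outer_img, outer_mask, basal_img, basal_mask) -> np.ndarray:
    """Average the first (outer-layer) and second (basal-layer) skin colors,
    as the customized color tone information determination unit is described
    as doing."""
    first_color = mean_skin_color(outer_img, outer_mask)
    second_color = mean_skin_color(basal_img, basal_mask)
    return (first_color + second_color) / 2.0

def match_customized_tones(skin_tone_bgr: np.ndarray, k: int = 1) -> list:
    """Pick the k nearest pre-stored tones; plain Euclidean distance in BGR is
    an illustrative stand-in for whatever matching the algorithm learns."""
    ranked = sorted(
        PRESTORED_TONES,
        key=lambda t: np.linalg.norm(np.asarray(t["bgr"], float) - skin_tone_bgr),
    )
    return ranked[:k]

# Usage with dummy data:
img = np.full((4, 4, 3), 180, np.uint8); m = np.ones((4, 4), np.uint8)
tone = identify_skin_tone(img, m, img, m)
print(match_customized_tones(tone)[0]["series"])
```
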
  • The skin image acquisition unit includes a light quantity control unit that controls the lighting module to increase the amount of light irradiated onto the facial skin so that the first camera can capture the lower (basal) layer of the facial skin; and a second image acquisition unit that, when the first camera photographs the basal layer of the facial skin with the light quantity increased by the light quantity control unit and acquires third image information including both the hair and the basal layer, removes the hair from an image based on the third image information to obtain the second image information including only the basal layer.
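
One plausible way to realize the hair-removal step is sketched below with OpenCV: a black-hat morphological filter highlights thin dark hair strands, and inpainting fills them in from the surrounding skin. This is a standard dermoscopy preprocessing trick, not the patent's disclosed method; the kernel size and threshold are illustrative values.

```python
import cv2
import numpy as np

def remove_hair(third_image: np.ndarray) -> np.ndarray:
    """Remove hair strands from the third image (basal layer + hair).

    A black-hat filter emphasizes thin dark structures (hair) against the
    skin, a threshold turns them into a mask, and inpainting fills the masked
    pixels from nearby skin, leaving second image information with only the
    basal layer. Kernel size and threshold are assumptions for the sketch.
    """
    gray = cv2.cvtColor(third_image, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(third_image, hair_mask, 3, cv2.INPAINT_TELEA)
```
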
  • The face object generator includes an object modeling unit that, when the second camera acquires the model data, starts a modeling process and generates the first face object, a virtual 3D model corresponding to the shape of the user's head, based on the acquired model data;
  • and a second face object creation unit that reflects the user's skin tone identified by the user skin analysis unit in the first face object and then reflects a type object based on the user's skin type, identified from the overlapped image, to complete the second face object.
  • The type object is a virtual object formed in a shape corresponding to the dry, oily, sensitive, pigmentation, atopic, dermatitis, or sebum troubles formed on the user's facial skin, as identified from the overlapped image, and is reflected in the first face object.
  • When the generation of the second face object is completed, the styling information acquisition unit includes a comparative analysis process start unit that generates reference face objects based on each of a plurality of pieces of pre-stored reference shape information through the pre-stored second artificial intelligence algorithm and starts a comparative analysis process between each generated reference face object and the second face object; a similarity rate confirmation unit that, when the comparative analysis process starts, compares each of the reference face objects with the second face object to check whether any reference face object has a similarity rate with the second face object greater than or equal to a specified value; and a styling information identification unit that, if such a reference face object exists, identifies the styling information matched to the reference shape information corresponding to the reference face object whose similarity rate is greater than or equal to the specified value.
  • The plurality of pieces of pre-stored reference shape information is source information used to implement face objects, which are 3D models corresponding to the heads of celebrities or other users, and styling information may be matched to each piece of information.
  • The pre-stored second artificial intelligence algorithm machine-learns the correlation between the plurality of pieces of pre-stored reference shape information and the styling information matched to each piece, and can receive new styling information for each piece of reference shape information from an external database to update the matching relationships for the plurality of pieces of pre-stored reference shape information.
  • The styling information is matched to each piece of the plurality of pre-stored reference shape information and includes a hair styling object and a makeup styling object for changing the hairstyle and makeup style of the second face object.
  • The style recommendation unit includes a styling object reflection unit that, when acquisition of the styling information is completed, reflects the hair styling object and the makeup styling object based on the styling information in the second face object; and a guide information output unit that, when the function of the styling object reflection unit is completed, reflects the color based on the customized color tone information in the makeup styling object to generate the third face object, outputs the third face object through the display to recommend a customized style to the user, and also outputs makeup guide information for guiding a makeup method corresponding to the makeup styling object reflected in the second face object.
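
At the texture level, reflecting the customized color tone in the makeup styling object could look like the following sketch, where the makeup object is reduced to a soft mask over the face texture. The alpha blend and the mask representation are assumptions for the example, not the patent's rendering method.

```python
import numpy as np

def apply_makeup_tone(face_texture: np.ndarray, makeup_mask: np.ndarray,
                      tone_bgr, alpha: float = 0.6) -> np.ndarray:
    """Blend a customized color tone into the regions covered by a makeup
    styling object. `face_texture` is HxWx3 uint8, `makeup_mask` is HxW
    (0-255), and `tone_bgr` comes from the customized color tone information.
    The alpha value and mask form are illustrative choices."""
    tone = np.asarray(tone_bgr, dtype=np.float32)
    weight = (makeup_mask.astype(np.float32) / 255.0)[..., None] * alpha
    blended = face_texture.astype(np.float32) * (1.0 - weight) + tone * weight
    return blended.astype(np.uint8)

# Usage: tint a lip-like region of a dummy 4x4 texture with a coral tone.
texture = np.full((4, 4, 3), 200, np.uint8)
mask = np.zeros((4, 4), np.uint8); mask[2:, 1:3] = 255
styled = apply_makeup_tone(texture, mask, (120, 130, 240))
```
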
  • The virtual experience device of the present invention that recommends a customized style to a user can recommend various customized styles by reflecting, in a face object corresponding to the user's head, suitable styling objects (e.g., a hair styling object and a makeup styling object) based on the shape of the user's head (e.g., eyebrows, eyes, nose, lips, ears, face shape, and head shape), together with customized color tone information matching the user's skin tone.
  • FIG. 1 is a block diagram illustrating a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating the user skin analysis unit of a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • FIG. 4 is a block diagram illustrating the face object creation unit of a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • FIG. 5 is a block diagram illustrating the styling information acquisition unit of a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating an example of the internal configuration of a computing device according to an embodiment of the present invention.
  • The terms first, second, etc. may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another. For example, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component, without departing from the scope of the present invention.
  • The term "and/or" includes any one of a plurality of related stated items or any combination thereof.
  • FIG. 1 is a block diagram illustrating a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • A virtual experience device 100 (hereinafter, the virtual experience device) for recommending a customized style to a user, implemented in a computing device including one or more processors and one or more memories storing instructions executable by the processors, may include a user skin analysis unit 101, a face object creation unit 103, a styling information acquisition unit 105, and a style recommendation unit 107.
  • When the first camera 101a located in one area of the main body photographs the user's facial skin and obtains skin image information reflecting the user's skin condition, the user skin analysis unit 101 analyzes the skin image information through the pre-stored first artificial intelligence algorithm 101b to identify the user's skin tone and obtain customized color tone information corresponding to the identified skin tone.
  • The first camera 101a can photograph not only the user's facial skin but also the skin of other body parts.
  • In order to check the user's skin condition by photographing not only the facial skin but also the skin of other body parts, the virtual experience device 100 may include, in one area, a lighting module as well as a sensor capable of measuring the oil and moisture status of the user's skin.
  • the skin image information is information generated by photographing the user's facial skin and may include at least one of an image and a video.
  • The skin image information may include first image information, which is image information about the outer layer of the user's facial skin, and second image information, which is image information about the basal layer of the user's facial skin.
  • In this case, the user skin analysis unit 101 can identify (or determine) the average value of the skin color based on the first image information and the skin color based on the second image information as the skin tone of the user's facial skin.
  • When the function of the user skin analysis unit 101 is completed and the second camera 103a located in one area of the main body photographs the user's head and obtains model data, which is 3D scan data, the face object creation unit 103 generates a first face object, which is a 3D stereoscopic model, based on the model data. When the face object creation unit 103 completes generating the first face object, the user's skin tone identified by the user skin analysis unit 101 may be reflected in the first face object.
  • The second camera 103a may be configured to photograph (or scan) the user's head. Accordingly, the second camera 103a can scan (or photograph) the user's head and acquire (or create) model data from which a 3D stereoscopic model corresponding to the shape and position of the user's facial features, face shape, and head shape can be generated.
  • the model data may be source information for implementing the user's head as a 3D model in a virtual space.
  • the facial object generator 103 creates a 3D stereoscopic model of the user's head based on the acquired model data.
  • the first face object may refer to a virtual object corresponding to the shape and position of the eyebrows, eyes, nose, ears, face shape, and lips included in the user's head, and the shape of the head (or hair). That is, the first facial object may be a 3D model formed in a shape corresponding to the user's head.
  • For example, the face object generator 103 may reflect a skin color based on a skin code (e.g., #D99164) corresponding to the user's skin tone in the first face object.
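
A minimal sketch of how a skin code such as #D99164 (the example given above) could be decoded and written into a face object's colors; the vertex-color representation is an assumption, since real assets would more likely use a texture map.

```python
def hex_to_rgb(skin_code: str) -> tuple:
    """Convert a skin code such as '#D99164' into an (R, G, B) triple."""
    code = skin_code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def tint_face_object(vertex_colors: list, skin_code: str) -> list:
    """Reflect the skin code in a first face object by writing its color into
    every vertex. `vertex_colors` is assumed to be an Nx3 list of RGB values,
    a simplification of whatever the modeling process actually produces."""
    r, g, b = hex_to_rgb(skin_code)
    return [[r, g, b] for _ in vertex_colors]

print(hex_to_rgb("#D99164"))  # -> (217, 145, 100)
```
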
  • The plurality of pieces of pre-stored reference shape information is source information for implementing face objects of celebrities or other users in a virtual space, and styling information may be matched to each piece of reference shape information.
  • The styling information matched to each piece of reference shape information may include hairstyle and makeup information suitable for the shape of the reference face object based on that reference shape information.
  • The styling information acquisition unit 105 may generate face objects corresponding to the plurality of pieces of pre-stored reference shape information based on the pre-stored second artificial intelligence algorithm, implement them in a virtual space, and compare them with the user's second face object 103b.
  • The styling information acquisition unit 105 can identify, among the face objects corresponding to the pieces of pre-stored reference shape information, a face object whose similarity rate with the user's second face object 103b is greater than or equal to a specified value, and check the reference shape information corresponding to the identified face object.
  • In determining the similarity rate between each reference face object and the user's second face object 103b, the styling information acquisition unit 105 can check the similarity rate based on the shape, location, and angle of the eyebrows and other facial features, as well as on the shape of the face and head.
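
A toy version of such a feature-wise similarity rate is sketched below: each facial feature is reduced to a set of landmarks, compared by location, shape, and angle, and the results are averaged. The 1/(1 + distance) scoring, the equal weighting, and the landmark representation are invented for the example and are not the patent's formula.

```python
import numpy as np

# Hypothetical landmark groups; real 3D face objects would carry many more points.
FEATURES = ["eyebrows", "eyes", "nose", "ears", "lips", "face_shape", "head_shape"]

def feature_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of one feature from its landmark coordinates (Nx3 arrays,
    same N on both sides): location (centroid distance), shape (centered
    spread), and a crude in-plane angle term, each mapped to (0, 1]."""
    loc = 1.0 / (1.0 + np.linalg.norm(a.mean(0) - b.mean(0)))
    shape = 1.0 / (1.0 + np.linalg.norm((a - a.mean(0)) - (b - b.mean(0))))
    ang_a = np.arctan2(*(a[-1] - a[0])[:2][::-1])  # arctan2(dy, dx)
    ang_b = np.arctan2(*(b[-1] - b[0])[:2][::-1])
    angle = 1.0 / (1.0 + abs(ang_a - ang_b))
    return (loc + shape + angle) / 3.0

def similarity_rate(user_obj: dict, ref_obj: dict) -> float:
    """Average the per-feature similarities between the user's second face
    object and one reference face object (both as feature -> landmarks)."""
    return float(np.mean([feature_similarity(user_obj[f], ref_obj[f])
                          for f in FEATURES]))

# Usage with random landmarks (5 points per feature on both objects).
rng = np.random.default_rng(0)
user = {f: rng.normal(size=(5, 3)) for f in FEATURES}
ref = {f: user[f] + 0.05 * rng.normal(size=(5, 3)) for f in FEATURES}
print(round(similarity_rate(user, ref), 3))
```
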
  • The makeup information may include makeup name information, makeup guide information (e.g., a guide video), and makeup product information.
  • the makeup style information may include a makeup styling object.
  • the makeup styling object may be a virtual object for reflecting the same makeup style as that of another user on the second face object 103b.
  • the pre-stored second artificial intelligence algorithm may be an algorithm for checking the similarity rate by comparing a reference face object based on a plurality of pre-stored reference shape information with the user's second facial object 103b.
  • the pre-stored second artificial intelligence algorithm may be an algorithm that analyzes and learns the correlation between the plurality of pre-stored reference shape information and styling information matched to each of the pre-stored plurality of reference shape information.
  • the pre-stored second artificial intelligence algorithm can analyze and learn the correlation through a plurality of pre-stored reference shape information for other users and celebrities.
  • the pre-stored second artificial intelligence algorithm may receive new styling information from an external database and match the new styling information to each of the plurality of pre-stored reference shape information based on learning results.
  • the pre-stored second artificial intelligence algorithm may include at least one of a supervised learning algorithm, a semi-supervised learning algorithm, and an unsupervised learning algorithm, but is not limited thereto.
  • In other words, the pre-stored second artificial intelligence algorithm may be an algorithm that learns the relationship between the plurality of pieces of pre-stored reference shape information and the styling information, and, based on the learning results, matches newly received styling information to the pieces of pre-stored reference shape information.
  • The pre-stored artificial intelligence algorithm may include an artificial neural network (ANN) model, a convolutional neural network (CNN) model, or a recurrent neural network (RNN) model, and may also include algorithms of various other models.
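
As one concrete instance of the CNN option mentioned above, a compact PyTorch classifier over the overlapped skin image could output the eight skin types; the architecture, input size, and class ordering are invented for the example and are not disclosed in the patent.

```python
import torch
import torch.nn as nn

SKIN_TYPES = ["normal", "dry", "oily", "sensitive",
              "pigmentation", "atopic", "dermatitis", "sebum"]

class SkinTypeCNN(nn.Module):
    """Toy CNN: overlapped skin image (3x128x128) -> logits over 8 skin types."""
    def __init__(self, num_classes: int = len(SKIN_TYPES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: predict a skin type for one (random) overlapped image.
model = SkinTypeCNN()
logits = model(torch.randn(1, 3, 128, 128))
print(SKIN_TYPES[logits.argmax(1).item()])
```
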
  • When acquisition of the customized color tone information and the styling information is completed, the style recommendation unit 107 creates a third face object 107a by reflecting a style based on the customized color tone information and the styling information in the second face object 103b. The created third face object 107a can be output through a display located in one area of the main body to recommend a customized style to the user. Various third face objects 107a can be generated by reflecting various styles in the second face object 103b.
  • FIG. 2 is a block diagram illustrating the user skin analysis unit of a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • The user skin analysis unit 200 uses a first camera 201a (e.g., the first camera 101a in FIG. 1) located in one area of the main body to photograph the user's facial skin and obtain skin image information. The skin image information can then be analyzed through the pre-stored first artificial intelligence algorithm to identify the user's skin tone and obtain customized color tone information corresponding to the identified skin tone.
  • As a detailed configuration for performing the above-described functions, the user skin analysis unit 200 may include a skin image acquisition unit 201, an analysis process start unit 203, a skin type identification unit 205, and a customized color tone information determination unit 207.
  • The skin image acquisition unit 201 controls the lighting module located in one area of the first camera 201a to acquire first image information 201b, which is image information about the outer layer of the facial skin, and second image information 201c, which is image information about the basal layer of the skin.
  • The first camera 201a may have a lighting module located in one area. Accordingly, the capturing area (e.g., skin layer) of the user's facial skin captured by the first camera 201a may change depending on the amount of light emitted from the lighting module.
  • When the acquisition of the first image information 201b and the second image information 201c is completed, the analysis process start unit 203 may start a skin condition analysis process by analyzing the first image information 201b and the second image information 201c through the pre-stored first artificial intelligence algorithm.
  • the skin condition analysis process may be a process performed to identify the skin tone of the user's facial skin and identify the user's skin type 203a.
  • The user's skin type 203a may include normal, dry, oily, sensitive, pigmentation, atopic, dermatitis, and sebum types. That is, the skin condition analysis process may be a process of identifying the user's skin type 203a and, at the same time, the user's skin tone, by analyzing the first image information 201b and the second image information 201c through the pre-stored first artificial intelligence algorithm.
  • The skin type identification unit 205 overlaps an image based on the first image information 201b with an image based on the second image information 201c, and can identify the user's skin type 203a through the overlapped image.
  • the skin type identification unit 205 may overlap an image based on the first image information of user “A” and an image based on the second image information, and analyze the overlapped image.
  • the skin type identification unit 205 may confirm that an image object corresponding to pigmentation exists in the overlapping image by analyzing the overlapping image using the pre-stored first artificial intelligence algorithm. Accordingly, the skin type identification unit 205 can confirm that the user's skin type 203a is a pigmentation type.
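
The overlap-then-inspect step can be illustrated with OpenCV as below: the two layer images are blended, and pigmentation-like regions are flagged as connected dark patches. The equal blend weights and the darkness threshold are assumptions for the sketch, not values from the patent.

```python
import cv2
import numpy as np

def overlap_images(first_img: np.ndarray, second_img: np.ndarray) -> np.ndarray:
    """Overlap the outer-layer and basal-layer images, as the skin type
    identification unit does; equal weighting is an illustrative choice."""
    return cv2.addWeighted(first_img, 0.5, second_img, 0.5, 0)

def find_pigmentation(overlapped: np.ndarray, min_area: int = 40) -> list:
    """Flag pigmentation-like regions: connected patches clearly darker than
    the average skin. The (mean - 30) threshold is invented for the example."""
    gray = cv2.cvtColor(overlapped, cv2.COLOR_BGR2GRAY)
    darker = cv2.threshold(gray, gray.mean() - 30, 255,
                           cv2.THRESH_BINARY_INV)[1]
    n, labels, stats, _ = cv2.connectedComponentsWithStats(darker)
    # stats[i, cv2.CC_STAT_AREA] is the pixel area of component i (0 = background).
    return [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Usage: a dark 8x8 patch in the basal layer shows up as one flagged region.
img1 = np.full((32, 32, 3), 170, np.uint8)
img2 = img1.copy(); img2[8:16, 8:16] = 60
print(find_pigmentation(overlap_images(img1, img2)))
```
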
  • While the function of the skin type identification unit 205 is being performed, the customized color tone information determination unit 207 may identify the skin color corresponding to the average value of the first skin color based on the first image information 201b and the second skin color based on the second image information 201c as the user's skin tone.
  • For example, the customized color tone information determination unit 207 may obtain the average value of the first skin color and the second skin color and extract skin color information corresponding to the obtained average value from the previously stored skin color data table 205a.
  • When the extraction of the skin color information is completed, the customized color tone information determination unit 207 may identify the skin color based on the extracted skin color information as the user's skin tone.
  • Through the first artificial intelligence algorithm 209, the customized color tone information determination unit 207 can determine at least one piece of color tone information matching the identified user's skin tone 205a, among the plurality of pieces of pre-stored color tone information 207a, as customized color tone information tailored to the user's skin tone.
  • the customized color tone information determination unit 207 may identify the series of the identified user's skin tone 205a through a pre-stored first artificial intelligence algorithm 209.
  • The customized color tone information determination unit 207 can determine, as the customized color tone information, at least one piece of color tone information corresponding to the series and detailed series of the user's skin tone among the plurality of pieces of pre-stored color tone information 207a. Skin color information according to the series and detailed series may be matched to each piece of color tone information.
  • FIG. 3 is a block diagram illustrating the skin image acquisition unit of a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • A virtual experience device that recommends a customized style to a user may include a skin image acquisition unit 300 (e.g., the skin image acquisition unit 201 of FIG. 2).
  • The skin image acquisition unit 300 controls the lighting module located in one area of the first camera to acquire first image information, which is image information about the outer layer of the facial skin, and second image information, which is image information about the basal layer of the skin.
  • the skin image acquisition unit 300 may include a light quantity control unit 301 and a second image acquisition unit 303 as detailed components for performing the above-described functions.
  • For example, the light quantity control unit 301 may control the lighting module located in one area of the first camera to change the amount of light irradiated onto the facial skin so that sebum can be detected. Accordingly, the first camera may acquire image information 305c showing the sebum formed on the user's facial skin. This image information on sebum formation may be used to distinguish the user's skin type.
  • the light quantity control unit 301 may control the lighting module to adjust the amount of light irradiated to the user's skin, thereby allowing the first camera to acquire data capturing the user's skin.
  • The virtual experience device (e.g., the virtual experience device 100 in FIG. 1) can identify the tone of the user's skin and the user's skin type through the data acquired by the first camera.
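
The light quantity control loop might look like the following sketch, built around a hypothetical LightingModule and a stub camera, since the patent does not disclose a hardware API; the 0.3/0.9 light amounts are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class LightingModule:
    """Hypothetical stand-in for the lighting module located in one area of
    the first camera; only a normalized light amount (0..1) is modeled."""
    light_amount: float = 0.3

    def set_amount(self, amount: float) -> None:
        self.light_amount = max(0.0, min(1.0, amount))

class FakeCamera:
    """Trivial stub so the sketch runs without hardware."""
    def __init__(self, lighting: LightingModule):
        self.lighting = lighting

    def capture(self) -> str:
        return f"frame@light={self.lighting.light_amount:.1f}"

def capture_skin_layers(camera: FakeCamera, lighting: LightingModule):
    """Capture the outer layer at a low light amount, then raise the amount
    so the camera reaches the basal layer, mirroring the light quantity
    control unit described above."""
    lighting.set_amount(0.3)
    first_image = camera.capture()   # outer-layer image information
    lighting.set_amount(0.9)
    third_image = camera.capture()   # basal layer + hair (third image info)
    return first_image, third_image

lighting = LightingModule()
print(capture_skin_layers(FakeCamera(lighting), lighting))
```
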
  • When the first camera photographs the basal layer of the facial skin while the light quantity is increased by the light quantity control unit 301 and acquires third image information including both the hair and the basal layer of the facial skin, the second image acquisition unit 303 can remove the hair from an image based on the third image information to obtain second image information including only the basal layer.
  • the second image acquisition unit 303 may remove an image object corresponding to hair from an image based on the third image information.
  • FIG. 4 is a block diagram illustrating the face object creation unit of a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • A virtual experience device that recommends a customized style to a user may include a face object generator 400 (e.g., the face object generator 103 in FIG. 1).
  • When the function of the user skin analysis unit (e.g., the user skin analysis unit 101 in FIG. 1) is completed, the face object creation unit 400 uses the second camera located in one area of the main body to obtain model data, which is 3D scan data, generates a first face object, which is a 3D stereoscopic model of the user's head, and reflects the identified skin tone in the first face object.
  • the face object creation unit 400 may include an object modeling unit 401 and a second face object creation unit 403 as detailed components for performing the above-described functions.
  • When the second camera acquires the model data, the object modeling unit 401 starts a modeling process and can create the first face object 401a, which is a virtual 3D model corresponding to the shape of the user's head, based on the acquired model data.
  • the object modeling unit 401 may implement the first facial object 401a corresponding to the shape of the user's head in a virtual space based on the model data.
  • the first facial object 401a may be implemented in a form that corresponds to the shape and position of the user's facial features, face shape, and head shape.
  • When the creation of the first face object 401a is completed, the second face object creation unit 403 can reflect the user's skin tone identified by the user skin analysis unit in the first face object 401a. Accordingly, a skin tone of the same color as the user's skin color may be reflected in the first face object 401a.
  • The user's skin type is confirmed by the user skin analysis unit analyzing the overlapped image through the pre-stored first artificial intelligence algorithm, and may be classified into normal, dry, oily, sensitive, pigmentation, atopic, dermatitis, and sebum types.
  • the second face object generator 403 may reflect a type object based on the user's skin type to the first face object 401a.
  • The type object is a virtual object formed in a shape corresponding to the dry, oily, sensitive, pigmentation, atopic, dermatitis, or sebum troubles formed on the user's facial skin, as identified from the overlapped image, and is reflected in the first face object.
  • For example, if sebum based on the sebum type is present in the overlapped image, the second face object generator 403 can generate a type object corresponding to the location and shape of the sebum and reflect it in the first face object 401a.
  • Through this, the second face object generator 403 can complete the creation of the second face object 403a by reflecting the type object based on the skin type in the first face object 401a.
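
A simple data-level sketch of this step: troubles found in the overlapped image become tagged regions that are attached to the skin-toned first face object to complete the second face object. The dictionary representation and bounding-box form are assumptions standing in for the patent's virtual 3D objects.

```python
import numpy as np

TROUBLE_KINDS = ["dry", "oily", "sensitive", "pigmentation",
                 "atopic", "dermatitis", "sebum"]

def make_type_objects(trouble_mask: np.ndarray, kind: str) -> list:
    """Turn a binary mask of one trouble kind, found in the overlapped image,
    into type objects: bounding boxes tagged with the kind (an illustrative
    simplification of the patent's virtual objects)."""
    assert kind in TROUBLE_KINDS
    ys, xs = np.nonzero(trouble_mask)
    if xs.size == 0:
        return []
    return [{"kind": kind,
             "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))}]

def reflect_type_objects(first_face_object: dict, type_objects: list) -> dict:
    """Complete the second face object by attaching the type objects to the
    (already skin-toned) first face object."""
    second = dict(first_face_object)
    second["type_objects"] = (list(first_face_object.get("type_objects", []))
                              + type_objects)
    return second

# Usage: one sebum patch becomes a type object on the face object.
mask = np.zeros((8, 8), np.uint8); mask[3:5, 4:6] = 1
face = {"skin_code": "#D99164"}
second_face = reflect_type_objects(face, make_type_objects(mask, "sebum"))
print(second_face["type_objects"])
```
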
  • FIG. 5 is a block diagram illustrating the styling information acquisition unit of a virtual experience device that recommends a customized style to a user according to an embodiment of the present invention.
  • A virtual experience device that recommends a customized style to a user may include a styling information acquisition unit 500 (e.g., the styling information acquisition unit 105 of FIG. 1).
  • As a detailed configuration for performing the above-described functions, the styling information acquisition unit 500 may include a comparative analysis process start unit 501, a similarity rate check unit 503, and a styling information identification unit 505.
  • When the creation of the second face object 503a is completed, the comparative analysis process start unit 501 generates reference face objects based on the plurality of pieces of pre-stored reference shape information 503b through the second artificial intelligence algorithm 507, and may start a comparative analysis process between each generated reference face object and the second face object 503a.
  • The comparative analysis process is a process for comparing the generated second face object 503a with the reference face objects based on each piece of the pre-stored reference shape information 503b. More precisely, it may be a process of checking how similar the shape and position of the eyebrows, eyes, nose, ears, face shape, and lips included in the user's head based on the second face object 503a, and the shape of the head (or hair), are to those of each reference face object.
  • When the comparative analysis process starts, the similarity rate checker 503 compares each of the reference face objects with the second face object 503a and can check whether there is a reference face object whose similarity rate with the second face object is greater than or equal to a specified value.
  • The plurality of pieces of pre-stored reference shape information 503b is source information used to implement face objects, which are 3D stereoscopic models corresponding to the heads of celebrities or other users, and styling information may be matched to each piece of information.
  • the plurality of previously stored reference shape information 503b may include skin color based on the skin tone of the celebrity or other user.
  • The similarity rate check unit 503 compares each of the reference face objects with the second face object 503a and can check the similarity rate based on the shape and position of the eyebrows, eyes, nose, ears, face shape, and lips, and on the shape of the head.
  • When checking the similarity rate, the similarity rate confirmation unit 503 can compare not only the shape and position but also the formation angles of the eyebrows, eyes, nose, ears, face shape, and lips.
  • The similarity rate check unit 503 implements the reference face objects and the second face object 503a in a virtual space and arranges or overlaps them to check the similarity rate for the shape and location of the eyebrows, eyes, nose, ears, face shape, and lips, and for the shape of the head. However, the similarity rate check unit 503 may select as comparison targets only those reference face objects having the same series (or detailed series) as the skin tone series (or detailed series) reflected in the second face object 503a.
  • The specified value may be set individually for each of the shape and position of the eyebrows, eyes, nose, ears, face shape, and lips, and the shape of the head, compared between each reference face object and the second face object 503a, or may be set as the average of the specified values for these features.
  • If there is a reference face object among the reference face objects whose similarity rate with the second face object 503a is greater than or equal to the specified value, the styling information identification unit 505 can identify the styling information matched to the reference shape information corresponding to that reference face object.
  • The pre-stored second artificial intelligence algorithm 507 machine-learns the correlation between the plurality of pieces of pre-stored reference shape information 503b and the styling information, and can update the matching relationships for the pieces of pre-stored reference shape information by receiving new styling information for each piece from an external database.
  • The styling information may already be matched to the plurality of pieces of pre-stored reference shape information 503b, or may be information to be newly matched.
  • The makeup guide information 603a is information about makeup methods for the user to perform makeup corresponding to the makeup styling object, and may include the order of the makeup steps, the products used in each step, and precautions when performing the makeup (e.g., prohibiting use of a product for certain skin diseases, or applying darker shades to certain facial areas).
  • FIG. 7 illustrates an example of the internal configuration of a computing device according to an embodiment of the present invention.
  • Descriptions of embodiments that overlap with the description of FIGS. 1 to 6 above will be omitted.
  • As shown in FIG. 7, the computing device 10000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600. In this case, the computing device 10000 may correspond to a user terminal (A) connected to a tactile interface device or to the computing device (B) described above.
  • The memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory.
  • the memory 11200 may include software modules, instruction sets, or various other data necessary for the operation of the computing device 10000.
  • The peripheral interface 11300 may couple input and/or output peripheral devices of the computing device 10000 to the processor 11100 and the memory 11200.
  • The processor 11100 may execute a software module or an instruction set stored in the memory 11200 to perform various functions for the computing device 10000 and to process data.
  • the input/output subsystem 11400 can couple various input/output peripheral devices to the peripheral interface 11300.
  • For example, the input/output subsystem 11400 may include a controller for coupling peripheral devices such as a monitor, keyboard, mouse, or printer, or, if necessary, a touch screen or sensor, to the peripheral interface 11300.
  • input/output peripheral devices may be coupled to the peripheral interface 11300 without going through the input/output subsystem 11400.
  • the communication circuit 11600 may enable communication with another computing device using at least one external port.
  • the communication circuit 11600 may include an RF circuit to transmit and receive RF signals, also known as electromagnetic signals, to enable communication with other computing devices.
  • FIG. 7 shows only an example of the computing device 10000; the computing device 10000 may omit some of the components shown in FIG. 7, may further include additional components not shown in FIG. 7, or may have a configuration or arrangement combining two or more components.
  • For example, a computing device for a communication terminal in a mobile environment may further include a touch screen or a sensor in addition to the components shown in FIG. 7, and the communication circuit 11600 may include circuits for RF communication using various communication methods (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, etc.).
  • Components that may be included in the computing device 10000 may be implemented as hardware, software, or a combination of both hardware and software, including one or more signal processing or application-specific integrated circuits.
  • Methods according to embodiments of the present invention may be implemented in the form of program instructions that can be executed through various computing devices and recorded on a computer-readable medium.
  • the program according to this embodiment may be composed of a PC-based program or a mobile terminal-specific application.
  • the application to which the present invention is applied can be installed on a user terminal through a file provided by a file distribution system.
  • the file distribution system may include a file transmission unit (not shown) that transmits the file according to a request from the user terminal.
  • Although a single processing device may be described as being used, those skilled in the art will understand that a processing device may include multiple processing elements and/or multiple types of processing elements.
  • a processing device may include a plurality of processors or one processor and one controller. Additionally, other processing configurations, such as parallel processors, are possible.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc., singly or in combination.
  • Program instructions recorded on the medium may be specially designed and configured for the embodiment or may be known and available to those skilled in the art of computer software.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; and magneto-optical media such as floptical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Development Economics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a virtual experience device for recommending a customized style to a user and, more specifically, to a technology that provides a style customized to the user's ears, eyes, mouth, and nose, and to the user's face shape, head shape, and skin tone. When the user's skin image information is obtained through a first camera located in one area of the main body of the virtual experience device, the user's skin tone is identified and customized color tone information for the identified skin tone is obtained through a pre-stored first artificial intelligence algorithm; and when model data obtained by photographing the user's head through a second camera is acquired, a face object, which is a 3D stereoscopic model, is generated from the model data, the identified skin tone is reflected in the generated face object, and the face object is analyzed through a pre-stored second artificial intelligence algorithm.
PCT/KR2023/007513 2022-06-10 2023-06-01 Virtual experience device for recommending a customized style to a user WO2023239110A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220070722A KR102554058B1 (ko) 2022-06-10 2022-06-10 Virtual experience device for recommending a customized style to a user
KR10-2022-0070722 2022-06-10

Publications (1)

Publication Number Publication Date
WO2023239110A1 (fr) 2023-12-14

Family

ID=87160050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/007513 WO2023239110A1 (fr) 2022-06-10 2023-06-01 Virtual experience device for recommending a customized style to a user

Country Status (2)

Country Link
KR (1) KR102554058B1 (fr)
WO (1) WO2023239110A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117596741B (zh) * 2023-12-08 2024-05-14 东莞莱姆森科技建材有限公司 Intelligent mirror control method and system for automatically adjusting light

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090106040A * 2008-04-04 2009-10-08 세종대학교산학협력단 Virtual 3D face makeup system and method based on a multi-sensory interface
KR101987189B1 * 2018-12-21 2019-06-10 주식회사 트위니 Makeup suggestion kiosk
KR102185638B1 * 2019-10-31 2020-12-02 김다솜 Comprehensive fashion style coordination suggestion system using artificial intelligence and operation method thereof
KR102316723B1 * 2021-02-08 2021-10-25 주식회사 더대박컴퍼니 Body-customized coordination system using artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONG HYUN ET AL.: "Quantification of Melanin Density at Epidermal Basal Layer by Using Confocal Scanning Laser Microscope (CSLM)", JOURNAL OF THE SOCIETY OF COSMETIC SCIENTISTS OF KOREA, vol. 40, no. 3, 1 September 2014 (2014-09-01), pages 259 - 268, XP009550934, ISSN: 1226-2587, Retrieved from the Internet <URL:https://koreascience.kr/article/JAKO201431454589001.pdf> *

Also Published As

Publication number Publication date
KR102554058B1 (ko) 2023-07-11

Similar Documents

Publication Publication Date Title
WO2023239110A1 (fr) Virtual experience device for recommending a customized style to a user
WO2020180134A1 (fr) Image correction system and image correction method thereof
WO2016159523A1 (fr) Biometric information acquisition method and device therefor
WO2020032559A2 (fr) Disease diagnosis system and method using a neural network
WO2020141907A1 (fr) Image generation apparatus for generating an image based on a keyword, and image generation method
WO2020179995A1 (fr) Electronic device and control method therefor
WO2019235828A1 (fr) Two-face disease diagnosis system and method therefor
WO2020032561A2 (fr) Disease diagnosis system and method using multiple color models and a neural network
WO2020045848A1 (fr) System and method for diagnosing a disease using a neural network performing segmentation
WO2021153858A1 (fr) Identification assistance device using image data of atypical skin diseases
WO2021045367A1 (fr) Method and computer program for determining a psychological state through a drawing process of the counselee
WO2023018285A1 (fr) Artificial intelligence virtual makeup method and device using multi-angle image recognition
WO2022045746A1 (fr) Computing apparatus and method for authenticating a pattern code including facial feature information
WO2021010671A2 (fr) Disease diagnosis system and method for performing segmentation using a neural network and a non-local block
WO2021145511A1 (fr) Cooking apparatus using artificial intelligence and method for operating same
WO2022087706A1 (fr) Method for detecting and segmenting the lip region
WO2020032560A2 (fr) System and method for generating diagnostic results
WO2021225226A1 (fr) Alzheimer's disease diagnosis device and method
WO2024039058A1 (fr) Skin diagnosis apparatus, and skin diagnosis system and method including the same
WO2024014853A1 (fr) Method and device for detecting facial wrinkles using a deep-learning-based wrinkle detection model trained with semi-automatic labeling
WO2021261688A1 (fr) Learning apparatus and method for creating an emotion expression video, and apparatus and method for creating an emotion expression video
WO2022019391A1 (fr) Device and method for training a style analysis model based on data augmentation
WO2020122513A1 (fr) Two-dimensional image processing method and device for executing the method
WO2019088338A1 (fr) Electronic device and control method therefor
WO2024111728A1 (fr) User emotion interaction method and system for extended reality based on non-verbal elements

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23820046

Country of ref document: EP

Kind code of ref document: A1